Research Interests

Ubiquitous Computing

Intelligent Signal Processing, AI-driven Sensing, Human-Computer Interaction, Wearables, etc.

One of my interests is helping computers recognize, and even understand, human behavior. This requires not only novel sensing systems and multi-sensor deployments, but also the design of intelligent algorithms drawing on signal processing, pattern recognition, machine learning, and related techniques. My other interest is combining such sensing technologies with novel interaction applications, such as VR, AR, human-robot interaction (HRI), and health monitoring.

Mobile and Embedded Systems

Mobile Computing, Wireless Sensor Networks, Internet of Things (IoT), etc.

I have two goals in this field. The first is to optimize system efficiency, including low-power design, computational efficiency, robustness, and real-time response. The second is to expand the applications of mobile systems into areas such as health, education, personal security, and the IoT.

Research Experience

Underwater Messaging Using Mobile Devices

Accepted to SIGCOMM'22

Advisor: Prof. Shyam, University of Washington

We present the first software-only, acoustic-based system that enables underwater messaging on commodity mobile devices. We designed a communication system that adapts in real time to variations in frequency response and SNR across mobile devices and environments, as well as to changes in multipath caused by mobility.
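As a toy illustration of this kind of adaptation (not the system's actual algorithm), the sketch below picks the contiguous block of subcarriers with the highest mean SNR from a probe measurement; the function name `best_band` and the sample SNR values are hypothetical:

```python
import numpy as np

def best_band(snr_db, k):
    """Return (start, stop) indices of the k-wide window with the highest mean SNR."""
    # Sliding-window sums via a cumulative sum: sums[i] = snr_db[i:i+k].sum()
    c = np.concatenate(([0.0], np.cumsum(snr_db)))
    sums = c[k:] - c[:-k]
    start = int(np.argmax(sums))
    return start, start + k

# Hypothetical per-subcarrier SNR estimates from a probe exchange
snr = np.array([3, 5, 9, 12, 11, 4, 2, 8], dtype=float)
band = best_band(snr, 3)   # → (2, 5): the window covering SNRs 9, 12, 11
```

A real system would re-probe periodically, since the channel changes with device, depth, and motion.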

NeckFace: Continuously Tracking Full Facial Expressions by Deep Learning the Infrared Images of the Chin and Face from Neck-mounted Wearables

Published in IMWUT'21

Advised by Prof. Cheng Zhang, Prof. Francois Guimbretiere, Cornell University

We present the first neck-mounted wearable system that can continuously track full facial expressions and 3D head rotations. We deployed a wearable infrared camera with a customized data processing and deep learning pipeline, which achieves high robustness even against complex backgrounds or in motion scenarios such as walking.

Video here

HybridTrak: Adding Full-Body Tracking to VR Using an Off-the-Shelf Webcam

Published in CHI'22

Advisors: Prof. James Landay and Prof. Monica Lam, HCI Group, Stanford University

We proposed HybridTrak, a novel body tracking system that provides a real-time, precise, and easy-to-set-up full-body tracking solution to almost any VR user with a single off-the-shelf webcam. To achieve absolute and precise full-body tracking in VR, we fused data from the VR controllers and a single webcam using least squares fitting (LSF) and an end-to-end neural network (NN).
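A minimal sketch of the least-squares fusion idea, under the simplifying assumption that webcam pose estimates are only known up to an unknown scale and translation while VR controller positions are absolute (the function `fit_scale_translation` and the numbers below are illustrative, not the paper's implementation):

```python
import numpy as np

def fit_scale_translation(webcam_pts, controller_pts):
    """Least-squares fit of scale s and translation t (3-vector) such that
    s * webcam_pts[i] + t ≈ controller_pts[i] for all i."""
    n = webcam_pts.shape[0]
    A = np.zeros((3 * n, 4))
    A[:, 0] = webcam_pts.reshape(-1)                      # coefficient of s
    A[np.arange(3 * n), 1 + np.arange(3 * n) % 3] = 1.0   # coefficients of t
    b = controller_pts.reshape(-1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[0], sol[1:]

# Synthetic check: points related by s = 2 and t = (0.1, -0.2, 0.3)
pts = np.random.rand(50, 3)
s, t = fit_scale_translation(pts, 2.0 * pts + np.array([0.1, -0.2, 0.3]))
```

Once (s, t) are known, every webcam-estimated joint can be mapped into the headset's absolute coordinate frame.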

VibroSense: Recognizing Home Activities by Deep Learning Subtle Vibrations on an Interior Surface of a House from a Single Point Using Laser Doppler Vibrometry

Published in IMWUT'20

Advisor: Prof. Cheng Zhang, SciFi Lab, Cornell University

VibroSense: We developed an indoor activity sensing system using a laser Doppler vibrometer and deep learning. VibroSense demonstrates that the subtle structural vibrations captured from a single point on a wall can be used to recognize indoor activities throughout an entire house with 96.6% accuracy.
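To give a flavor of the signal processing involved (a generic sketch, not VibroSense's actual pipeline), the snippet below turns a raw vibration trace into log-spectrogram frames, the kind of time-frequency representation a deep classifier typically consumes; the function name and all parameters are illustrative:

```python
import numpy as np

def log_spectrogram(signal, frame=256, hop=128):
    """Windowed FFT magnitudes, log-compressed: shape (num_frames, frame//2 + 1)."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(spec)

fs = 8000                                 # sampling rate, Hz
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 440 * t)         # a synthetic 440 Hz "vibration"
S = log_spectrogram(sig)                  # energy concentrates near bin 440 / (fs / 256) ≈ 14
```

Each row of `S` would then be fed (typically stacked with its neighbors) into a CNN or similar model for activity classification.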

C-Face: Continuously Reconstructing Facial Expressions by Deep Learning Contours of the Face with Ear-mounted Miniature Cameras

Published in UIST'20

Advised by Prof. Cheng Zhang, Prof. Francois Guimbretiere, Cornell University

C-Face: We developed the first wearable sensing device with minimally invasive, common form factors that can continuously reconstruct full facial expressions. Specifically, we implemented convolutional neural networks that continuously reconstruct facial movements by learning the deformation of the facial contours captured by the ear-mounted cameras.

Video here

VLID: Visible Light Backscatter Communication System for Battery-free Internet-of-Things

Accepted by IEEE/ACM Transactions on Networking

Advisor: Prof. Chenren Xu, Peking University

VLID: We design, implement, and evaluate VLID, a practical visible light backscatter communication system featuring a sub-mW retroreflective uplink and a near-zero-power downlink, enabling truly battery-free IoT applications.

Indoor Air Quality Monitoring System with Cooperative Robots Using Reinforcement Learning

Advisor: Prof. Kaigui Bian, Center for Network, Peking University

We designed a fine-grained indoor air quality (IAQ) monitoring system that uses mobile robots to cover large indoor environments. We took a data-driven approach, training a reinforcement learning model to optimize the cooperative sensing strategy for the group of robots by minimizing the total detection and estimation error.

Video here
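The coverage idea behind such a strategy can be sketched with tabular Q-learning on a toy problem (my own illustration, not the project's actual model): a single robot on a one-dimensional corridor of cells, where visiting an unmeasured cell removes its uncertainty and the reward is information gained minus a small movement cost, so the learned policy approximates minimizing total estimation error.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                        # number of corridor cells
ACTIONS = (-1, +1)           # move left / move right
alpha, gamma, eps = 0.5, 0.9, 0.1

# State = (robot position, bitmask of already-measured cells)
Q = np.zeros((N, 2 ** N, len(ACTIONS)))

def step(pos, mask, a):
    new_pos = min(max(pos + ACTIONS[a], 0), N - 1)
    bit = 1 << new_pos
    reward = (1.0 if not mask & bit else 0.0) - 0.05  # info gain minus move cost
    return new_pos, mask | bit, reward

for _ in range(2000):                      # epsilon-greedy Q-learning episodes
    pos, mask = 0, 1                        # start at cell 0, already measured
    for _ in range(3 * N):
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[pos, mask]))
        npos, nmask, r = step(pos, mask, a)
        Q[pos, mask, a] += alpha * (r + gamma * Q[npos, nmask].max() - Q[pos, mask, a])
        pos, mask = npos, nmask
        if mask == 2 ** N - 1:              # every cell measured
            break

# Greedy rollout: the trained policy sweeps the corridor, covering all cells
pos, mask = 0, 1
for _ in range(3 * N):
    pos, mask, _ = step(pos, mask, int(np.argmax(Q[pos, mask])))
    if mask == 2 ** N - 1:
        break
```

A multi-robot version would extend the state with every robot's position and reward joint, non-overlapping coverage rather than a single agent's sweep.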