Researchers from Cornell University and KAIST have developed a system that turns ordinary smartwatches into hand-tracking devices using AI-powered sonar.
The technology, called WatchHand, uses the built-in speaker and microphone of off-the-shelf smartwatches to track finger and wrist movements in real time.
The system works by emitting inaudible sound waves from the smartwatch. These waves bounce off the user’s hand and return to the microphone, creating an echo profile.
A machine learning algorithm processes this data directly on the device to estimate hand pose in three dimensions.
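The article does not publish the authors' implementation, but the core idea of active acoustic sensing can be sketched as follows. This is a minimal illustration, not WatchHand's actual pipeline: the sample rate, chirp band, and signal shapes are assumptions, and the real system feeds the resulting echo profiles into an on-device neural network rather than simply locating a peak.

```python
import numpy as np

FS = 48_000              # assumed speaker/mic sample rate
F0, F1 = 18_000, 21_000  # assumed inaudible chirp band, Hz
CHIRP_MS = 10            # assumed chirp duration

def make_chirp(fs=FS, dur_ms=CHIRP_MS, f0=F0, f1=F1):
    """Linear frequency-modulated chirp above the range of human hearing."""
    dur = dur_ms / 1000
    t = np.arange(int(fs * dur)) / fs
    # Instantaneous phase of a linear chirp sweeping f0 -> f1.
    phase = 2 * np.pi * (f0 * t + (f1 - f0) / (2 * dur) * t ** 2)
    return np.sin(phase)

def echo_profile(received, chirp):
    """Cross-correlate the mic signal with the emitted chirp.

    Peaks in the profile mark echo arrival times, which correspond to
    distances of reflecting surfaces (e.g. fingers) from the watch.
    """
    return np.abs(np.correlate(received, chirp, mode="valid"))

# Simulate one transmit/receive cycle with a single hand echo.
chirp = make_chirp()
delay = 60  # samples; ~0.21 m one-way at 343 m/s
received = np.zeros(len(chirp) + 200)
received[delay:delay + len(chirp)] += 0.3 * chirp  # attenuated echo
profile = echo_profile(received, chirp)
print(int(np.argmax(profile)))  # → 60, the simulated echo delay
```

A pose-estimation model would consume a stream of such profiles over time, learning to map changes in the echo pattern to 3D hand joint positions.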
Unlike existing wearable hand-tracking systems that rely on cameras or bulky external sensors, WatchHand requires no additional hardware. This makes it significantly more practical for everyday use and scalable across millions of existing devices.
The researchers say the goal is to make the human hand itself an input device for interacting with computers and other digital systems, reducing reliance on keyboards, mice, and touchscreens.
Turning hands into input
“In the future, with this kind of hand-tracking technology, we might be able to track our typing with just our smartwatch,” said Chi-Jung Lee, a doctoral student at Cornell and co-lead author of the study. “Our hands can act as an input device with computers.”
The system was tested on 40 participants across four separate studies, collecting around 36 hours of gesture data. It was evaluated across multiple smartwatch models, different hand orientations, and noisy environments.
The results showed that WatchHand could reliably track finger movements and wrist rotations under varied conditions.
The technology could enable a range of applications, including gesture-based control for computers, augmented and virtual reality systems, and assistive tools for users with limited mobility or speech.
“WatchHand substantially lowers the barriers to hand-pose tracking,” said Jiwan Kim, a doctoral student at KAIST and co-lead author. “If any device has a single speaker and microphone, our approach is applicable.”
Software unlocks new sensing
The system processes all data locally on the smartwatch, addressing privacy concerns associated with cloud-based tracking systems.
This also reduces latency, enabling real-time interaction without requiring external computation.
However, the technology still has limitations. It currently works only on Android-based smartwatches and struggles to maintain accuracy when the user is in motion, such as while walking. The researchers are working to improve performance under these conditions.
“WatchHand reflects my lab’s broader vision of transforming everyday wearables into intelligent behavior-sensing platforms,” said Cheng Zhang, associate professor at Cornell.
The research builds on a broader shift toward acoustic sensing in wearable devices, which offers advantages in energy efficiency and accuracy compared to vision-based systems.
“With just a software update, we can potentially unlock entirely new capabilities on millions of existing devices,” Zhang said.
The study will be presented at the ACM CHI conference on Human Factors in Computing Systems.