WristP²
A Wrist-Worn System for Hand Pose and Pressure Estimation

WristP² empowers the subtlety of a fingertip's caress and unleashes the
power of a palm's command. We refuse to be defined by the 'click'.
It's time to return the sovereignty of interaction to your hands.
Full video on YouTube
Timeframe
Jul–Sep 2025
Role
Experimenter
Designer
Publication
PDF
Author rank
Co-Author
Academic recognition
ACM CHI '26 Revise
(RR, ARR, A, RR)
*This paper has been submitted to the CHI Conference on Human Factors in Computing Systems (CHI ’26). It is currently under review and is shared here temporarily for application purposes. Viewers of this page are kindly requested not to distribute it.
  Motivation
A "click" action in the physical world can carry countless nuances—a tentative tap, a decisive press, a forceful squeeze—yet in the digital world, they are all reduced to the same binary signal. This represents a significant loss of information.

We aim to transform the movements of a single hand into a high-bandwidth input channel with "force and texture," opening up more nuanced expressive possibilities for AR, wearables, and everyday interactions.
*The infinite possibilities of gestures conveyed by hands.
  Objective

WristP² is a wrist-worn system with a wide-FOV fisheye camera that reconstructs 3D hand pose and per-vertex pressure in real time.
*Across previous works from 2012–2025, the average reported performance is roughly MPJPE ≈ 10.5 mm, MJAE ≈ 7.1°, and contact accuracy ≈ 92%.
· 2.88 mm mean per-joint position error
· 3.15° mean joint angle error
· 10.3 g mean pressure error
· 97% contact accuracy
· 3 h battery life at full power
· 72% contact IoU
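The first two headline metrics are standard pose-estimation measures. As a minimal sketch (with toy values, not data from the paper), mean per-joint position error and contact accuracy can be computed like this:

```python
import math

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance
    between predicted and ground-truth 3D joint positions."""
    dists = [math.dist(p, g) for p, g in zip(pred, gt)]
    return sum(dists) / len(dists)

def contact_accuracy(pred_contact, gt_contact):
    """Fraction of points whose binary contact state is predicted correctly."""
    correct = sum(p == g for p, g in zip(pred_contact, gt_contact))
    return correct / len(pred_contact)

# Toy example (hypothetical values, positions in mm)
print(mpjpe([(0, 0, 0), (1, 1, 1)], [(3, 4, 0), (1, 1, 1)]))  # 2.5
print(contact_accuracy([1, 0, 1, 1], [1, 0, 0, 1]))           # 0.75
```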
  Application
Utilizing the detailed hand pose and pressure information generated by WristP², we have applied it effectively across a wide spectrum of scenarios.
Mid-air gesture input in XR

WristP² enables mid-air gestures for natural interaction.
The video demonstrates how mid-air hand gestures can be used to control video playback in XR environments.
Custom Action Control Media

WristP² knows the meaning of all your actions. The video shows how WristP² supports large-screen interaction in mobile contexts, such as controlling slide presentations during a talk.
Virtual touchpad input on a mobile device

WristP² enables planar virtual touchpad input. The video illustrates how a user browses web pages on a virtual display by using planar input on a virtual touchpad.
Replacing traditional interaction tools

WristP² can replace traditional interaction tools. The video demonstrates how it achieves left and right mouse clicks, which can in principle be performed on any surface.
  Hardware Implementation
The system consists of a nylon wristband, an RGB camera, and a Raspberry Pi Zero 2W. To overcome field-of-view limitations of vision-based sensing, we adopt a fisheye RGB camera module (180° FOV) mounted on the palmar side of the wrist via a 3D-printed, magnetized 90° rotatable hinge positioned ∼15 mm above the skin.
  Dataset
We built a synchronized multi-sensor system to create a large-scale dataset of 93,000 frames from 15 participants.

· Synced Sensors: We combined professional motion capture (for precise hand tracking) with a high-resolution pressure pad (to record touch force). This allowed us to capture both 3D hand poses and physical pressure simultaneously.

· Diverse Scenarios: The dataset covers 48 surface interactions (like clicking and dragging) and 28 mid-air gestures, recorded under various lighting conditions and backgrounds to ensure the system works reliably in the real world.
We developed an automated pipeline that aligns a 3D hand model with sensor data to generate high-fidelity meshes and per-point pressure labels without manual annotation.
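One way to picture the per-point labeling step is to project each pressure-pad cell into the hand model's frame and assign its force to the nearest mesh vertex. This is a hedged sketch of that idea, not the authors' actual pipeline; the function name and toy data are hypothetical:

```python
import math

def label_vertex_pressure(vertices, pad_cells):
    """Assign each pressure-pad reading to its nearest mesh vertex.

    vertices  -- list of (x, y) vertex positions in the pad's plane
    pad_cells -- list of ((x, y), force) readings from the pressure pad
    Returns a per-vertex force list (same order as `vertices`).
    """
    forces = [0.0] * len(vertices)
    for (cx, cy), force in pad_cells:
        nearest = min(range(len(vertices)),
                      key=lambda i: math.dist(vertices[i], (cx, cy)))
        forces[nearest] += force
    return forces

# Two vertices; one pad cell presses near the second vertex
print(label_vertex_pressure([(0, 0), (10, 0)], [((9, 1), 4.2)]))  # [0.0, 4.2]
```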
  Tech Stack
Our AI uses a Vision Transformer combined with a VQ-VAE, which reconstructs hands by selecting from a learned "library" of realistic poses to ensure biological accuracy. It simultaneously predicts 3D shape, pressure, and camera position from a single image.
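The VQ-VAE's "library" of realistic poses is a learned codebook: at inference, each latent feature is snapped to its nearest codebook entry. A minimal sketch of that quantization step, with a toy two-dimensional codebook rather than the trained one:

```python
import math

def vq_lookup(latent, codebook):
    """Return the index of the codebook entry nearest to `latent`
    (the vector-quantization step of a VQ-VAE)."""
    return min(range(len(codebook)),
               key=lambda i: math.dist(codebook[i], latent))

# Toy codebook of three "pose tokens"
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(vq_lookup((0.9, 0.1), codebook))  # 1
```

Snapping to learned entries is what keeps reconstructed hands biologically plausible: the decoder only ever sees poses drawn from the library.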
*Example of a wrist-mounted camera image with a randomly replaced background
To ensure robustness, we pre-trained the model on general hand data and then fine-tuned it with randomized backgrounds, teaching it to ignore environmental distractions.
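The background randomization can be sketched as a simple compositing augmentation: keep the hand pixels (given a segmentation mask) and swap everything else for a randomly chosen background. A hypothetical illustration on a toy 2×2 "frame":

```python
import random

def randomize_background(image, hand_mask, backgrounds):
    """Keep pixels where hand_mask is True; replace the rest with a
    randomly chosen background image of the same size."""
    bg = random.choice(backgrounds)
    return [[px if m else bg_px
             for px, m, bg_px in zip(img_row, mask_row, bg_row)]
            for img_row, mask_row, bg_row in zip(image, hand_mask, bg)]

image = [[1, 2], [3, 4]]               # toy 2x2 frame
mask  = [[True, False], [False, True]] # True = hand pixel
bgs   = [[[9, 9], [9, 9]]]             # single candidate background
print(randomize_background(image, mask, bgs))  # [[1, 9], [9, 4]]
```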
  User Study
Indoor scenarios (Study A)
In the three studies we conducted, we chose smartphone touchscreens as the baseline input method and compared WristP² against them. Smartphone touchscreens were selected because they provide absolute positioning capabilities and their screen size is comparable to that of a human hand.
Study 1: Virtual Air Mouse (Mid-Air Pointing)

We validated the system's pointing precision using a standard Fitts’ Law task, where participants controlled a cursor via index finger position and performed clicks using a pinch gesture. The system achieved a throughput of 2.5 bits/s, matching the efficiency of a standard laptop touchpad and proving its viability as a precise, equipment-free input device for mid-air interaction.
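For reference, Fitts' law throughput is the task's index of difficulty divided by movement time, with ID = log2(D/W + 1) in the Shannon formulation. A quick worked example with hypothetical task parameters (not the study's raw data):

```python
import math

def fitts_throughput(distance, width, movement_time_s):
    """Throughput (bits/s) = index of difficulty / movement time,
    with ID = log2(D/W + 1) (Shannon formulation of Fitts' law)."""
    index_of_difficulty = math.log2(distance / width + 1)
    return index_of_difficulty / movement_time_s

# e.g. targets 8 cm away and 1 cm wide, reached in ~1.27 s
print(round(fitts_throughput(8, 1, 1.27), 2))  # ~2.5 bits/s
```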
Study 2: Multi-Finger Pressure Control (Fine-Grained Sensing)

This study evaluated the ability to estimate force across 13 randomized finger combinations, ranging from single touches to full-hand presses. Despite significant self-occlusion inherent in wrist-worn views, participants successfully maintained specific pressure targets with an 86.7% success rate, demonstrating the model's robustness in distinguishing and measuring multi-point contact forces.
Study 3: Virtual Pressure-Sensitive Touchpad (Composite Interaction)

To simulate complex real-world workflows, we transformed a bare desktop into a virtual pressure-sensitive touchpad requiring simultaneous 2D dragging and precise vertical pressure holds. The system achieved an exceptional 98.0% success rate in this composite task, confirming it can reliably decouple pose tracking from pressure estimation to support stable, surface-agnostic control.
  Acknowledgments
We sincerely thank the user study participants for their cooperation and the CHI ’26 reviewers for their valuable and supportive feedback.