Fingertracking

Fingertracking refers to the estimation and monitoring of the position, orientation, and movement of the fingers, typically as part of hand tracking. It aims to infer finger joints and articulations for interaction or analysis, and can be pursued with various sensors and algorithms.
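To make the idea of inferring finger joints concrete, here is a minimal sketch of a finger modeled as a planar kinematic chain, where cumulative joint flexion angles and segment lengths determine the fingertip position. The phalanx lengths and the `fingertip_position` helper are illustrative assumptions, not values from any particular tracking system.

```python
import math

# Approximate phalanx lengths for a single finger, in millimetres
# (illustrative values only, not measurements from a real dataset).
PHALANX_LENGTHS_MM = [40.0, 25.0, 20.0]  # proximal, middle, distal

def fingertip_position(joint_angles_deg, lengths=PHALANX_LENGTHS_MM):
    """Planar forward kinematics for one finger.

    Each joint angle is the flexion of that joint relative to the
    previous segment, so angles accumulate along the chain. Returns
    the (x, y) fingertip position with the finger base at the origin.
    """
    x, y, total_angle = 0.0, 0.0, 0.0
    for angle_deg, length in zip(joint_angles_deg, lengths):
        total_angle += math.radians(angle_deg)
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
    return x, y

# A fully extended finger points straight along the x-axis.
print(fingertip_position([0.0, 0.0, 0.0]))  # (85.0, 0.0)
```

Tracking systems typically estimate the joint angles (or joint positions directly) from sensor data; a forward model like this one is what kinematic constraints are expressed against.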

Approaches include marker-based methods, where markers or data gloves with sensors are attached to the fingers, and markerless methods using computer vision and depth sensing. Depth cameras and stereo setups capture 3D information that is combined with anatomical finger models and kinematic constraints to estimate finger joints. Deep learning-based monocular methods predict 2D or 3D finger joint positions from single images, often trained on synthetic or real data. Some systems fuse multiple modalities, integrating inertial sensors from gloves with vision data to improve robustness.

Applications of fingertracking include enabling hands-free interaction in augmented and virtual reality, gesture control for devices, sign language recognition, teleoperation of robots, and rehabilitation or sports analysis.

Challenges involve occlusion, where fingers hide each other, fast motion, variability in hand size and shape, lighting conditions, and the need for real-time processing.

Common evaluation metrics include mean per-joint position error and the percentage of correct keypoints. Public datasets for fingertracking and hand pose research, such as FreiHAND, the Rendered Hand Pose Dataset (RHD), the NYU Hand Pose Dataset, and DexYCB, provide finger annotations useful for benchmarking, though many emphasize full-hand pose. Research trends emphasize real-time robustness in unconstrained settings, multi-view fusion, and integration with tactile or haptic feedback to enhance interaction.
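The two evaluation metrics mentioned above can be sketched in a few lines. Mean per-joint position error (MPJPE) is the average Euclidean distance between predicted and ground-truth joints; percentage of correct keypoints (PCK) is the fraction of joints whose error falls within a chosen threshold. The function names and the toy joint coordinates below are illustrative.

```python
import math

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance
    between predicted and ground-truth joint positions."""
    dists = [math.dist(p, g) for p, g in zip(pred, gt)]
    return sum(dists) / len(dists)

def pck(pred, gt, threshold):
    """Percentage of correct keypoints: fraction of joints whose
    error is within `threshold` (same units as the coordinates)."""
    correct = sum(math.dist(p, g) <= threshold for p, g in zip(pred, gt))
    return correct / len(pred)

# Toy example: three 3D joints with per-joint errors of 1, 0, and 3 units.
gt = [(0, 0, 0), (10, 0, 0), (20, 0, 0)]
pred = [(0, 1, 0), (10, 0, 0), (20, 3, 0)]
print(mpjpe(pred, gt))     # (1 + 0 + 3) / 3 ≈ 1.333
print(pck(pred, gt, 2.0))  # 2 of 3 joints within 2.0 ≈ 0.667
```

In published benchmarks the threshold for PCK is often normalized, for example by hand size, so scores are comparable across subjects.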