
eyeLAH

eyeLAH is a modular, open-source framework and platform designed to enable gaze-driven interactive media and research. The name blends “eye” with a stylized “LAH” to evoke immediacy and lightness. Conceived by a collaborative team of developers, artists, and researchers, eyeLAH emerged in the early 2020s as a tool for human–computer interaction studies and as a medium for generative art.

Architecture and how it works: eyeLAH provides three layers: a gaze-tracking engine, an input translator, and a scene generator. The gaze-tracking engine supports standard webcams and infrared eye trackers, performing calibration, blink suppression, and fixation mapping to a normalized coordinate space. The input translator converts gaze data into discrete or continuous control signals, including dwell-based selections and smooth gaze trajectories. The scene generator offers a plugin-based environment where artists and developers craft visuals, soundscapes, and haptic cues in response to inputs. The framework emphasizes cross-platform compatibility and can export to desktop, web, or embedded hardware. The project is released under an open-source license.
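
The dwell-based selection step can be illustrated with a short sketch. The Python fragment below is purely illustrative; the GazeSample and DwellSelector names are assumptions, not part of eyeLAH's actual API. It emits a discrete selection once normalized gaze coordinates hold within a small radius for a set time, ignoring samples flagged as blinks.

```python
# Illustrative dwell-based selector in the spirit of the input translator.
# All identifiers here are hypothetical, not eyeLAH's documented API.
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float           # normalized horizontal coordinate, 0.0-1.0
    y: float           # normalized vertical coordinate, 0.0-1.0
    t: float           # timestamp in seconds
    blink: bool = False

class DwellSelector:
    """Emits a selection when gaze stays within a small radius for a set time."""

    def __init__(self, dwell_time: float = 0.8, radius: float = 0.05):
        self.dwell_time = dwell_time   # seconds the gaze must hold still
        self.radius = radius           # allowed drift in normalized units
        self._anchor = None            # (x, y, t) where the current dwell began

    def update(self, sample: GazeSample):
        # Blink suppression: ignore samples flagged as blinks.
        if sample.blink:
            return None
        if self._anchor is None:
            self._anchor = (sample.x, sample.y, sample.t)
            return None
        ax, ay, at = self._anchor
        # If gaze drifted outside the radius, restart the dwell timer.
        if (sample.x - ax) ** 2 + (sample.y - ay) ** 2 > self.radius ** 2:
            self._anchor = (sample.x, sample.y, sample.t)
            return None
        # Held long enough: emit a discrete selection at the anchor point.
        if sample.t - at >= self.dwell_time:
            self._anchor = None
            return (ax, ay)
        return None
```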

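The plugin-based scene generator can likewise be sketched as a simple event dispatcher. The ScenePlugin protocol, register method, and RippleVisual example below are hypothetical rather than eyeLAH's documented interface; they only show how continuous gaze updates and discrete selections might be routed to registered plugins.

```python
# Hypothetical sketch of a plugin-based scene generator dispatching gaze events.
from typing import Protocol

class ScenePlugin(Protocol):
    def on_gaze(self, x: float, y: float) -> None: ...
    def on_select(self, x: float, y: float) -> None: ...

class SceneGenerator:
    def __init__(self) -> None:
        self._plugins: list[ScenePlugin] = []

    def register(self, plugin: ScenePlugin) -> None:
        self._plugins.append(plugin)

    def dispatch_gaze(self, x: float, y: float) -> None:
        # Continuous signal: every plugin sees the current gaze position.
        for p in self._plugins:
            p.on_gaze(x, y)

    def dispatch_select(self, x: float, y: float) -> None:
        # Discrete signal: forward dwell-based selections.
        for p in self._plugins:
            p.on_select(x, y)

class RippleVisual:
    """Example plugin: draw a ripple wherever a dwell selection lands."""
    def on_gaze(self, x: float, y: float) -> None:
        pass  # could steer a cursor or particle field here
    def on_select(self, x: float, y: float) -> None:
        print(f"ripple at ({x:.2f}, {y:.2f})")

gen = SceneGenerator()
gen.register(RippleVisual())
gen.dispatch_select(0.5, 0.5)   # -> "ripple at (0.50, 0.50)"
```
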
Applications: eyeLAH has been used in gallery installations, educational workshops, and accessibility tools for users with limited motor control. Researchers employ it to study gaze-driven interaction, attention, and embodiment in media contexts.

Reception and considerations: Because gaze data can be sensitive, the project includes privacy-focused features such as on-device processing, opt-in data collection, and anonymization options. Reviews have praised eyeLAH for openness and community contributions, while noting limitations related to hardware requirements, lighting conditions, and calibration variability.
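
A configuration sketch, with hypothetical field names, shows how such privacy options might be expressed; the actual settings are defined by the project's own documentation.

```python
# Hypothetical privacy configuration; field names are illustrative only.
from dataclasses import dataclass

@dataclass
class PrivacyConfig:
    on_device_processing: bool = True     # never send raw gaze data off the machine
    data_collection_opt_in: bool = False  # recording stays off unless explicitly enabled
    anonymize_exports: bool = True        # strip identifying metadata from saved sessions
```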

See also: gaze tracking, human–computer interaction, generative art.