Eigenmaps

Eigenmaps are a class of dimensionality reduction methods that produce low-dimensional embeddings of high-dimensional data by using eigenvectors of a Laplacian-like operator derived from the data. The approach sits at the intersection of spectral graph theory and manifold learning, and its best-known instantiation, Laplacian Eigenmaps, was introduced by Belkin and Niyogi in 2003. The goal is to preserve the local geometric structure of the data, reflecting the idea that the data lie on or near a lower-dimensional manifold.

Construction typically begins with building a weighted, undirected graph from the data points. Vertices correspond to data points, and weights W_ij encode similarity (for example, using a Gaussian kernel on nearby points or by connecting k-nearest neighbors). Let D be the diagonal degree matrix with D_ii = sum_j W_ij. The graph Laplacian is L = D − W, or one may use the normalized form L_sym = I − D^(−1/2) W D^(−1/2).
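
The following is a minimal sketch of this graph-construction step, assuming NumPy and SciPy are available; the neighbor count k, the kernel width sigma, and the helper name build_graph_laplacian are illustrative choices, not prescribed by the method.

```python
import numpy as np
from scipy.spatial.distance import cdist

def build_graph_laplacian(X, k=10, sigma=1.0):
    """Return (W, D, L) for a k-nearest-neighbor graph with Gaussian weights."""
    n = X.shape[0]
    dist = cdist(X, X)  # pairwise Euclidean distances, shape (n, n)
    W = np.zeros((n, n))
    for i in range(n):
        # Index 0 of the sorted row is the point itself (distance 0,
        # assuming distinct points); take the next k as neighbors.
        neighbors = np.argsort(dist[i])[1:k + 1]
        # Gaussian kernel on the distance to each neighbor.
        W[i, neighbors] = np.exp(-dist[i, neighbors] ** 2 / (2 * sigma ** 2))
    W = np.maximum(W, W.T)      # symmetrize: the graph is undirected
    D = np.diag(W.sum(axis=1))  # diagonal degree matrix, D_ii = sum_j W_ij
    L = D - W                   # unnormalized graph Laplacian
    return W, D, L
```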
The embedding is obtained by solving the generalized eigenproblem L y = λ D y and selecting the eigenvectors corresponding to the smallest nonzero eigenvalues. If one takes the c smallest nontrivial eigenvectors, the coordinates for point i are given by (v_2(i), ..., v_{c+1}(i)).
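
Continuing the sketch above, the eigenproblem step might look as follows; scipy.linalg.eigh solves the generalized symmetric problem directly and returns eigenvalues in ascending order. The sketch assumes the graph is connected, so that only the first eigenvalue is zero.

```python
from scipy.linalg import eigh

def laplacian_eigenmap(X, c=2, k=10, sigma=1.0):
    """Embed X into c dimensions via Laplacian Eigenmaps (sketch)."""
    _, D, L = build_graph_laplacian(X, k=k, sigma=sigma)
    # Generalized eigenproblem L v = λ D v; D is positive definite
    # because every vertex has positive degree in the k-NN graph.
    eigvals, eigvecs = eigh(L, D)
    # Column 0 is the trivial constant eigenvector (λ = 0 for a
    # connected graph); columns 1..c give the embedding coordinates
    # (v_2(i), ..., v_{c+1}(i)) for each point i.
    return eigvecs[:, 1:c + 1]
```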
Properties and applications

The embedding minimizes the objective sum_{i,j} W_ij ||y_i − y_j||^2 subject to constraints that fix the scale and exclude the trivial constant solution (in Belkin and Niyogi's formulation, y^T D y = 1 and y^T D 1 = 0), thus promoting similar points to have similar coordinates. This makes Laplacian Eigenmaps attractive for visualization (e.g., in 2D or 3D) and as a preprocessing step for machine learning. Variants include different normalizations of the Laplacian and different neighborhood schemes; practical use requires choices about the kernel scale and graph construction, and extending the embedding to new data points or scaling to large data sets can be challenging. Eigenmaps provide a principled, spectral approach to capturing nonlinear geometric structure in data.
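
For practical use, an off-the-shelf implementation is available as sklearn.manifold.SpectralEmbedding in scikit-learn; the short example below embeds a synthetic Swiss-roll data set into 2D. The sample size and neighbor count are arbitrary illustrative values.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import SpectralEmbedding

# Synthetic 3D data lying on a 2D manifold.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# Laplacian-eigenmap-style embedding from a k-nearest-neighbor graph.
embedding = SpectralEmbedding(n_components=2,
                              affinity="nearest_neighbors",
                              n_neighbors=10).fit_transform(X)
print(embedding.shape)  # (1000, 2)
```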