Locally Linear Embedding (LLE)

Locally Linear Embedding (LLE) is a non-linear dimensionality reduction technique used to uncover low-dimensional structure in high-dimensional data. It was introduced by Sam Roweis and Lawrence Saul in 2000 and is part of the family of manifold learning methods. LLE assumes that data points lie on or near a smooth low-dimensional manifold and that local neighborhoods on the manifold can be approximated by linear relationships.
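As a quick illustration before the algorithm details, the sketch below applies scikit-learn's `LocallyLinearEmbedding` to a swiss-roll dataset, a standard example of a 2-D manifold curled up in 3-D. This assumes scikit-learn is installed; the parameter values chosen here (12 neighbors, 2 output components) are illustrative, not prescribed by the method.

```python
# Sketch: LLE via scikit-learn on a swiss roll (assumes scikit-learn is installed).
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# A 3-D point cloud lying on an intrinsically 2-D manifold.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# n_neighbors is the k discussed below; n_components is the target dimension d.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
Y = lle.fit_transform(X)
print(Y.shape)  # (1000, 2)
```

Because LLE is non-parametric, embedding new points requires refitting or an out-of-sample extension; `fit_transform` only embeds the points it was given.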

The algorithm operates in two main stages. First, for each data point x_i in a dataset X of size n, LLE identifies its k nearest neighbors and computes weights w_i that express x_i as a linear combination of its neighbors: minimize ||x_i − sum_j w_ij x_j||^2 subject to the constraint that sum_j w_ij = 1, with w_ij = 0 if j is not in the neighborhood of i. This yields a weight matrix W containing all local reconstruction weights.

In the second stage, LLE seeks low-dimensional points y_i in R^d that preserve these local relationships. It minimizes the cost sum_i ||y_i − sum_j w_ij y_j||^2, which can be written as Y^T M Y where M = (I − W)^T (I − W). This reduces to an eigenvalue problem, and the embedding is formed from the eigenvectors corresponding to the smallest nonzero eigenvalues (excluding the trivial constant vector).

LLE is non-parametric, meaning it does not learn an explicit mapping from high- to low-dimensional space. Disadvantages include sensitivity to the neighborhood size k, difficulty handling non-local structure, and computational cost for large datasets. It remains popular for visualization and exploratory data analysis, and several variants address out-of-sample extension and scalability.
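The two stages described above can be sketched from scratch with NumPy. The function name `lle`, the regularization term, and the toy dataset are illustrative assumptions for this sketch, not part of any standard API; the regularizer on the local covariance is a common practical fix when k exceeds the input dimension, making the linear solve well-conditioned.

```python
# Minimal from-scratch sketch of the two LLE stages (illustrative, not optimized).
import numpy as np

def lle(X, k=10, d=2, reg=1e-3):
    n = X.shape[0]
    # Stage 1: local reconstruction weights.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(dist[i])[1:k + 1]   # k nearest neighbors, skipping x_i itself
        Z = X[idx] - X[i]                    # neighbors centered on x_i
        C = Z @ Z.T                          # local covariance (k x k)
        C += reg * np.trace(C) * np.eye(k)   # regularize for numerical stability
        w = np.linalg.solve(C, np.ones(k))   # solve C w = 1
        W[i, idx] = w / w.sum()              # enforce sum_j w_ij = 1
    # Stage 2: embedding from the bottom eigenvectors of M = (I - W)^T (I - W).
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    eigvals, eigvecs = np.linalg.eigh(M)     # ascending eigenvalues
    # Discard the trivial constant eigenvector (eigenvalue ~ 0),
    # keep the next d eigenvectors as the d-dimensional embedding.
    return eigvecs[:, 1:d + 1]

# Example: a noisy 1-D curve sitting in 3-D, reduced to 2-D.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3, 200))
X = np.column_stack([np.cos(t), np.sin(t), t]) + 0.01 * rng.normal(size=(200, 3))
Y = lle(X, k=10, d=2)
print(Y.shape)  # (200, 2)
```

The O(n^2) distance matrix and the dense eigendecomposition are where the computational cost mentioned above shows up; practical implementations use spatial indexes for the neighbor search and sparse eigensolvers for M.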