
EM algorithm

The EM algorithm, or Expectation-Maximization algorithm, is an iterative method for maximum likelihood estimation in statistical models with latent variables or incomplete data. It was introduced by Dempster, Laird, and Rubin in 1977 and has since become a standard tool in statistics and machine learning. In Dutch, it is commonly called the EM-algoritme.

The algorithm alternates between two steps. In the expectation step (E-step), it computes the expected value of the log-likelihood of the complete data, given the observed data and current parameter estimates. This involves computing the conditional distribution of the latent variables given the data. In the maximization step (M-step), it maximizes this expected complete-data log-likelihood with respect to the model parameters, producing updated estimates.
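
In symbols, with x the observed data, z the latent variables, and θ(t) the current parameter estimate (notation introduced here only for illustration), one EM iteration can be sketched as:

```latex
% E-step: expected complete-data log-likelihood under the current estimate
Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{z \mid x,\, \theta^{(t)}}\big[\log p(x, z \mid \theta)\big]

% M-step: maximize this expectation over the parameters
\theta^{(t+1)} = \arg\max_{\theta} \; Q(\theta \mid \theta^{(t)})
```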

EM is particularly useful when the model assumes latent structure or missingness, such as Gaussian mixture models, hidden Markov models, factor analysis, or clustering with incomplete records. In a Gaussian mixture model, for instance, the E-step assigns probabilistic component memberships (responsibilities), and the M-step updates component weights, means, and variances.
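
For the Gaussian mixture case, a minimal sketch of how these updates could look in Python with NumPy follows; the function em_gmm_1d and all variable names are illustrative assumptions for this sketch, not part of any established library:

```python
import numpy as np

def em_gmm_1d(x, n_components=2, n_iter=100, seed=0):
    """EM for a 1-D Gaussian mixture: the E-step computes responsibilities,
    the M-step updates weights, means, and variances (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Initial guesses: equal weights, randomly chosen means, overall variance
    weights = np.full(n_components, 1.0 / n_components)
    means = rng.choice(x, n_components, replace=False)
    variances = np.full(n_components, np.var(x))

    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point
        densities = np.exp(-0.5 * (x[:, None] - means) ** 2 / variances) \
                    / np.sqrt(2 * np.pi * variances)
        joint = weights * densities                      # shape (n, K)
        resp = joint / joint.sum(axis=1, keepdims=True)  # normalize per point

        # M-step: closed-form updates from the responsibility-weighted data
        nk = resp.sum(axis=0)                            # effective counts
        weights = nk / n
        means = (resp * x[:, None]).sum(axis=0) / nk
        variances = (resp * (x[:, None] - means) ** 2).sum(axis=0) / nk

    return weights, means, variances

# Example usage on synthetic data drawn from two Gaussians
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
print(em_gmm_1d(data, n_components=2))
```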

Convergence properties and limitations: the observed-data likelihood is non-decreasing with iterations, and the algorithm converges to a stationary point, typically a local maximum. It does not guarantee a global optimum and can be slow to converge near the optimum. The method relies on the availability of closed-form updates in the M-step; when such updates are not available, numerical optimization or Monte Carlo variants may be used.
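
In practice this monotonicity is commonly used as a stopping rule: iterate until the improvement in the observed-data log-likelihood drops below a tolerance, and restart from several initializations since only a local maximum is guaranteed. A minimal sketch of such a driver loop, where em_step and log_likelihood stand for hypothetical model-specific routines assumed here for illustration:

```python
def run_em(params, data, em_step, log_likelihood, tol=1e-6, max_iter=500):
    """Generic EM driver with a log-likelihood-based stopping rule (sketch).
    em_step and log_likelihood are placeholders for model-specific routines."""
    prev_ll = float("-inf")
    for _ in range(max_iter):
        params = em_step(params, data)       # one E-step followed by one M-step
        ll = log_likelihood(params, data)    # observed-data log-likelihood
        if ll - prev_ll < tol:               # EM guarantees ll >= prev_ll
            break
        prev_ll = ll
    return params
```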

Variants and extensions include ECM, ECME, SEM, stochastic or online EM, and variational-EM approaches that address scalability and more complex latent structures.
