EM algorithm
The EM algorithm, or Expectation-Maximization algorithm, is an iterative method for maximum likelihood estimation in statistical models with latent variables or incomplete data. It was introduced by Dempster, Laird, and Rubin in 1977 and has since become a standard tool in statistics and machine learning. In Dutch, it is commonly called the EM-algoritme.
The algorithm alternates between two steps. In the expectation step (E-step), it computes the expected value of the complete-data log-likelihood with respect to the conditional distribution of the latent variables given the observed data and the current parameter estimate. In the maximization step (M-step), it maximizes this expectation with respect to the parameters to obtain the next estimate. The two steps are repeated until convergence.
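Using X for the observed data, Z for the latent variables, and θ⁽ᵗ⁾ for the current parameter estimate (notation chosen here for illustration, not fixed by the text above), the two steps can be summarized as:

```latex
% E-step: expected complete-data log-likelihood under the current posterior
Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{Z \mid X,\, \theta^{(t)}}\!\left[\log p(X, Z \mid \theta)\right]

% M-step: maximize this expectation over the parameters
\theta^{(t+1)} = \operatorname*{arg\,max}_{\theta}\; Q(\theta \mid \theta^{(t)})
```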
EM is particularly useful when the model assumes latent structure or missingness, such as Gaussian mixture models, hidden Markov models, and data sets with values missing at random. In these settings the complete-data likelihood is easy to maximize, while the observed-data likelihood is not.
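As a concrete illustration, the following is a minimal EM fit of a two-component Gaussian mixture in one dimension. It is a sketch under assumptions of my own (the function name `em_gmm_1d`, the initialization scheme, and the synthetic data are not from the text above):

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1D Gaussian mixture (illustrative sketch)."""
    # Initialize: equal mixing weight, extreme data points as means,
    # overall sample variance for both components.
    pi = 0.5
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point.
        d0 = np.exp(-(x - mu[0]) ** 2 / (2 * var[0])) / np.sqrt(2 * np.pi * var[0])
        d1 = np.exp(-(x - mu[1]) ** 2 / (2 * var[1])) / np.sqrt(2 * np.pi * var[1])
        r = pi * d1 / ((1 - pi) * d0 + pi * d1)
        # M-step: closed-form weighted maximum-likelihood updates.
        pi = r.mean()
        mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                       np.sum(r * x) / np.sum(r)])
        var = np.array([np.sum((1 - r) * (x - mu[0]) ** 2) / np.sum(1 - r),
                        np.sum(r * (x - mu[1]) ** 2) / np.sum(r)])
    return pi, mu, var

# Synthetic data: two well-separated clusters around -5 and 5.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-5.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])
pi, mu, var = em_gmm_1d(x)
```

On this data the recovered means land close to -5 and 5 and the mixing weight close to 0.5; with fewer, overlapping clusters or a poor initialization the fit can instead settle at an inferior local optimum, which motivates the convergence discussion below.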
Convergence properties and limitations: the observed-data likelihood is non-decreasing with iterations, and the algorithm converges to a stationary point of the likelihood, typically a local maximum. There is no guarantee of finding the global maximum, convergence can be slow when the fraction of missing information is large, and the result can depend strongly on the initialization, so multiple random restarts are often used in practice.
Variants and extensions include ECM, ECME, SEM, stochastic or online EM, and variational-EM approaches that address, respectively, intractable maximization steps, slow convergence, standard-error estimation, large or streaming data sets, and intractable expectation steps.