
Maximum Likelihood Estimation

Maximum likelihood estimation is a method for estimating the parameters of a statistical model by choosing the values that maximize the likelihood of the observed data under the model. The goal is to select parameter values that make the observed data most probable, given the assumed distribution.

Consider data x1, ..., xn drawn independently from a distribution with density f(x; theta). The likelihood is L(theta) = ∏ f(xi; theta) and the log-likelihood is l(theta) = ∑ log f(xi; theta). The maximum likelihood estimator (MLE) is theta_hat = argmax_theta L(theta) = argmax_theta l(theta). In many cases the MLE is found by solving the score equations, while in others numerical optimization is required.
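
As a minimal illustration (not part of the original text), the following Python sketch computes the log-likelihood of a hypothetical exponential model with density f(x; theta) = theta exp(-theta x) and maximizes it numerically; the synthetic data and the use of scipy's minimize_scalar are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100)   # synthetic data; true theta = 0.5

def log_likelihood(theta, data):
    # l(theta) = sum_i log f(xi; theta) = n*log(theta) - theta*sum(xi)
    return len(data) * np.log(theta) - theta * np.sum(data)

# Maximizing l(theta) is the same as minimizing -l(theta) over theta > 0.
result = minimize_scalar(lambda t: -log_likelihood(t, x),
                         bounds=(1e-6, 10.0), method="bounded")
theta_hat = result.x
print(theta_hat, 1.0 / x.mean())   # numerical MLE vs. the closed form 1/x_bar
```

For this model the score equation can also be solved in closed form (theta_hat = 1/x_bar), so the numerical answer can be checked against it.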

Maximization is often performed on the log-likelihood rather than the likelihood because the log function is monotone, turning products into sums and simplifying calculations. When closed-form solutions do not exist, algorithms such as gradient ascent, Newton-Raphson, or specialized methods like the expectation-maximization (EM) algorithm are employed, especially in models with latent variables.
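
As a sketch of the numerical route (the Cauchy location model and the synthetic data below are assumptions chosen for illustration, since that model has no closed-form MLE), Newton-Raphson repeatedly updates theta <- theta - l'(theta)/l''(theta) until the score is numerically zero:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_cauchy(200) + 3.0        # synthetic data; true location 3

def score(theta):
    # l'(theta) for the Cauchy location model: sum_i 2(xi - theta)/(1 + (xi - theta)^2)
    u = x - theta
    return np.sum(2 * u / (1 + u**2))

def second_derivative(theta):
    # l''(theta): sum_i 2((xi - theta)^2 - 1)/(1 + (xi - theta)^2)^2
    u = x - theta
    return np.sum(2 * (u**2 - 1) / (1 + u**2)**2)

theta = np.median(x)                      # robust starting value
for _ in range(50):
    step = score(theta) / second_derivative(theta)
    theta -= step                         # Newton-Raphson update
    if abs(step) < 1e-10:
        break
print(theta)                              # MLE of the location parameter
```

Starting from the sample median keeps the iteration in a region where the log-likelihood is well behaved; a poor starting value can make Newton-Raphson diverge.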

Key properties arise under regularity conditions: MLEs are consistent (converging to the true parameter as the sample size grows), asymptotically normal, and efficient in large samples (attaining the Cramér–Rao lower bound). They are invariant under smooth reparametrizations: if theta_hat maximizes l, then g(theta_hat) maximizes the log-likelihood for the transformed parameter g(theta). In practice, MLEs may require numerical methods, and the presence of nuisance parameters can lead to the use of profile likelihoods.
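
A small simulation can illustrate both the efficiency claim and the invariance property; the exponential model and all numbers below are assumptions made for this sketch. For that model the Fisher information is I(theta) = 1/theta^2, so the asymptotic standard deviation of theta_hat = 1/x_bar is theta/sqrt(n), and by invariance the MLE of the mean 1/theta is simply 1/theta_hat = x_bar.

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 0.5, 500, 20000
samples = rng.exponential(scale=1.0 / theta, size=(reps, n))

theta_hat = 1.0 / samples.mean(axis=1)      # MLE of theta in each replication
mean_hat = 1.0 / theta_hat                  # invariance: MLE of 1/theta is x_bar

print(theta_hat.std(), theta / np.sqrt(n))  # empirical vs. asymptotic std. dev.
print(mean_hat.mean(), 1.0 / theta)         # MLE of the mean vs. the true mean
```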

Example: estimating the mean mu of a normal distribution with known variance sigma^2 yields mu_hat = x_bar, the sample mean.
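
A one-line derivation of this example (added here as a worked equation, in standard notation rather than the article's plain-text shorthand) sets the derivative of the log-likelihood to zero:

```latex
\ell(\mu) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2,
\qquad
\frac{\partial \ell}{\partial \mu} = \frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i-\mu) = 0
\;\Longrightarrow\;
\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{x}.
```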

Limitations include sensitivity to model misspecification, potential small-sample bias, and non-uniqueness or boundary issues in some problems; prior information is not incorporated unless a Bayesian framework is used.
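
As a sketch of the small-sample bias point (the normal-variance example and all values below are assumptions chosen for illustration): the MLE of sigma^2 divides by n rather than n - 1, so its expectation is (n - 1)/n * sigma^2 for finite n.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, sigma2 = 5, 200000, 4.0
x = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(reps, n))

sigma2_hat = x.var(axis=1)        # np.var divides by n, which is the MLE
print(sigma2_hat.mean())          # close to (n - 1)/n * sigma2 = 3.2
print((n - 1) / n * sigma2)
```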
