
Log-likelihood

Log-likelihood (also written loglikelihood) is the natural logarithm of the likelihood function, used in statistics to quantify how probable a set of observed data is under a statistical model with fixed parameters. For data x = (x1, ..., xn) and a model with parameter θ, the likelihood is L(θ) = p(x1, ..., xn | θ). If the observations are independent, L(θ) = ∏i p(xi | θ), and the log-likelihood is l(θ) = log L(θ) = ∑i log p(xi | θ). The log transformation turns products into sums, which aids numerical stability and eases differentiation.

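To see why the sum of logs is numerically preferable to the raw product, here is a minimal Python sketch (the use of NumPy/SciPy and the simulated data are illustrative assumptions, not from the text): the product of many densities underflows in double precision, while the log-likelihood stays finite.

```python
import numpy as np
from scipy.stats import norm

def log_likelihood(x, mu, sigma):
    """l(μ, σ) = Σ log p(xᵢ | μ, σ): sum of per-observation log-densities."""
    return np.sum(norm.logpdf(x, loc=mu, scale=sigma))

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=1000)  # simulated for illustration

# The product of 1000 densities underflows to 0.0 in float64,
# while the equivalent sum of logs remains finite and well-behaved.
print(np.prod(norm.pdf(x, loc=2.0, scale=1.5)))  # 0.0 (underflow)
print(log_likelihood(x, mu=2.0, sigma=1.5))      # finite (≈ -1.8e3)
```
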
Maximizing the log-likelihood with respect to θ yields the maximum likelihood estimates (MLEs) of the parameters. Because l(θ) is often easier to differentiate than L(θ), optimization is typically performed on the log-likelihood. Under standard regularity conditions, the MLE is consistent and asymptotically normal, with variance approximated by the inverse of the Fisher information.

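As a sketch of these ideas, the following hypothetical Python example fits the rate of an exponential distribution by numerically minimizing the negative log-likelihood, then approximates the standard error via the inverse Fisher information (for an Exponential(λ) sample of size n, the information is n/λ²). The model, data, and names are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = rng.exponential(scale=1 / 2.5, size=500)  # true rate λ = 2.5

def neg_log_lik(lam):
    # Exponential(λ): log p(x|λ) = log λ - λx, summed over observations
    return -(len(x) * np.log(lam) - lam * x.sum())

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 100.0), method="bounded")
lam_hat = res.x  # agrees with the closed form 1 / x.mean()

# Fisher information is n/λ², so the asymptotic standard error of the
# MLE is sqrt of the inverse information, i.e. λ̂ / sqrt(n).
se = lam_hat / np.sqrt(len(x))
print(lam_hat, 1 / x.mean(), se)
```
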
A common example is data from a normal distribution with unknown mean μ and known variance σ^2. The log-likelihood is l(μ) = -(n/2) log(2πσ^2) - (1/(2σ^2)) ∑i (xi - μ)^2, whose maximizer is μ̂ = x̄. When the variance is also unknown, the log-likelihood includes an additional term for σ^2.

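A quick numerical check of this formula (a sketch assuming NumPy, with simulated data): evaluating l(μ) on a fine grid confirms that its maximizer coincides with the sample mean x̄.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.5                                 # known standard deviation
x = rng.normal(loc=4.0, scale=sigma, size=200)
n = len(x)

def log_lik_mu(mu):
    # l(μ) = -(n/2) log(2πσ²) - (1/(2σ²)) Σ (xᵢ - μ)²
    return (-0.5 * n * np.log(2 * np.pi * sigma**2)
            - np.sum((x - mu) ** 2) / (2 * sigma**2))

# The grid maximizer matches x̄ up to the grid resolution.
grid = np.linspace(x.mean() - 1, x.mean() + 1, 2001)
values = np.array([log_lik_mu(m) for m in grid])
print(grid[values.argmax()], x.mean())      # nearly identical
```
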
Log-likelihood is central to model comparison and hypothesis testing. The likelihood ratio test uses the statistic -2 log(λ), where λ is the ratio of the restricted to the unrestricted likelihood, which under regularity conditions follows a chi-squared distribution. Information criteria such as AIC and BIC use -2 times the log-likelihood plus penalties to compare models.

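Both ideas can be sketched for a pair of nested normal models with known σ = 1, where the restricted model fixes μ = 0 and the unrestricted model estimates it (the data and setup here are illustrative assumptions):

```python
import numpy as np
from scipy.stats import norm, chi2

rng = np.random.default_rng(3)
x = rng.normal(loc=0.3, scale=1.0, size=100)   # σ = 1 assumed known

def log_lik(mu):
    return norm.logpdf(x, loc=mu, scale=1.0).sum()

l_restricted = log_lik(0.0)        # H0: μ = 0 (0 free parameters)
l_unrestricted = log_lik(x.mean()) # MLE μ̂ = x̄ (1 free parameter)

# -2 log(λ) = 2 (l_unrestricted - l_restricted) ~ χ²(1) under H0
lrt = 2 * (l_unrestricted - l_restricted)
print("LRT:", lrt, "p-value:", chi2.sf(lrt, df=1))

# AIC = -2l + 2k and BIC = -2l + k log n; smaller is better.
n = len(x)
for name, l, k in [("restricted", l_restricted, 0),
                   ("unrestricted", l_unrestricted, 1)]:
    print(name, "AIC:", -2 * l + 2 * k, "BIC:", -2 * l + k * np.log(n))
```
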
Notes: the log-likelihood depends on the assumed model; misspecification can bias results. For complex or non-iid data, numerical optimization and profile likelihood methods are commonly employed.

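As an illustration of the profile likelihood idea, the sketch below numerically profiles out the nuisance mean of a normal model at each fixed σ, then maximizes the resulting profile over σ alone; the specific model, grid, and data are assumptions for demonstration only.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(4)
x = rng.normal(loc=1.0, scale=2.0, size=300)

def profile_log_lik_sigma(sigma):
    """Profile out the nuisance mean: maximize l(μ, σ) over μ at fixed σ."""
    res = minimize_scalar(
        lambda mu: -norm.logpdf(x, loc=mu, scale=sigma).sum(),
        bounds=(x.min(), x.max()), method="bounded")
    return -res.fun

# Maximize the profile log-likelihood over σ alone; the maximizer is
# close to the (biased, ddof=0) sample standard deviation, the MLE of σ.
sigmas = np.linspace(0.5, 5.0, 451)
profile = np.array([profile_log_lik_sigma(s) for s in sigmas])
print(sigmas[profile.argmax()], x.std())
```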