PosteriorMean

Posterior mean refers to the expected value of a parameter θ under its posterior distribution after observing data. In Bayesian inference, the posterior distribution p(θ|D) is proportional to the product of the likelihood p(D|θ) and the prior p(θ): p(θ|D) ∝ p(D|θ) p(θ). The posterior mean is defined as E[θ|D] = ∫ θ p(θ|D) dθ.
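
Since p(θ|D) is usually available only up to its normalizing constant, this integral can be approximated directly, for example on a grid. Below is a minimal Python sketch; the one-observation normal model and all names are illustrative assumptions, not part of any particular library:

```python
import numpy as np

def posterior_mean_grid(unnorm_post, lo, hi, num=10_001):
    """Approximate E[theta|D] = ∫ theta p(theta|D) dtheta on a grid."""
    theta = np.linspace(lo, hi, num)
    dt = theta[1] - theta[0]
    w = unnorm_post(theta)
    w = w / (w.sum() * dt)         # normalize so ∫ p(theta|D) dtheta ≈ 1
    return (theta * w).sum() * dt  # ≈ ∫ theta p(theta|D) dtheta

# Assumed toy model: one observation x with a N(theta, 1) likelihood
# and a N(0, 1) prior on theta.
x = 1.2
unnorm = lambda t: np.exp(-0.5 * (x - t) ** 2 - 0.5 * t ** 2)
print(posterior_mean_grid(unnorm, -10.0, 10.0))  # ≈ 0.6 = x/2 here
```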

As a point estimator, the posterior mean is the Bayes estimator of θ under squared error loss, since it minimizes the posterior expected squared error: E[(θ − a)^2 | D] = Var(θ|D) + (E[θ|D] − a)^2, which is smallest at a = E[θ|D]. It naturally combines prior beliefs with observed data, leading to shrinkage toward the prior when data are limited or noisy, and toward the data as information increases.

Computation of the posterior mean depends on the model. In conjugate models, closed-form expressions often exist.

For example, in a normal model with known variance σ^2 and a normal prior θ ~ N(μ0, τ0^2), the posterior is θ|D ~ N(μn, τn^2) with τn^2 = 1/(n/σ^2 + 1/τ0^2) and μn = τn^2 (n x̄/σ^2 + μ0/τ0^2); the posterior mean is μn.
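
A direct transcription of these formulas into Python; the simulated data, seed, and prior settings are illustrative assumptions:

```python
import numpy as np

def normal_posterior_mean(x, sigma2, mu0, tau0_sq):
    """mu_n for a N(mu0, tau0^2) prior on theta and known variance sigma^2."""
    x = np.asarray(x, dtype=float)
    n = x.size
    tau_n_sq = 1.0 / (n / sigma2 + 1.0 / tau0_sq)              # tau_n^2
    return tau_n_sq * (n * x.mean() / sigma2 + mu0 / tau0_sq)  # mu_n

# Shrinkage in action: with few observations the estimate stays near the
# prior mean (0 here); with many it approaches the sample mean (~3).
rng = np.random.default_rng(0)
for n in (2, 20, 2000):
    data = rng.normal(3.0, 1.0, size=n)
    print(n, round(normal_posterior_mean(data, 1.0, 0.0, 1.0), 3))
```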

In a Beta-Binomial model with θ ~ Beta(α, β) and k successes in n trials, θ|D ~ Beta(α+k, β+n−k), giving a posterior mean of (α+k)/(α+β+n).
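
This case is a one-liner in Python; the choice α = β = 1 (a uniform prior) below is an illustrative assumption:

```python
# Posterior mean (alpha + k) / (alpha + beta + n) for the Beta-Binomial.
def beta_binomial_posterior_mean(k, n, alpha=1.0, beta=1.0):
    return (alpha + k) / (alpha + beta + n)

# 3 successes in 10 trials: the estimate is pulled from the MLE 0.3
# toward the prior mean 0.5.
print(beta_binomial_posterior_mean(k=3, n=10))  # 4/12 ≈ 0.333
```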

For non-conjugate cases, numerical methods such as quadrature, Markov chain Monte Carlo, or variational inference are used.
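
One possible Monte Carlo route is a bare-bones random-walk Metropolis sampler that averages the posterior draws. The target below (a normal likelihood with a Laplace prior on θ, a deliberately non-conjugate pairing) is an assumed toy example, and the step size, chain length, and burn-in are illustrative choices, not tuned defaults:

```python
import numpy as np

def log_post(theta, x):
    # Unnormalized log posterior: N(x_i | theta, 1) likelihood,
    # Laplace(0, 1) prior on theta.
    return -0.5 * np.sum((x - theta) ** 2) - abs(theta)

def metropolis_posterior_mean(x, steps=50_000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    theta, lp = 0.0, log_post(0.0, x)
    draws = []
    for _ in range(steps):
        prop = theta + step * rng.normal()        # random-walk proposal
        lp_prop = log_post(prop, x)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept test
            theta, lp = prop, lp_prop
        draws.append(theta)
    return float(np.mean(draws[steps // 2:]))     # average after burn-in

x = np.random.default_rng(1).normal(1.0, 1.0, size=20)
print(metropolis_posterior_mean(x))  # Monte Carlo estimate of E[theta|D]
```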

The posterior mean differs from the maximum a posteriori (MAP) estimate, which is the posterior mode. It also informs Bayesian predictive distributions, which integrate over the posterior rather than condition on a single point estimate.
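
For instance, in the Beta-Binomial model above (same assumed Beta(1, 1) prior, k = 3 successes in n = 10 trials), integrating θ out of the predictive for the next trial reproduces the posterior mean exactly, while plugging in the posterior mode gives a different probability:

```python
# Posterior is Beta(alpha + k, beta + n - k) = Beta(4, 8).
a, b = 1 + 3, 1 + 7
bayes_predictive = a / (a + b)      # ∫ theta p(theta|D) dtheta ≈ 0.333
map_plugin = (a - 1) / (a + b - 2)  # mode, defined for a, b > 1; = 0.3
print(bayes_predictive, map_plugin)
```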