
Metropolis–Hastings

Metropolis-Hastings is a Markov chain Monte Carlo method used to obtain a sequence of samples from a target probability distribution π(x) for which direct sampling is difficult. The method constructs a Markov chain whose stationary distribution is π, so that long-run samples approximate π. It generalizes the Metropolis algorithm by allowing asymmetric proposal distributions.

The procedure starts from a current state x and draws a candidate y from a proposal distribution q(y|x). The acceptance probability is α(x, y) = min(1, [π(y) q(x|y)] / [π(x) q(y|x)]). The proposed move is accepted with probability α(x, y); otherwise the chain remains at x. This is repeated to generate a sequence of states. If π is known only up to a normalizing constant, the ratio in α can still be computed because the constant cancels.

History and theoretical basis: The method originated with the Metropolis algorithm (1953) for simulating Boltzmann distributions in physics, later generalized by Hastings (1970) to allow arbitrary proposal distributions, forming the Metropolis–Hastings algorithm. The chain satisfies detailed balance with respect to π and, under mild conditions, is irreducible and aperiodic, ensuring convergence to π as the stationary distribution.

Variants and usage: Common choices for q include Gaussian random-walk proposals and independent proposals. The algorithm is widely used in Bayesian statistics to sample posterior distributions, in statistical physics, and in other fields requiring integration over high-dimensional spaces. It can be combined with Gibbs sampling (Metropolis-within-Gibbs) or adapted during the run to improve efficiency.
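
The propose–accept loop described above can be sketched in Python. This is a minimal illustration, not a reference implementation: the Gaussian random-walk proposal, the step size, and the example target are assumptions chosen for demonstration. Because the proposal is symmetric, the Hastings correction q(x|y)/q(y|x) equals 1 and α reduces to min(1, π(y)/π(x)); the target is supplied as a log density known only up to an additive constant, showing that the normalizing constant cancels in the ratio.

```python
import math
import random


def metropolis_hastings(log_target, x0, n_samples, step=1.0, rng=None):
    """Random-walk Metropolis-Hastings sampler (illustrative sketch).

    log_target: log of the target density pi, up to an additive constant
    (the constant cancels in the acceptance ratio).
    Uses a symmetric Gaussian proposal y ~ N(x, step^2), so the Hastings
    correction q(x|y)/q(y|x) is 1.
    """
    rng = rng or random.Random()
    x = x0
    log_px = log_target(x)
    samples = []
    accepted = 0
    for _ in range(n_samples):
        y = x + rng.gauss(0.0, step)           # draw candidate y from q(.|x)
        log_py = log_target(y)
        # accept with probability min(1, pi(y)/pi(x)); compare in log space
        if math.log(rng.random()) < log_py - log_px:
            x, log_px = y, log_py
            accepted += 1
        samples.append(x)                      # on rejection the chain stays at x
    return samples, accepted / n_samples


# Example target: a standard normal known only up to its normalizing
# constant, i.e. log pi(x) = -x^2/2 + const.
samples, acc_rate = metropolis_hastings(
    lambda x: -0.5 * x * x,
    x0=0.0, n_samples=20000, step=2.0,
    rng=random.Random(0),
)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The long-run sample mean and variance should approximate those of the target (0 and 1 here); in practice one would also discard an initial burn-in portion of the chain.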
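
When the proposal is asymmetric, the full acceptance ratio with the Hastings correction q(x|y)/q(y|x) is required. The sketch below is an assumed illustration (the multiplicative log-normal proposal, its scale, and the Gamma-shaped target are not from the original text): for a proposal y = x·exp(σz) with z ~ N(0, 1), the log-normal densities give q(x|y)/q(y|x) = y/x, and omitting that factor would bias the sampler.

```python
import math
import random


def mh_lognormal_proposal(log_target, x0, n_samples, sigma=0.5, rng=None):
    """Metropolis-Hastings with an asymmetric proposal (illustrative sketch).

    Proposes y = x * exp(sigma * z), z ~ N(0, 1): a log-normal proposal
    centered in log space at the current state, convenient for targets on
    (0, inf). Since q(y|x) != q(x|y), the acceptance ratio must keep the
    Hastings correction q(x|y)/q(y|x), which for this proposal equals y/x.
    """
    rng = rng or random.Random()
    x = x0
    log_px = log_target(x)
    samples = []
    for _ in range(n_samples):
        y = x * math.exp(sigma * rng.gauss(0.0, 1.0))
        log_py = log_target(y)
        # log alpha = [log pi(y) - log pi(x)] + [log q(x|y) - log q(y|x)]
        log_alpha = (log_py - log_px) + (math.log(y) - math.log(x))
        if math.log(rng.random()) < log_alpha:
            x, log_px = y, log_py
        samples.append(x)
    return samples


# Example target: Gamma(shape=3, rate=1) up to a constant,
# log pi(x) = (3 - 1) * log x - x; its true mean is 3.
k = 3.0
samples = mh_lognormal_proposal(
    lambda x: (k - 1.0) * math.log(x) - x,
    x0=1.0, n_samples=30000, sigma=0.7,
    rng=random.Random(1),
)
burned = samples[5000:]        # discard burn-in before summarizing
mean = sum(burned) / len(burned)
```

The design choice here is the standard one: work entirely in log space so that unnormalized densities and small acceptance ratios do not underflow.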