MRFs

A Markov Random Field (MRF) is a collection of random variables indexed by the nodes of an undirected graph, where the graph encodes conditional independence relations between the variables. In an MRF, the probability distribution over all variables factors according to the cliques of the graph, so that the joint distribution can be written as P(x) = (1/Z) ∏_C ψ_C(x_C), where ψ_C are nonnegative potential functions on the variables in clique C and Z is a normalizing constant.
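The clique factorization can be made concrete by brute-force enumeration on a toy model. The 3-node chain, binary states, and potential values below are illustrative assumptions, not from the text:

```python
import itertools

# Hypothetical 3-node chain MRF over binary variables: x1 - x2 - x3.
# The maximal cliques are the edges {x1,x2} and {x2,x3}; each gets a
# pairwise potential that rewards agreement (an arbitrary choice).
def psi(a, b):
    return 2.0 if a == b else 1.0  # nonnegative potential

states = list(itertools.product([0, 1], repeat=3))

# Unnormalized measure: product of the clique potentials.
def unnorm(x):
    x1, x2, x3 = x
    return psi(x1, x2) * psi(x2, x3)

Z = sum(unnorm(x) for x in states)          # normalizing constant
P = {x: unnorm(x) / Z for x in states}      # joint distribution P(x)

assert abs(sum(P.values()) - 1.0) < 1e-12
```

Note that computing Z required summing over all 2^3 configurations; this exponential enumeration is exactly what makes exact inference hard on larger graphs.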

The structure of an MRF is governed by the local Markov property: each variable is conditionally independent of all other variables given its neighbors in the graph. The Hammersley-Clifford theorem provides a link between this local property and the global factorization, showing that a positive distribution satisfies the local Markov property if and only if it factors over cliques as stated above.

Inference in MRFs involves computing marginal distributions or the most probable configuration (MAP). Exact inference is tractable only for certain graph structures (e.g., trees); on general graphs it is typically intractable, leading to approximate methods such as belief propagation (including loopy belief propagation), variational inference, graph cuts for certain energy functions, or Markov chain Monte Carlo methods like Gibbs sampling.

Parameter learning in MRFs involves estimating the potential functions ψ_C from data, using techniques such as maximum likelihood, pseudo-likelihood, or Bayesian methods. Structure learning aims to identify the graph itself.

MRFs are widely used in computer vision (for image labeling and texture modeling), spatial statistics, physics (e.g., Ising-like models), and other domains where symmetric, undirected dependencies among variables are appropriate. They are related to, but distinct from, conditional random fields, which model P(Y|X) rather than a joint distribution over all variables.
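Gibbs sampling, mentioned above as an approximate inference method, resamples each variable from its conditional given its neighbors; by the local Markov property that conditional is cheap to compute. A minimal sketch on a 2x2 Ising-like grid (the coupling strength, graph, and sample counts are illustrative assumptions):

```python
import math
import random

# Ising-like MRF on a 2x2 grid of +/-1 spins, edge potentials
# exp(J * x_i * x_j) with coupling J (illustrative value).
random.seed(0)
J = 0.5
neighbors = {  # 4-cycle: each node has two neighbors
    0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2],
}
x = [random.choice([-1, 1]) for _ in range(4)]

def gibbs_step(x):
    # Resample each x_i from P(x_i | neighbors); this conditional
    # depends only on adjacent spins (local Markov property).
    for i in range(4):
        field = J * sum(x[j] for j in neighbors[i])
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * field))
        x[i] = 1 if random.random() < p_plus else -1
    return x

# Burn-in, then estimate the correlation E[x0 * x1] of two
# neighboring spins, which is positive for J > 0.
for _ in range(500):
    gibbs_step(x)
samples = []
for _ in range(5000):
    gibbs_step(x)
    samples.append(x[0] * x[1])
mean_corr = sum(samples) / len(samples)
```

The samples are correlated across sweeps, so in practice one monitors convergence and may thin the chain; this sketch omits such diagnostics.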
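Pseudo-likelihood, one of the parameter-learning techniques listed above, replaces the intractable likelihood with the product of per-variable conditionals, avoiding the partition function. A sketch that recovers the coupling of a tiny Ising chain; the chain size, true coupling, step size, and use of the exact joint as "data" are all illustrative assumptions:

```python
import itertools
import math

# 3-node Ising chain x1 - x2 - x3 with spins +/-1 and edge
# potentials exp(J * x_i * x_j); we recover J by gradient ascent
# on sum_i log P(x_i | neighbors) instead of the full likelihood.
J_true = 0.8
edges = [(0, 1), (1, 2)]
nbrs = {0: [1], 1: [0, 2], 2: [1]}

# "Data": the exact joint as weighted configurations (the graph is
# small enough to enumerate, so no sampling noise here).
configs = list(itertools.product([-1, 1], repeat=3))
w = [math.exp(J_true * sum(x[i] * x[j] for i, j in edges)) for x in configs]
Z = sum(w)
data = [(x, wi / Z) for x, wi in zip(configs, w)]

def pl_grad(J):
    # Gradient of the expected log-pseudo-likelihood:
    # sum_i E[x_i * s_i - tanh(J * s_i) * s_i], s_i = sum of i's neighbors.
    g = 0.0
    for x, p in data:
        for i in range(3):
            s = sum(x[j] for j in nbrs[i])
            g += p * (x[i] * s - math.tanh(J * s) * s)
    return g

J = 0.0
for _ in range(2000):
    J += 0.05 * pl_grad(J)  # plain gradient ascent

print(round(J, 2))  # prints 0.8 (recovers J_true)
```

The pseudo-likelihood objective is concave in J here, and because the data come from the model family itself, its maximizer coincides with the true coupling; with finite samples it is a consistent but generally less efficient estimator than full maximum likelihood.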