PGMs

Probabilistic graphical models (PGMs) are a family of statistical models that use graph-based representations to encode the conditional dependencies among random variables. The graph structure conveys independence relationships, allowing compact specification of joint distributions and facilitating probabilistic reasoning under uncertainty.
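As a rough illustration of this compactness, the sketch below (hypothetical function names and a deliberately simple chain structure, not taken from any library) counts free parameters for an explicit joint table over n binary variables versus a chain-structured factorization:

```python
# Illustration: a factored representation needs far fewer parameters
# than an explicit joint table. For n binary variables, the full joint
# has 2**n - 1 free parameters; a chain-structured Bayesian network
# (each variable depends only on its predecessor) has 1 + 2*(n - 1).

def full_joint_params(n: int) -> int:
    """Free parameters in an explicit joint table over n binary variables."""
    return 2 ** n - 1

def chain_bn_params(n: int) -> int:
    """Free parameters in a chain BN: P(X1) * prod_i P(Xi | Xi-1)."""
    return 1 + 2 * (n - 1)

for n in (5, 10, 20):
    print(n, full_joint_params(n), chain_bn_params(n))
```

The gap grows exponentially with n, which is the practical motivation for encoding independence in a graph.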

PGMs are typically divided into directed graphical models, such as Bayesian networks, and undirected graphical models, such as Markov networks and Markov random fields. In a Bayesian network, the joint distribution factors as the product of the conditional distributions of each variable given its parents, P(X1, ..., Xn) = ∏i P(Xi | Parents(Xi)). In Markov networks, the distribution factors into a product of potential functions over cliques, often written as P(X1, ..., Xn) ∝ ∏k ψk(Ck).

Factor graphs provide a unified representation as bipartite graphs connecting variables to factors. Dynamic Bayesian networks extend PGMs to time series by modeling sequences with directed edges across time slices, enabling temporal reasoning within the same formalism.

Inference and learning: common tasks include computing marginal or conditional probabilities. Exact inference methods include variable elimination and junction tree algorithms. Belief propagation computes exact marginals on tree-structured graphs and often provides good approximations on graphs with cycles. Approximate methods include Markov chain Monte Carlo and variational inference.

Learning involves estimating parameters from data (by maximum likelihood or Bayesian estimation) and learning the structure (the graph itself) using scoring methods, such as MDL/BIC or Bayesian scores, or constraint-based approaches.

Applications span natural language processing, computer vision, robotics, bioinformatics, and finance. PGMs offer principled handling of uncertainty and diverse reasoning capabilities, but exact inference is often intractable for large or densely connected graphs, necessitating approximation techniques.
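The Bayesian-network factorization P(X1, ..., Xn) = ∏i P(Xi | Parents(Xi)) can be sketched with a toy rain/sprinkler/wet-grass network; the CPT numbers below are made up for illustration:

```python
# Toy Bayesian network: rain and sprinkler are parentless; wet grass
# depends on both. The joint is the product of the local CPTs.

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}   # independent of rain in this toy model
# P(wet = True | rain, sprinkler)
P_wet = {
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.85, (False, False): 0.01,
}

def joint(rain: bool, sprinkler: bool, wet: bool) -> float:
    """P(rain, sprinkler, wet) as a product of the local conditionals."""
    p_wet_true = P_wet[(rain, sprinkler)]
    p_w = p_wet_true if wet else 1.0 - p_wet_true
    return P_rain[rain] * P_sprinkler[sprinkler] * p_w

# Sanity check: the joint should normalize to 1 over all assignments.
total = sum(joint(r, s, w) for r in (True, False)
            for s in (True, False) for w in (True, False))
print(round(total, 10))
```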
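As a minimal sketch of temporal reasoning in this formalism, the hidden Markov model below (the simplest dynamic Bayesian network: one hidden state per time slice with a directed edge to the next slice; all probabilities illustrative) computes the likelihood of an observation sequence with the forward recursion:

```python
# HMM forward pass: P(observations) by summing over hidden state paths.
# Two hidden states, binary observations; all numbers are made up.

states = (0, 1)
init = [0.6, 0.4]                    # P(Z1)
trans = [[0.7, 0.3], [0.4, 0.6]]     # P(Z_t | Z_{t-1})
emit = [[0.9, 0.1], [0.2, 0.8]]      # P(X_t | Z_t)

def forward(obs):
    """Likelihood of an observation sequence via the forward recursion."""
    alpha = [init[z] * emit[z][obs[0]] for z in states]
    for x in obs[1:]:
        alpha = [sum(alpha[zp] * trans[zp][z] for zp in states) * emit[z][x]
                 for z in states]
    return sum(alpha)

print(forward([0, 1, 0]))
```

The recursion does in O(T·|Z|²) time what naive path enumeration does in O(|Z|^T), which is the same dynamic-programming idea that variable elimination exploits.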
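Variable elimination can be sketched on a three-variable chain A → B → C with binary variables and made-up CPTs: sum out A, then B, to obtain P(C) without enumerating the full joint:

```python
# Variable elimination on the chain A -> B -> C (binary variables).
# All CPT numbers are illustrative.

P_A = [0.3, 0.7]
P_B_given_A = [[0.8, 0.2], [0.1, 0.9]]    # rows: a, cols: b
P_C_given_B = [[0.5, 0.5], [0.25, 0.75]]  # rows: b, cols: c

# Eliminate A: tau1(b) = sum_a P(a) * P(b | a)
tau1 = [sum(P_A[a] * P_B_given_A[a][b] for a in (0, 1)) for b in (0, 1)]
# Eliminate B: P(c) = sum_b tau1(b) * P(c | b)
P_C = [sum(tau1[b] * P_C_given_B[b][c] for b in (0, 1)) for c in (0, 1)]

print(P_C)  # a distribution over C; entries sum to 1
```

On a chain each elimination touches only one local factor, which is why tree-structured graphs admit exact inference in linear time.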
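One flavor of Markov chain Monte Carlo is Gibbs sampling, sketched below on a two-variable Markov network whose single pairwise potential favors agreement; the potential values and step count are illustrative, and the result is only an approximation:

```python
import random

# Gibbs sampling on a tiny Markov network: P(x, y) proportional to psi(x, y).
# Each step resamples one variable from its conditional given the other.

def psi(x, y):
    """Unnormalized pairwise potential favoring x == y (made-up values)."""
    return 2.0 if x == y else 1.0

def gibbs_estimate(steps=20000, seed=0):
    """Estimate P(x == y) by averaging over the Gibbs chain."""
    rng = random.Random(seed)
    x, y = 0, 0
    agree = 0
    for _ in range(steps):
        # Resample x given y: P(x | y) proportional to psi(x, y)
        p_x1 = psi(1, y) / (psi(0, y) + psi(1, y))
        x = 1 if rng.random() < p_x1 else 0
        # Resample y given x
        p_y1 = psi(x, 1) / (psi(x, 0) + psi(x, 1))
        y = 1 if rng.random() < p_y1 else 0
        agree += (x == y)
    return agree / steps

# Exact value for comparison: P(x == y) = (2 + 2) / (2 + 1 + 1 + 2) = 2/3
print(gibbs_estimate())
```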
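Score-based structure learning can be sketched by comparing the BIC of two candidate structures over binary (X, Y) data; the dataset below is fabricated for illustration, and BIC is taken as maximum log-likelihood minus (k/2)·log N with k free parameters:

```python
import math
from collections import Counter

# BIC comparison of two structures over binary (X, Y):
#   M0: X and Y independent (2 free parameters)
#   M1: edge X -> Y          (3 free parameters)
# The data are made up so that X and Y are strongly dependent.

data = [(0, 0)] * 40 + [(1, 1)] * 40 + [(0, 1)] * 10 + [(1, 0)] * 10
N = len(data)
c = Counter(data)

def loglik(p):
    """Log-likelihood of the data under joint table p[(x, y)]."""
    return sum(n * math.log(p[xy]) for xy, n in c.items())

# M0: P(x, y) = P(x) P(y), marginals fit by maximum likelihood.
px = sum(n for (x, _), n in c.items() if x == 1) / N
py = sum(n for (_, y), n in c.items() if y == 1) / N
p0 = {(x, y): (px if x else 1 - px) * (py if y else 1 - py)
      for x in (0, 1) for y in (0, 1)}

# M1: P(x, y) = P(x) P(y | x); the MLE is the empirical joint table.
p1 = {xy: n / N for xy, n in c.items()}

bic0 = loglik(p0) - (2 / 2) * math.log(N)
bic1 = loglik(p1) - (3 / 2) * math.log(N)
print(bic0 < bic1)  # the dependent structure wins on this data
```

The log N penalty is what keeps the score from always preferring the denser graph; with weakly dependent data the independent structure would win instead.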