MLNs

Markov Logic Networks (MLNs) are a framework for statistical-relational learning that combines elements of first-order logic with probabilistic graphical models. Introduced by Matthew Richardson and Pedro Domingos in 2006, MLNs attach weights to first-order logic formulas and interpret these as templates for a Markov network. The result is a probabilistic model over possible worlds in which the truth of ground predicates is constrained by the weighted formulas.

Formally, an MLN is a set of pairs (F_i, w_i), where F_i is a formula in first-order logic and w_i is a real-valued weight. Given a finite domain of constants, every ground instance of a formula becomes a ground clause, and these ground clauses define a Markov network with a feature for each grounding. The probability of a possible world x is p(x) = (1/Z) exp(sum_i w_i N_i(x)), where N_i(x) is the number of true groundings of F_i in x and Z is a normalizing constant.
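
To make this concrete, here is a minimal brute-force sketch in Python. The two formulas, their weights, and the two-constant domain are invented for illustration; the code enumerates every possible world of the toy MLN, counts the true groundings N_i(x), and computes p(x) exactly.

```python
from itertools import product
from math import exp

# A brute-force sketch of the MLN semantics (formulas, weights, and the
# two-constant domain below are invented for illustration):
#   F1 (w1 = 1.5): Smokes(x) => Cancer(x)
#   F2 (w2 = 1.1): Friends(x, y) ^ Smokes(x) => Smokes(y)
DOMAIN = ["A", "B"]
ATOMS = ([("Smokes", c) for c in DOMAIN]
         + [("Cancer", c) for c in DOMAIN]
         + [("Friends", c, d) for c in DOMAIN for d in DOMAIN])
WEIGHTS = [1.5, 1.1]

def n_true_groundings(world):
    """Return [N_1(x), N_2(x)], the true groundings of each formula in x."""
    n1 = sum((not world[("Smokes", c)]) or world[("Cancer", c)]
             for c in DOMAIN)
    n2 = sum((not (world[("Friends", c, d)] and world[("Smokes", c)]))
             or world[("Smokes", d)]
             for c in DOMAIN for d in DOMAIN)
    return [n1, n2]

def score(world):
    """Unnormalized weight of a world: exp(sum_i w_i N_i(x))."""
    return exp(sum(w * n for w, n in zip(WEIGHTS, n_true_groundings(world))))

# Enumerate all 2^8 possible worlds to get Z exactly (only feasible for toys).
worlds = [dict(zip(ATOMS, bits))
          for bits in product([False, True], repeat=len(ATOMS))]
Z = sum(score(x) for x in worlds)

x = {a: True for a in ATOMS}  # the world where every ground atom is true
print("p(x) =", score(x) / Z)
```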

Inference in MLNs seeks marginal or MAP probabilities for query predicates, which is generally intractable for large networks. Practitioners use approximate methods such as MC-SAT, Markov chain Monte Carlo, or belief propagation, often enhanced by lifted inference that exploits symmetries to operate on groups of groundings simultaneously.
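
As a minimal illustration of the MCMC approach, the following sketch runs a Gibbs sampler over a toy one-formula MLN (the formula, weight, evidence, and query are assumed for the example). Each non-evidence ground atom is resampled from its conditional distribution given the rest of the world.

```python
import random
from math import exp

# Gibbs sampling sketch for marginal inference. Assumed toy model: one
# formula Smokes(x) => Cancer(x) with weight 1.5 over domain {A, B}.
DOMAIN = ["A", "B"]
ATOMS = [("Smokes", c) for c in DOMAIN] + [("Cancer", c) for c in DOMAIN]
W = 1.5

def log_score(world):
    """Log of the unnormalized weight: w * (number of true groundings)."""
    return W * sum((not world[("Smokes", c)]) or world[("Cancer", c)]
                   for c in DOMAIN)

def gibbs_marginal(query, evidence, sweeps=20000, seed=0):
    rng = random.Random(seed)
    world = {a: evidence.get(a, rng.random() < 0.5) for a in ATOMS}
    free = [a for a in ATOMS if a not in evidence]
    hits = 0
    for _ in range(sweeps):
        for a in free:
            # Resample atom a from p(a | rest), a two-way softmax.
            world[a] = True
            s_true = log_score(world)
            world[a] = False
            s_false = log_score(world)
            world[a] = rng.random() < 1.0 / (1.0 + exp(s_false - s_true))
        hits += world[query]  # no burn-in, for brevity
    return hits / sweeps

print("P(Cancer(A) | Smokes(A)) ~",
      gibbs_marginal(("Cancer", "A"), {("Smokes", "A"): True}))
```

For this toy model the factors decouple and the exact answer is e^1.5 / (1 + e^1.5) ≈ 0.82, which the estimate should approach as the chain mixes.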

Learning in MLNs involves estimating the weights w_i from data, typically by maximizing the likelihood or pseudo-likelihood using gradient-based methods with regularization. Structure learning (selecting the formulas themselves) is possible but more computationally intensive.
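
The sketch below illustrates gradient-based weight learning on a domain small enough to compute the likelihood gradient exactly: the gradient of the log-likelihood with respect to w_i is N_i(data) - E_w[N_i], and an L2 penalty serves as the regularizer. The model and the single observed world are hypothetical; practical systems approximate the expectation by sampling or optimize the pseudo-likelihood instead.

```python
from itertools import product
from math import exp

# Gradient-based weight learning sketch, with the expected-count term
# computed exactly by enumeration (only feasible on a toy domain). The
# model and the observed world below are hypothetical.
# Model: one formula Smokes(x) => Cancer(x) over domain {A, B}.
DOMAIN = ["A", "B"]
ATOMS = [("Smokes", c) for c in DOMAIN] + [("Cancer", c) for c in DOMAIN]

def n_true(world):
    """N(x): number of true groundings of the formula in world x."""
    return sum((not world[("Smokes", c)]) or world[("Cancer", c)]
               for c in DOMAIN)

WORLDS = [dict(zip(ATOMS, bits))
          for bits in product([False, True], repeat=len(ATOMS))]

# One observed world: everyone smokes and everyone has cancer.
data = {a: True for a in ATOMS}

w, lr, l2 = 0.0, 0.1, 0.01
for _ in range(500):
    scores = [exp(w * n_true(x)) for x in WORLDS]
    Z = sum(scores)
    expected_n = sum(s * n_true(x) for s, x in zip(scores, WORLDS)) / Z
    # d/dw log p(data) = N(data) - E_w[N]; the L2 term shrinks w toward 0.
    grad = n_true(data) - expected_n - l2 * w
    w += lr * grad
print("learned weight:", round(w, 3))
```

Without the regularizer the weight would grow without bound here, since the observed world satisfies every grounding; the L2 term keeps it finite.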

MLNs have been applied to knowledge base completion, relation extraction, social-network analysis, and other domains where relational structure and uncertainty are both present. They offer a principled way to reason with uncertain, relational knowledge, balancing logical expressiveness with probabilistic inference.
