LagrangeDual

Lagrange duality is a framework in optimization that associates with a given primal problem a secondary problem, the dual, whose solution provides bounds on the primal optimum and, under suitable conditions, equals it. Consider a standard minimization problem: minimize f_0(x) subject to f_i(x) ≤ 0 for i = 1, ..., m and h_j(x) = 0 for j = 1, ..., p. The Lagrangian is defined as L(x, λ, ν) = f_0(x) + sum_i λ_i f_i(x) + sum_j ν_j h_j(x), where the multipliers λ_i ≥ 0 attach to the inequality constraints and the multipliers ν_j, attached to the equality constraints, are free in sign.
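
As a concrete illustration of these definitions, here is a minimal sketch for a made-up one-dimensional problem (minimize f_0(x) = x² subject to the single inequality f_1(x) = 1 − x ≤ 0, i.e. x ≥ 1; this toy problem is only an example, not part of the general setup above):

```python
# Toy example (hypothetical): minimize f_0(x) = x**2
# subject to f_1(x) = 1 - x <= 0, i.e. x >= 1.

def f0(x):
    return x**2

def f1(x):
    return 1.0 - x

def lagrangian(x, lam):
    # L(x, lambda) = f_0(x) + lambda * f_1(x), with lambda >= 0
    return f0(x) + lam * f1(x)

print(lagrangian(1.0, 2.0))  # L(1, 2) = 1 + 2*0 = 1
```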

From the Lagrangian, the dual function is obtained by minimizing over x: g(λ, ν) = inf_x L(x, λ, ν).
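
Continuing the toy example above, the dual function can be evaluated by minimizing the Lagrangian over x for each fixed multiplier; for that example the infimum is attained at x = λ/2, giving the closed form g(λ) = λ − λ²/4. A sketch of the numeric version (assuming SciPy is available):

```python
from scipy.optimize import minimize_scalar

def lagrangian(x, lam):
    # Toy example from above: f_0(x) = x**2, f_1(x) = 1 - x
    return x**2 + lam * (1.0 - x)

def dual_function(lam):
    # g(lambda) = inf_x L(x, lambda); here the minimizer is x = lambda / 2,
    # so g(lambda) = lambda - lambda**2 / 4.
    res = minimize_scalar(lambda x: lagrangian(x, lam))
    return res.fun

for lam in (0.0, 1.0, 2.0, 3.0):
    print(lam, dual_function(lam), lam - lam**2 / 4)  # numeric vs closed form
```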

The dual problem then seeks to maximize g(λ, ν) with respect to λ ≥ 0 and ν free. By construction, g(λ, ν) ≤ f_0* (the primal optimum), so the dual provides a lower bound on the primal objective in a minimization setting (weak duality). The difference between the primal and dual optima is the duality gap; it vanishes under strong duality, which holds for convex problems satisfying a constraint qualification such as Slater's condition.
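
Weak duality can be seen numerically in the toy example above: every value g(λ) with λ ≥ 0 lies below the primal optimum p* = 1 (attained at x* = 1), and the gap closes at λ* = 2, consistent with strong duality for this convex problem. A small check (the closed forms are worked out by hand and are specific to this made-up example):

```python
import numpy as np

# Toy example: minimize x**2 s.t. 1 - x <= 0.  Primal optimum p* = 1 at x* = 1.
p_star = 1.0

def dual_function(lam):
    # Closed form for this example: g(lambda) = lambda - lambda**2 / 4
    return lam - lam**2 / 4.0

for lam in np.linspace(0.0, 4.0, 9):
    g = dual_function(lam)
    assert g <= p_star + 1e-12          # weak duality: g(lambda) <= p*
    print(f"lambda={lam:.1f}  g={g:+.3f}  gap={p_star - g:.3f}")

# The best lower bound is at lambda* = 2, where g(2) = 1 = p* (zero duality gap).
```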
Key implications include that optimal Lagrange multipliers quantify the sensitivity of the optimal value to changes in constraint bounds, and that the dual function is concave in (λ, ν) even when the primal is not convex.
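
Both properties can be checked on the toy example. Perturbing the constraint to 1 − x ≤ u gives p*(u) = (1 − u)² near u = 0, so dp*/du = −2 = −λ*, and the closed-form dual g(λ) = λ − λ²/4 is concave. A sketch of these checks (the perturbed optimum is again worked out by hand for this example):

```python
# Sensitivity check for the toy example: minimize x**2 s.t. 1 - x <= u.
# For u near 0 the solution is x*(u) = 1 - u, so p*(u) = (1 - u)**2.

def p_star(u):
    return (1.0 - u) ** 2

lam_star = 2.0
eps = 1e-6
dp_du = (p_star(eps) - p_star(-eps)) / (2 * eps)
print(dp_du, -lam_star)   # both approximately -2: dp*/du = -lambda*

# Concavity of the dual g(lambda) = lambda - lambda**2 / 4: midpoint test.
g = lambda lam: lam - lam**2 / 4.0
a, b = 0.5, 3.0
assert g((a + b) / 2) >= 0.5 * (g(a) + g(b))   # holds for a concave function
```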
The KKT conditions tie primal and dual optima together: stationarity, primal feasibility, dual feasibility, and complementary slackness.
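
For the toy example, the four conditions can be verified directly at x* = 1, λ* = 2: the gradient of the Lagrangian vanishes, the point is feasible, the multiplier is nonnegative, and λ* f_1(x*) = 0. A sketch:

```python
# KKT check at the candidate primal/dual pair (x*, lambda*) = (1, 2)
# for the toy example: minimize x**2 s.t. f_1(x) = 1 - x <= 0.

x_star, lam_star = 1.0, 2.0

grad_f0 = 2 * x_star          # d/dx x**2
grad_f1 = -1.0                # d/dx (1 - x)
f1_val = 1.0 - x_star

stationarity    = abs(grad_f0 + lam_star * grad_f1) < 1e-12
primal_feasible = f1_val <= 1e-12
dual_feasible   = lam_star >= 0
comp_slack      = abs(lam_star * f1_val) < 1e-12

print(stationarity, primal_feasible, dual_feasible, comp_slack)  # all True
```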
Lagrange duality is foundational in convex optimization and is used for bound computation, problem decomposition, and algorithm design (subgradient methods, augmented Lagrangian methods, ADMM).
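
As one illustration of such algorithm design, here is a dual gradient ascent sketch for the toy example: for a fixed λ, minimize the Lagrangian over x (here available in closed form as x = λ/2), then take a projected gradient step on λ using the constraint value f_1(x) as the ascent direction. This is a minimal illustration with a hand-picked step size, not a production method:

```python
# Dual gradient ascent on the toy example: minimize x**2 s.t. 1 - x <= 0.
# For fixed lambda, argmin_x L(x, lambda) = lambda / 2 in closed form.

lam = 0.0           # start from lambda = 0
step = 0.5          # fixed step size (chosen by hand for this example)

for k in range(50):
    x = lam / 2.0                      # primal minimizer of the Lagrangian
    grad = 1.0 - x                     # f_1(x) is a gradient of g at lambda
    lam = max(0.0, lam + step * grad)  # ascent step, projected onto lambda >= 0

print(lam, lam / 2.0)   # converges to lambda* = 2, x* = 1
```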
It also appears in various applications across operations research, machine learning, and economics.