
Extragradient

Extragradient methods are iterative algorithms for solving variational inequalities and related equilibrium problems. The basic extragradient method was introduced by G. M. Korpelevich in 1976 to address variational inequalities with monotone operators. The problem, in its standard form, is to find x in a closed convex set C such that ⟨F(x), y − x⟩ ≥ 0 for all y in C, where F is a monotone and Lipschitz continuous mapping. This framework encompasses convex-concave saddle point problems and various equilibrium models.
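
For concreteness, a convex-concave saddle-point problem min_x max_y f(x, y) fits this framework by taking F(x, y) = (∇_x f(x, y), −∇_y f(x, y)). A minimal sketch in Python, where the bilinear objective f(x, y) = x*y is an illustrative assumption rather than an example from the original text:

    import numpy as np

    # Saddle-point problem min_x max_y f(x, y) with f(x, y) = x * y.
    # The associated VI operator stacks the gradient in x with the
    # negative gradient in y: F(x, y) = (df/dx, -df/dy) = (y, -x).
    def F(z):
        x, y = z
        return np.array([y, -x])

    # F is monotone: <F(z1) - F(z2), z1 - z2> = 0 for all z1, z2
    # (F is a 90-degree rotation), but it is not strongly monotone.
    z1, z2 = np.array([1.0, 2.0]), np.array([-0.5, 0.3])
    print(np.dot(F(z1) - F(z2), z1 - z2))  # prints 0.0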

In the typical extragradient algorithm, starting from an initial point x_0 in C, the method first computes a lookahead point through a projected step: y_k = P_C(x_k - gamma F(x_k)). Then it updates x_{k+1} by another projection, using the operator evaluated at the lookahead point: x_{k+1} = P_C(x_k - gamma F(y_k)). The step size gamma is usually chosen in (0, 1/L) when F is L-Lipschitz.
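
A minimal sketch of the iteration in Python follows; the Euclidean-ball constraint set, the bilinear operator, and the concrete step size are illustrative assumptions, not part of the method itself:

    import numpy as np

    def project_ball(z, radius=1.0):
        # Euclidean projection onto C = {z : ||z|| <= radius}.
        norm = np.linalg.norm(z)
        return z if norm <= radius else z * (radius / norm)

    def extragradient(F, project, x0, gamma, iters=500):
        x = x0
        for _ in range(iters):
            # Lookahead step: y_k = P_C(x_k - gamma F(x_k)).
            y = project(x - gamma * F(x))
            # Update step reuses x_k but evaluates F at the lookahead
            # point: x_{k+1} = P_C(x_k - gamma F(y_k)).
            x = project(x - gamma * F(y))
        return x

    # Illustrative monotone operator F(x, y) = (y, -x); it is
    # 1-Lipschitz, so gamma is chosen inside (0, 1/L) = (0, 1).
    F = lambda z: np.array([z[1], -z[0]])
    print(extragradient(F, project_ball, np.array([0.9, -0.4]), gamma=0.5))
    # The iterates approach the unique solution (0, 0).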

The extragradient method is particularly effective when F is monotone but not strongly monotone, a situation where simple gradient projection methods may fail to converge. It guarantees convergence under standard assumptions and provides a robust approach for solving variational inequalities, saddle-point problems, and equilibrium models. Numerous variants exist, including stochastic extragradient methods for noisy or data-driven problems, and more recent optimistic-gradient and mirror-descent extensions that improve performance in large-scale or online settings.
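
The contrast with a plain gradient step can be made concrete on the bilinear example used above. In this hedged sketch (the operator and step size are again illustrative assumptions), the single-call iteration spirals away from the solution while the extragradient iteration contracts toward it:

    import numpy as np

    # Monotone but not strongly monotone operator: a 90-degree rotation.
    F = lambda z: np.array([z[1], -z[0]])
    gamma = 0.1

    # Plain gradient iteration: the norm grows by sqrt(1 + gamma^2)
    # per step, so the iterates spiral outward and never converge.
    z = np.array([1.0, 0.0])
    for _ in range(100):
        z = z - gamma * F(z)
    print(np.linalg.norm(z))  # roughly 1.64 and still growing

    # Extragradient iteration: the lookahead evaluation makes the map
    # a contraction for small gamma, so the iterates converge to 0.
    z = np.array([1.0, 0.0])
    for _ in range(100):
        y = z - gamma * F(z)   # lookahead step
        z = z - gamma * F(y)   # update with F at the lookahead point
    print(np.linalg.norm(z))  # roughly 0.61 and shrinking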

Applications include network equilibrium computations, game-theoretic models, convex-concave min-max problems, and constrained optimization tasks arising in economics, engineering, and machine learning.