Overrelaxation

Overrelaxation is a technique used to accelerate the convergence of certain stationary iterative methods for solving linear systems and discretized partial differential equations. It works by introducing a relaxation parameter, usually denoted by omega (ω), that scales the update step to either dampen or amplify the new information obtained in each iteration.

In the context of solving Ax = b, a common setting is to start from a basic iterative method such as Gauss-Seidel, which produces an updated vector x^{k+1}_{GS}. The overrelaxation update then combines the old iterate x^k with the new Gauss-Seidel update: x^{k+1} = (1 − ω) x^k + ω x^{k+1}_{GS}. When ω = 1, this reduces to the standard Gauss-Seidel method; when ω > 1, the method applies overrelaxation, which can speed up convergence; when ω < 1, it is known as underrelaxation, which tends to slow the process but can improve stability for some problems.
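
To make the update rule concrete, here is a minimal NumPy sketch of the component-wise form of this iteration (relaxed Gauss-Seidel); the function name, tolerance, and stopping test are illustrative choices rather than anything prescribed above.

```python
import numpy as np

def sor(A, b, omega=1.5, x0=None, tol=1e-10, max_iter=10_000):
    """Relaxed Gauss-Seidel (SOR) iteration for Ax = b (illustrative sketch)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel value for component i: already-updated entries
            # to the left of the diagonal, old entries to the right.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x_gs = (b[i] - sigma) / A[i, i]
            # Blend the old iterate with the new Gauss-Seidel value.
            x[i] = (1 - omega) * x_old[i] + omega * x_gs
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x, k + 1  # converged: solution and iteration count
    return x, max_iter
```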

Convergence depends on problem structure. For symmetric positive definite matrices, the overrelaxed Gauss-Seidel method (more commonly known as the successive over-relaxation, or SOR, method) converges for 0 < ω < 2. More generally, both convergence and its rate depend on the spectral properties of the iteration matrix, and the optimal ω is the value that minimizes its spectral radius. However, the optimal value is problem-dependent and often found empirically rather than analytically.
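
One textbook special case where the optimum is known analytically: for consistently ordered matrices (e.g. the standard five-point Poisson stencil), Young's SOR theory gives ω_opt = 2 / (1 + sqrt(1 − ρ_J²)), where ρ_J is the spectral radius of the Jacobi iteration matrix. The dense eigenvalue computation in the sketch below is purely illustrative; in practice ρ_J is usually estimated rather than computed exactly.

```python
import numpy as np

def optimal_omega(A):
    """Young's optimal SOR parameter from the Jacobi spectral radius.

    Only meaningful for consistently ordered matrices with a convergent
    Jacobi iteration (rho_J < 1); elsewhere treat it as a heuristic.
    """
    D = np.diag(np.diag(A))
    J = np.eye(len(A)) - np.linalg.solve(D, A)  # Jacobi iteration matrix I - D^{-1} A
    rho_j = max(abs(np.linalg.eigvals(J)))      # spectral radius of J
    return 2.0 / (1.0 + np.sqrt(1.0 - rho_j ** 2))
```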

Applications and limitations: Overrelaxation is widely used to accelerate convergence when solving large sparse linear systems arising from discretized elliptic PDEs, such as the Poisson equation, and in other scientific computing contexts. Choosing ω requires experimentation; too large a value can cause divergence, while a well-chosen ω can substantially reduce iteration counts.
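
As an illustration of this sensitivity, the snippet below reuses the sor() sketch from above on a small 1D Poisson-type model problem (an assumed test case, not one taken from this page) and sweeps a few values of ω to compare iteration counts.

```python
import numpy as np

# 1D Poisson-like model problem: tridiagonal matrix with 2 on the
# diagonal and -1 on the off-diagonals (hypothetical test data).
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n) / (n + 1) ** 2

for omega in (1.0, 1.5, 1.9):
    _, iters = sor(A, b, omega=omega)   # sor() as defined in the sketch above
    print(f"omega = {omega:.2f}: {iters} iterations")
```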
