PowerIteration

Power iteration, or the power method, is a simple iterative algorithm in numerical linear algebra for approximating the dominant eigenvalue and eigenvector of a square matrix A. It begins with a nonzero vector x_0 and repeatedly applies A, normalizing after each multiplication: x_k = A x_{k-1} / ||A x_{k-1}||. Under suitable conditions, x_k converges (up to sign) to an eigenvector associated with the eigenvalue of largest magnitude, λ_1; the Rayleigh quotient μ_k = x_k^T A x_k then converges to λ_1, while the norm ||A x_k|| converges to |λ_1|.
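
As a concrete illustration of the update x_k = A x_{k-1} / ||A x_{k-1}||, here is a minimal NumPy sketch; the function name, the random starting vector, and the stopping test are illustrative choices rather than part of the description above.

```python
import numpy as np

def power_iteration(A, num_iters=1000, tol=1e-10, rng=None):
    """Approximate the dominant eigenvalue and eigenvector of a square matrix A."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(A.shape[0])   # random start: almost surely has a component
    x /= np.linalg.norm(x)                # along the dominant eigenvector
    eigval = 0.0
    for _ in range(num_iters):
        y = A @ x                         # one matrix-vector product per iteration
        x = y / np.linalg.norm(y)         # normalize to keep the iterate well scaled
        mu = x @ A @ x                    # Rayleigh quotient (x has unit norm)
        if abs(mu - eigval) < tol:        # stop once the eigenvalue estimate settles
            return mu, x
        eigval = mu
    return eigval, x

# Example: dominant eigenvalue of a small symmetric matrix, exact value (7 + sqrt(5))/2 ≈ 4.618
A = np.array([[4.0, 1.0], [1.0, 3.0]])
lam, v = power_iteration(A)
print(lam, v)
```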

Requirements and behavior: A should have a unique dominant eigenvalue in magnitude and, typically, be diagonalizable for clean convergence; the starting vector x_0 must also have a nonzero component along the dominant eigenvector. If the dominant eigenvalue is repeated, or other eigenvalues share its magnitude (for example, several eigenvalues on the unit circle of a periodic stochastic matrix), convergence may be slow or fail. Variants address these issues: shift-and-invert power iteration applies (A − σI)^-1 to accelerate convergence toward a chosen eigenvalue near σ; inverse power iteration iterates with A^-1 to target the eigenvalue nearest zero (or, after an appropriate shift, the one nearest the shift); deflation can compute additional eigenvectors after the dominant one is found.
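
A hedged sketch of the shift-and-invert variant mentioned above: rather than forming (A − σI)^-1 explicitly, A − σI is factored once and a linear system is solved at each step. The function name and the SciPy LU routines are this example's choices.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def shift_invert_iteration(A, sigma, num_iters=200, tol=1e-10, seed=0):
    """Power iteration on (A - sigma*I)^-1, converging to the eigenvalue of A closest to sigma."""
    n = A.shape[0]
    lu, piv = lu_factor(A - sigma * np.eye(n))   # factor once, reuse every iteration
    x = np.random.default_rng(seed).standard_normal(n)
    x /= np.linalg.norm(x)
    prev = np.inf
    for _ in range(num_iters):
        y = lu_solve((lu, piv), x)               # y = (A - sigma*I)^-1 x, without forming the inverse
        x = y / np.linalg.norm(y)
        mu = x @ A @ x                           # Rayleigh quotient with respect to A itself
        if abs(mu - prev) < tol:
            break
        prev = mu
    return mu, x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
print(shift_invert_iteration(A, sigma=2.0)[0])   # ≈ 2.382, the eigenvalue of A nearest 2
```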

Complexity and convergence: for dense matrices, each iteration costs O(n^2); for sparse matrices, it costs O(nnz(A)). Convergence is linear, with rate roughly |λ_2/λ_1|, where λ_2 is the second-largest eigenvalue in magnitude.
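
To make the rate concrete (a back-of-the-envelope sketch; the helper name and the tolerance are assumptions of this example): with linear convergence at rate |λ_2/λ_1|, reducing the error to a tolerance tol takes on the order of log(tol) / log(|λ_2/λ_1|) iterations.

```python
import math

def iterations_needed(ratio, tol=1e-8):
    """Rough iteration count for linear convergence at rate ratio = |lambda_2 / lambda_1|."""
    return math.ceil(math.log(tol) / math.log(ratio))

print(iterations_needed(0.5))    # 27: a large spectral gap means fast convergence
print(iterations_needed(0.99))   # 1833: a small gap makes power iteration very slow
```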

Applications: widely used to compute the leading eigenvector in data analysis, in basic forms of Principal Component Analysis, and in algorithms like PageRank that rely on the dominant eigenvector of a stochastic matrix.
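
To make the PageRank connection concrete, here is a hedged sketch that runs power iteration on a damped, column-stochastic link matrix; the toy graph, the damping factor 0.85, and the function name are illustrative assumptions, not part of the text above.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-12, max_iters=200):
    """Power iteration on the damped transition matrix of a link graph.
    adj[i, j] = 1 means page i links to page j."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Row-normalize; pages with no outgoing links spread their weight uniformly.
    P = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg[:, None], 1),
                 1.0 / n).T                      # transpose -> column-stochastic
    r = np.full(n, 1.0 / n)                      # start from the uniform distribution
    for _ in range(max_iters):
        r_new = damping * (P @ r) + (1 - damping) / n   # damped power-iteration step
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
    return r

# Tiny 3-page example: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [1, 1, 0]], dtype=float)
print(pagerank(adj))   # dominant eigenvector of the damped matrix; entries sum to 1
```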
