
Gauss–Newton

Gauss–Newton is an iterative optimization algorithm for solving non‑linear least‑squares problems, where the objective is to minimize the sum of squared residuals between observed data and a model function. The method is named after Carl Friedrich Gauss and Isaac Newton; it is a modification of Newton's method that Gauss first described in 1809, and it builds on the linear least‑squares solution by approximating the Hessian matrix with the product of the Jacobian transpose and the Jacobian.

Given a vector‑valued model function f(θ) that depends on parameters θ, and observations y, the residual vector is r(θ) = y − f(θ). The Gauss–Newton step updates the parameters by solving the normal equations JᵀJ Δθ = Jᵀr, where J is the Jacobian matrix of partial derivatives of the model f with respect to the parameters. The new estimate is θ_{k+1} = θ_k + Δθ. The full Hessian of the sum‑of‑squares objective is JᵀJ plus a term involving the residuals and their second derivatives; Gauss–Newton neglects that second‑order term, which makes the algorithm computationally cheaper than full Newton methods while retaining fast convergence near the solution under certain regularity conditions (quadratic when the residuals at the optimum vanish).
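
To make the update concrete, the following is a minimal sketch of the iteration in Python/NumPy, assuming the caller supplies the residual function r(θ) = y − f(θ) and the model Jacobian J = ∂f/∂θ; the names (gauss_newton, residual_fn, jacobian_fn) are illustrative, not from any particular library.

```python
import numpy as np

def gauss_newton(residual_fn, jacobian_fn, theta0, max_iter=50, tol=1e-10):
    """Minimal Gauss-Newton sketch.

    residual_fn(theta) -> r = y - f(theta)   (residual vector)
    jacobian_fn(theta) -> J = df/dtheta      (Jacobian of the model f)
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        r = residual_fn(theta)
        J = jacobian_fn(theta)
        # Solve the normal equations J^T J dtheta = J^T r.  A least-squares
        # solve of J dtheta ~= r is equivalent and avoids explicitly forming
        # the possibly ill-conditioned matrix J^T J.
        dtheta, *_ = np.linalg.lstsq(J, r, rcond=None)
        theta = theta + dtheta
        if np.linalg.norm(dtheta) < tol * (1.0 + np.linalg.norm(theta)):
            break
    return theta

# Illustrative use: fit y = a * exp(-b * x) to noisy data.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 40)
y = 2.5 * np.exp(-1.3 * x) + 0.01 * rng.standard_normal(x.size)

def residuals(theta):
    a, b = theta
    return y - a * np.exp(-b * x)

def jacobian(theta):
    a, b = theta
    return np.column_stack([np.exp(-b * x),            # df/da
                            -a * x * np.exp(-b * x)])  # df/db

theta_hat = gauss_newton(residuals, jacobian, theta0=[1.0, 1.0])
print(theta_hat)  # fitted (a, b), close to (2.5, 1.3)
```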

The method converges rapidly when the residuals are small and the Jacobian is well‑conditioned, but it may diverge or converge slowly for highly non‑linear models or poorly scaled parameters. Variants such as the Levenberg–Marquardt algorithm introduce damping to improve robustness.
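
As a rough illustration of the damping idea only (a full Levenberg–Marquardt implementation also adapts the damping parameter and accepts or rejects steps), the normal-equations solve in the sketch above could be replaced by a damped step along these lines; the damped_step name is illustrative.

```python
import numpy as np

def damped_step(J, r, lam):
    """Sketch of a damped (Levenberg-Marquardt-style) step.

    Solves (J^T J + lam * I) dtheta = J^T r.  A larger lam shortens the step
    and biases it toward steepest descent; lam -> 0 recovers Gauss-Newton.
    """
    n = J.shape[1]
    A = J.T @ J + lam * np.eye(n)
    return np.linalg.solve(A, J.T @ r)
```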

Gauss–Newton is widely used in fields including computer vision (bundle adjustment), parameter estimation in engineering, and curve fitting in scientific data analysis. Its simplicity and effectiveness make it a standard tool for solving many practical non‑linear least‑squares problems.
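
For everyday curve fitting, library solvers are usually preferable to a hand-written loop; for example, SciPy's scipy.optimize.least_squares provides trust-region and Levenberg–Marquardt solvers for exactly this kind of problem. The snippet below repeats the exponential fit from the earlier sketch with that routine (the data and model are illustrative).

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 40)
y = 2.5 * np.exp(-1.3 * x) + 0.01 * rng.standard_normal(x.size)

def residuals(theta):
    a, b = theta
    return y - a * np.exp(-b * x)

# method='lm' selects a Levenberg-Marquardt solver (MINPACK);
# the default trust-region method 'trf' also works here.
result = least_squares(residuals, x0=[1.0, 1.0], method='lm')
print(result.x)  # fitted (a, b), close to (2.5, 1.3)
```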