Rootfinding

Rootfinding, or zero-finding, is the problem of locating values x for which a given function f satisfies f(x) = 0. In one dimension this means finding real roots of a real-valued function; in several dimensions it extends to solving a system F(x) = 0.

A root is guaranteed to exist in an interval if f is continuous on that interval and f(a) and f(b) have opposite signs (the intermediate value theorem). For multiple roots or non-smooth functions, existence and uniqueness can be more nuanced and may require additional analysis or assumptions.
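
The sign-change test is easy to state in code. A minimal Python sketch (the cosine test function and interval endpoints are illustrative choices, not from the text above):

```python
import math

def brackets_root(f, a, b):
    """Return True if f(a) and f(b) have opposite signs, so a
    continuous f has at least one root in [a, b] (IVT)."""
    return f(a) * f(b) < 0

# cos changes sign on [0, 2] (root at pi/2), but not on [0, 1].
print(brackets_root(math.cos, 0.0, 2.0))  # True
print(brackets_root(math.cos, 0.0, 1.0))  # False
```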

Common methods

- Bisection: a robust bracketing method that requires a sign change over an interval. It converges linearly and is simple to implement but may be slow.

- False position (regula falsi) and variants such as Illinois or Pegasus: keep a sign change but adjust the secant-based step to avoid stagnation, improving practical convergence.

- Secant method: derivative-free, uses two initial points to form a secant line. It has superlinear convergence but can fail if the function is not well-behaved.

- Newton-Raphson: uses f and its derivative f′, with updates x_{n+1} = x_n − f(x_n)/f′(x_n). It enjoys quadratic convergence near a simple root but requires a good initial guess and a nonzero derivative.

- Fixed-point iteration: rewrite f(x) = 0 as x = g(x) and iterate x_{n+1} = g(x_n). Convergence is local and requires |g′(r)| < 1 near the root r.

- Brent’s method: a robust hybrid that combines bisection, secant, and inverse quadratic interpolation for reliable performance.
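
To make the contrast between these methods concrete, here is a minimal Python sketch of bisection, Newton-Raphson, and fixed-point iteration; the test function x² − 2 (root √2), the rewrite x = cos(x), the starting points, and the tolerances are all illustrative assumptions, not from the text above:

```python
import math

def bisection(f, a, b, tol=1e-12):
    """Bracketing: halve [a, b] while preserving the sign change."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m            # root lies in [a, m]
        else:
            a, fa = m, fm    # root lies in [m, b]
    return 0.5 * (a + b)

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{n+1} = g(x_n); converges locally when |g'(root)| < 1."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: x * x - 2.0                  # root at sqrt(2)
print(bisection(f, 1.0, 2.0))              # ~40 halvings of [1, 2]
print(newton(f, lambda x: 2.0 * x, 1.5))   # a handful of iterations
print(fixed_point(math.cos, 1.0))          # x = cos(x); |g'| ≈ 0.67 < 1 at the root
```

Bisection needs roughly 40 halvings to shrink a unit interval below 1e-12, while Newton typically reaches comparable accuracy in a handful of iterations from a reasonable starting guess, illustrating the linear-versus-quadratic convergence noted above.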

Extensions and considerations

For systems of equations, multidimensional Newton or quasi-Newton methods such as Broyden’s method are common.
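
A multidimensional Newton step solves a linear system with the Jacobian at each iteration. A minimal NumPy sketch, using an illustrative 2×2 system (the unit circle intersected with the line y = x) chosen for this example:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Multidimensional Newton: solve J(x_n) d = -F(x_n), set x_{n+1} = x_n + d."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = np.linalg.solve(J(x), -F(x))
        x += d
        if np.linalg.norm(d) < tol:
            break
    return x

# Illustrative system: x^2 + y^2 = 1 together with x = y.
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(newton_system(F, J, [1.0, 0.5]))  # both components near 1/sqrt(2)
```

Broyden’s method replaces the exact Jacobian here with a cheap rank-one update, which matters when J is expensive or unavailable.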

Practitioners choose stopping criteria based on the residual, the step size, and predetermined tolerances, and they consider function evaluation cost, behavior near multiple roots, and potential divergence.
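
A combined stopping rule of this kind might look as follows in Python; the secant iteration, the test cubic, and the tolerances are illustrative choices for this sketch:

```python
def secant(f, x0, x1, f_tol=1e-12, x_tol=1e-12, max_iter=100):
    """Secant iteration with a combined stopping rule:
    stop when the residual |f(x)| or the step size falls below tolerance."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:             # flat secant line: cannot take a step
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        step = abs(x2 - x1)
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
        if abs(f1) < f_tol or step < x_tol:
            break
    return x1

print(secant(lambda x: x ** 3 - x - 2.0, 1.0, 2.0))  # near 1.5213797
```

Checking both the residual and the step guards against the failure modes mentioned above: near a multiple root the residual can be tiny while the iterate is still far off, and a stalled iteration shows up as a step that stops shrinking.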