Roundoff error

Roundoff error is the discrepancy that occurs when representing real numbers with finite precision. In computing and numerical analysis, roundoff refers to the errors that arise from rounding operands and results to the available number of digits or bits. These errors are inherent to digital representations and can affect the accuracy of numerical computations.

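A minimal illustration in Python (assuming IEEE 754 double precision, which CPython floats use): neither 0.1 nor 0.2 is exactly representable in binary, so their computed sum differs slightly from 0.3.

```python
# 0.1 and 0.2 have no exact binary floating-point representation,
# so each literal is already rounded, and the sum rounds once more.
s = 0.1 + 0.2
print(s)             # 0.30000000000000004
print(s == 0.3)      # False
print(abs(s - 0.3))  # the roundoff discrepancy, roughly 5.6e-17
```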
Rounding modes specify how to map a real number to a representable value. Common modes include round to nearest, round toward zero, round toward positive infinity, and round toward negative infinity. In many floating-point systems, notably IEEE 754, round to nearest with ties to even (banker's rounding) is the default. Decimal arithmetic often uses round half up or round half away from zero. The choice of mode can influence the bias and stability of computations.

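These modes can be explored with Python's decimal module; the sketch below quantizes a few sample values to integers under several of its rounding constants (the values chosen are only illustrative).

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP, ROUND_DOWN, ROUND_CEILING, ROUND_FLOOR

one = Decimal("1")

# Ties to even (banker's rounding), the IEEE 754 default, sends 2.5 down to 2.
print(Decimal("2.5").quantize(one, rounding=ROUND_HALF_EVEN))  # 2
# decimal's ROUND_HALF_UP resolves ties away from zero, so 2.5 goes up to 3.
print(Decimal("2.5").quantize(one, rounding=ROUND_HALF_UP))    # 3

# Directed modes applied to -2.7: toward zero, toward +infinity, toward -infinity.
print(Decimal("-2.7").quantize(one, rounding=ROUND_DOWN))      # -2
print(Decimal("-2.7").quantize(one, rounding=ROUND_CEILING))   # -2
print(Decimal("-2.7").quantize(one, rounding=ROUND_FLOOR))     # -3
```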
Key concepts related to roundoff include the unit in the last place (ULP), which is the spacing between adjacent representable values, and machine epsilon, the upper bound on relative rounding error for a given format. Roundoff error is the difference between the exact result of a calculation and the value produced by finite-precision arithmetic. Such errors can accumulate over sequences of operations and affect the reliability of numerical algorithms.

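For IEEE 754 double precision, the format behind Python floats, both quantities are directly accessible; a short sketch (math.ulp requires Python 3.9 or later):

```python
import math
import sys

# Machine epsilon for binary64: the spacing between 1.0 and the next larger
# representable float, commonly quoted as a bound on relative rounding error.
print(sys.float_info.epsilon)  # 2.220446049250313e-16
print(math.ulp(1.0))           # same value: the ULP at 1.0

# The ULP is an absolute spacing, so it grows with magnitude.
print(math.ulp(1e16))          # 2.0
print(1e16 + 1.0 == 1e16)      # True: the added 1.0 is lost to rounding
```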
Roundoff is distinct from truncation error, which arises when a mathematical expression is approximated by omitting terms or digits. In floating-point computation, both rounding and truncation contribute to total error, but rounding is the deliberate mapping to a limited set of representable values.

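One way to see the distinction (an illustrative sketch; the function name and term count are arbitrary): truncating the Taylor series for exp introduces an error that no amount of arithmetic precision removes, while the floating-point operations inside the loop contribute a far smaller roundoff error.

```python
import math

def exp_taylor(x, n_terms):
    """Approximate exp(x) from the first n_terms of its Taylor series."""
    term, total = 1.0, 1.0
    for k in range(1, n_terms):
        term *= x / k   # builds x**k / k! incrementally; each operation rounds
        total += term   # each addition contributes a tiny roundoff error
    return total

approx = exp_taylor(1.0, 6)
# The gap to the true value is dominated by the omitted series terms
# (truncation error, about 1.6e-3), not by roundoff.
print(math.exp(1.0) - approx)
```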
To mitigate roundoff, techniques include using higher precision, selecting appropriate rounding modes, applying compensated algorithms (such as Kahan summation), or employing arbitrary-precision arithmetic where necessary.

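A sketch of compensated summation (the function names here are illustrative, not a standard API): compared with a plain running sum, Kahan's algorithm carries a correction term that recovers the low-order bits lost at each addition. Python's standard library also offers math.fsum for an accurately rounded sum.

```python
def naive_sum(values):
    total = 0.0
    for x in values:
        total += x                 # each addition rounds; errors accumulate
    return total

def kahan_sum(values):
    total = 0.0
    compensation = 0.0             # running estimate of the lost low-order bits
    for x in values:
        y = x - compensation       # apply the correction from the previous step
        t = total + y              # low-order digits of y may be lost here
        compensation = (t - total) - y   # recover what was just lost
        total = t
    return total

data = [0.1] * 1_000_000           # exact sum of the rounded 0.1 is ~100000.0
print(naive_sum(data) - 100000.0)  # noticeably nonzero drift
print(kahan_sum(data) - 100000.0)  # essentially zero
```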