Low-discrepancy

Low-discrepancy refers to a property of certain deterministic sequences designed to fill the unit hypercube more uniformly than typical random samples. Such sequences are used in quasi-Monte Carlo methods to approximate integrals and perform sampling in multiple dimensions. The goal is to minimize discrepancy, a quantitative measure of how far the empirical distribution of sample points is from the uniform distribution.

Discrepancy is commonly measured by the star discrepancy, D*_N, which captures the largest difference between the fraction of points falling inside axis-aligned boxes [0,u) and the volume of those boxes. The Koksma-Hlawka inequality relates the integration error of a function to the product of the function’s total variation and the sequence’s discrepancy, guiding the use of low-discrepancy points in numerical integration. In many constructions, the discrepancy scales as O((log N)^s / N), improving over the O(N^(-1/2)) rate typical of random sampling, at least for moderate dimensions and smooth integrands.

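In symbols (a standard formulation, stated here for reference; x_1, ..., x_N denote the points and s the dimension):

    D^*_N = \sup_{u \in [0,1]^s} \left| \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}_{[0,u)}(x_i) - \prod_{j=1}^{s} u_j \right|

    \left| \frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{[0,1]^s} f(x)\, dx \right| \le V_{HK}(f)\, D^*_N

where V_{HK}(f) is the total variation of f in the sense of Hardy and Krause.
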
Notable examples of low-discrepancy sequences include Halton sequences, Sobol sequences, and Niederreiter sequences, as well as Faure sequences and various digital nets. Many practical implementations employ scrambling or randomization to preserve uniformity while enabling error estimation and statistical analysis.

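As a concrete illustration, a Halton sequence can be built by combining van der Corput radical inverses in distinct prime bases, one base per coordinate. The following is a minimal Python sketch; the function names are illustrative, not taken from any particular library.

    def radical_inverse(i, base):
        # Van der Corput radical inverse: reflect the base-b digits of i
        # about the radix point, e.g. i=6 (110 in base 2) -> 0.011 = 0.375.
        inv, f = 0.0, 1.0 / base
        while i > 0:
            i, digit = divmod(i, base)
            inv += digit * f
            f /= base
        return inv

    def halton(n, bases=(2, 3, 5)):
        # First n Halton points in len(bases) dimensions; the bases should be
        # distinct primes (2, 3, 5, 7, ...), one per coordinate.
        return [[radical_inverse(i, b) for b in bases] for i in range(1, n + 1)]

    # Example: 8 points spread fairly evenly over the 3-dimensional unit cube.
    for point in halton(8):
        print(["%.3f" % c for c in point])

Scrambling or randomization, as mentioned above, would be layered on top of such a generator; ready-made scrambled Sobol and Halton generators are available in common libraries, for example SciPy's scipy.stats.qmc module.
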
Applications span numerical integration in finance, physics, and engineering; computer graphics for global illumination and rendering; and broader computational tasks requiring efficient multi-dimensional sampling. While powerful in many settings, the advantage of low-discrepancy methods can diminish as dimensionality grows or integrands become highly irregular.

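To make the integration use case concrete, the sketch below compares a plain Monte Carlo estimate with a quasi-Monte Carlo estimate of a smooth two-dimensional integral. It assumes NumPy and a SciPy version that ships the scipy.stats.qmc module (1.7 or later); the integrand is an arbitrary smooth test function chosen only for illustration.

    import numpy as np
    from scipy.stats import qmc

    def f(x):
        # Smooth test integrand on [0,1]^2; its exact integral is sin(1)^2.
        return np.cos(x[:, 0]) * np.cos(x[:, 1])

    exact = np.sin(1.0) ** 2
    n = 4096

    mc_points = np.random.default_rng(0).random((n, 2))  # pseudo-random sample
    qmc_points = qmc.Halton(d=2, seed=0).random(n)       # low-discrepancy sample

    print("plain MC error:   ", abs(f(mc_points).mean() - exact))
    print("QMC (Halton) error:", abs(f(qmc_points).mean() - exact))

For a smooth, low-dimensional integrand like this one, the quasi-Monte Carlo estimate is typically noticeably closer to the exact value; as noted above, that edge can shrink as the dimension grows or the integrand becomes irregular.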