Derandomizations

Derandomizations refer to the study and construction of deterministic algorithms or procedures that replicate, with similar efficiency, the results typically obtained by randomized algorithms. The goal is to minimize or remove the use of randomness while preserving correctness and performance in a given computational model.

The core approach to derandomization lies in the hardness vs. randomness paradigm. If there exist functions that are hard to compute with small circuits, these hard functions can be used to build pseudorandom generators that fool randomized algorithms. If such a generator is effective, a randomized algorithm can be replaced by a deterministic one that runs the original algorithm on the generator's output for every possible short seed and takes a majority vote.
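
This enumeration can be made concrete. The following is a minimal sketch, assuming a hypothetical generator prg(seed, num_coins) that stretches a seed (a tuple of bits) into num_coins pseudorandom coins, and a two-sided-error decision procedure randomized_decide(x, coins); neither name comes from an existing library.

```python
from itertools import product

def derandomize(randomized_decide, prg, seed_len, num_coins, x):
    """Deterministically simulate a two-sided-error randomized decision
    procedure: run it on the pseudorandom coins produced from every
    possible seed and take the majority vote.
    """
    accept = 0
    total = 0
    for seed in product((0, 1), repeat=seed_len):   # all 2**seed_len seeds
        coins = prg(seed, num_coins)                # hypothetical PRG
        accept += 1 if randomized_decide(x, coins) else 0
        total += 1
    # If the PRG fools the procedure, the majority vote over pseudorandom
    # coins matches the majority over truly random coins, which is correct
    # with probability greater than 2/3.
    return 2 * accept > total
```

The simulation pays a factor of 2**seed_len, so a generator whose seed length is logarithmic in the input size keeps the deterministic simulation polynomial-time.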

Related tools include randomness extractors, which convert weak sources of randomness into nearly uniform bits, and various constructions of pseudorandom generators (PRGs) and derandomization schemes tailored to specific models, such as space-bounded computation.
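
Seeded extractors for general weak sources are beyond a few lines, but the classical von Neumann trick, a deterministic toy case for one restricted source (independent flips of a single biased coin), illustrates the goal; the sketch below is only that special case, not a construction from the extractor literature.

```python
def von_neumann_extract(bits):
    """Turn independent flips of a coin with unknown bias p into unbiased
    bits: scan non-overlapping pairs, output 0 for (0, 1) and 1 for (1, 0),
    and discard (0, 0) and (1, 1).  Both kept patterns occur with
    probability p*(1 - p), so each output bit is fair.
    """
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)          # (0, 1) -> 0, (1, 0) -> 1
    return out

# A heavily biased stream still yields fair (if fewer) bits:
# von_neumann_extract([1, 1, 1, 0, 1, 1, 0, 1]) == [1, 0]
```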

Important results include unconditional derandomizations for restricted models. For space-bounded computation, Nisan built PRGs that fool randomized logspace machines using only a polylogarithmic-length seed. The broader hardness vs. randomness program shows that if certain functions are hard for small circuits, then efficient PRGs exist, implying conditional derandomizations of broad classes such as BPP. Subsequent developments, including extractor-based and complexity-theoretic constructions, have pushed toward more general derandomization results, though many questions remain open.
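
The recursive structure behind Nisan's space-bounded generator is short enough to sketch. The version below is a toy illustration: blocks are encoded as field elements modulo a prime p, the hash functions are random affine maps (a pairwise-independent family), and the parameters are arbitrary; the real construction ties block length, number of levels, and seed length to the machine's space bound.

```python
import random

def nisan_generator(x, hashes, p):
    """Sketch of Nisan's recursive PRG for space-bounded computation.

    Seed: one block x in Z_p plus pairwise-independent hash functions
    h_i(y) = (a_i*y + b_i) mod p.  Each level doubles the output,
        G_i(x) = G_{i-1}(x) || G_{i-1}(h_i(x)),
    so len(hashes) hash functions stretch one block into
    2**len(hashes) blocks while the seed stays O(len(hashes)) blocks.
    """
    def expand(block, i):
        if i == 0:
            return [block]
        a, b = hashes[i - 1]        # h_i is applied only at level i
        return expand(block, i - 1) + expand((a * block + b) % p, i - 1)

    return expand(x, len(hashes))

# Toy usage: five hash functions expand one block into 2**5 = 32 blocks.
p = 2**61 - 1                       # prime field size (arbitrary toy choice)
rng = random.Random(0)              # stands in for sampling the short seed
x = rng.randrange(p)
hashes = [(rng.randrange(p), rng.randrange(p)) for _ in range(5)]
print(len(nisan_generator(x, hashes, p)))   # -> 32
```

The point of the recursion is that a space-bounded machine can only remember a little about the blocks it has already read, so reusing the same hash functions across both halves does not help it distinguish the output from random; this is the heart of Nisan's analysis.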

In complexity theory, derandomization illuminates the relationship between randomness and computation. It informs the quest to determine whether randomness provides real computational power or can be effectively simulated deterministically. The topic intersects with primality testing, circuit lower bounds, and cryptography, and it remains an active area of foundational research and model-specific investigations.