Rate-Distortion Theory

Rate-distortion theory is the branch of information theory that analyzes the trade-off between the bitrate required to compress a source and the fidelity of its reconstruction. It formalizes lossy compression by defining a distortion measure d(x, x̂), a source X with a probability law, and a reconstruction X̂ produced by an encoder-decoder pair. For block codes of length n, the average distortion is D = E[d(X^n, X̂^n)], and the rate R is the average number of bits per source symbol.
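
These quantities can be made concrete with a toy experiment. The sketch below (an illustrative setup, not a construction from the text) quantizes a Gaussian source with a uniform scalar quantizer, then measures the empirical squared-error distortion D and uses the entropy of the quantizer indices as a proxy for the rate R:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a standard Gaussian source quantized with a
# uniform scalar quantizer of step `delta` (an illustrative choice).
x = rng.standard_normal(100_000)
delta = 0.5
indices = np.round(x / delta)        # quantizer indices (what gets coded)
x_hat = delta * indices              # reconstruction X-hat

# Average distortion D = E[d(X, X-hat)] under squared error.
D = np.mean((x - x_hat) ** 2)

# Empirical rate: entropy (bits/symbol) of the index distribution,
# the bitrate an ideal entropy coder would approach.
_, counts = np.unique(indices, return_counts=True)
p = counts / counts.sum()
R = -np.sum(p * np.log2(p))

print(f"D = {D:.4f}, R = {R:.3f} bits/symbol")
```

Shrinking `delta` drives D toward zero while R grows, tracing out the operational trade-off that R(D) characterizes.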

The central object is the rate-distortion function R(D), defined as the minimum attainable rate for a given distortion level D. For memoryless sources, R(D) can be expressed by a single-letter formula: R(D) = min I(X; Y) over conditional distributions p(y|x) such that E[d(X, Y)] ≤ D, where Y is the reconstruction.

Shannon established the rate-distortion theorem: for any R > R(D) there exist codes of length n achieving distortion arbitrarily close to D, and no code with rate below R(D) can guarantee distortion below D as n grows. This provides operational meaning to R(D). In practice, distortion measures include squared error and Hamming distance; for Gaussian sources with squared error, the rate-distortion function admits a closed form via reverse water-filling.

Extensions include rate-distortion with side information at the decoder (Wyner–Ziv), and multi-terminal rate-distortion problems. Algorithmic approaches include scalar and vector quantization and lattice-based methods.

Applications span lossy data compression (audio, image, video), communications, and multimedia streaming, where one trades bitrate against perceptual or mathematical fidelity.
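
The single-letter formula above can also be evaluated numerically with the classical Blahut–Arimoto iteration (a standard algorithm for computing R(D) itself, distinct from the coding methods mentioned earlier). A minimal sketch for a fair binary source under Hamming distortion, checked against the known closed form R(D) = 1 − H_b(D) for this source (H_b is binary entropy):

```python
import numpy as np

def blahut_arimoto(p_x, d, s, n_iter=500):
    """One point on the R(D) curve via the Blahut-Arimoto iteration.

    p_x : source distribution over x
    d   : distortion matrix d[x, y]
    s   : slope parameter (> 0); larger s favors lower distortion
    Returns (D, R) with R in bits.
    """
    q_y = np.full(d.shape[1], 1.0 / d.shape[1])  # reconstruction marginal
    for _ in range(n_iter):
        # q(y|x) proportional to q(y) * exp(-s * d(x,y)), normalized over y
        q_y_given_x = q_y[None, :] * np.exp(-s * d)
        q_y_given_x /= q_y_given_x.sum(axis=1, keepdims=True)
        # update the marginal: q(y) = sum_x p(x) q(y|x)
        q_y = p_x @ q_y_given_x
    # D = E[d(X, Y)] and R = I(X; Y) at the converged distribution
    D = np.sum(p_x[:, None] * q_y_given_x * d)
    ratio = np.where(q_y_given_x > 0, q_y_given_x / q_y[None, :], 1.0)
    R = np.sum(p_x[:, None] * q_y_given_x * np.log2(ratio))
    return D, R

# Fair binary source, Hamming distortion (illustrative example).
p_x = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
D, R = blahut_arimoto(p_x, d, s=2.0)

def Hb(q):  # binary entropy in bits
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

# Closed form for this source: R(D) = 1 - H_b(D), 0 <= D <= 1/2.
print(f"D = {D:.4f}, R = {R:.4f}, closed form = {1 - Hb(D):.4f}")
```

Sweeping the slope parameter s traces out the whole R(D) curve; each s selects the point where the curve has slope −s (up to the log base used in the exponent).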