PowerLaws

Power laws describe relationships in which one quantity varies as a power of another. In its simplest form, a variable x follows a power law if its probability density function or frequency obeys P(x) ∝ x^-α for x ≥ x_min, with α > 1 (normalizing a density on [x_min, ∞) requires α > 1). This form implies scale invariance: multiplying x by a constant rescales P(x) by a predictable factor, leaving the functional form unchanged. Power-law distributions can be continuous (the Pareto distribution) or discrete (Zipf's law).
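A minimal sketch of the continuous form and its scale invariance, assuming α > 1 so the density normalizes to P(x) = ((α − 1)/x_min)(x/x_min)^-α; the function name is illustrative:

```python
def power_law_pdf(x, alpha=2.5, x_min=1.0):
    """Normalized continuous power-law density for x >= x_min (alpha > 1):
    P(x) = ((alpha - 1) / x_min) * (x / x_min) ** -alpha
    """
    if x < x_min:
        return 0.0
    return (alpha - 1) / x_min * (x / x_min) ** -alpha

# Scale invariance: rescaling x by a constant c multiplies P(x) by the
# x-independent factor c**-alpha, so P(c*x) / P(x) is the same for every x.
alpha, c = 2.5, 3.0
ratios = [power_law_pdf(c * x, alpha) / power_law_pdf(x, alpha)
          for x in (1.0, 5.0, 40.0)]
print(ratios)       # every ratio equals c**-alpha
print(c ** -alpha)
```

The constant ratio is exactly the "predictable factor" referred to above: no characteristic scale singles out any particular value of x.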

The exponent α governs tail heaviness: smaller α yields a fatter tail. For a continuous density, the mean is finite only if α > 2, and the variance is finite only if α > 3. The lower cutoff x_min is often required because real data do not extend to zero and because it stabilizes estimates. Power laws are typically identified over broad ranges of x, sometimes spanning several orders of magnitude.

Common examples include wealth distributions (Pareto), word frequencies (Zipf's law), city sizes, earthquake energies, and the degree distributions of many networks, which often show power-law tails over several orders of magnitude. These forms have been observed in diverse fields, and different mechanisms can produce similar tail behavior.

Identification and estimation involve fitting with maximum-likelihood methods that incorporate a lower bound x_min, followed by goodness-of-fit tests and comparisons with alternative heavy-tailed models. Log-log plots can be misleading with small samples or improper cutoffs, so rigorous statistical testing is recommended.

Origins and mechanisms include multiplicative growth processes, preferential attachment in networks, self-organized criticality, and scaling symmetries in complex systems. Not all heavy-tailed data are power laws; distinguishing them from lognormal or stretched exponential tails requires careful analysis.
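The moment conditions above (finite mean only for α > 2) can be illustrated by simulation. This is a sketch using inverse-transform sampling from the continuous form with x_min = 1; the helper name and parameter choices are illustrative:

```python
import random

def sample_power_law(n, alpha, x_min, rng):
    """Inverse-transform sampling from F(x) = 1 - (x / x_min)**-(alpha - 1),
    valid for alpha > 1. Uses 1 - u so the base lies in (0, 1]."""
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]

rng = random.Random(0)
means = {}
for alpha in (1.5, 3.5):
    for n in (1_000, 100_000):
        xs = sample_power_law(n, alpha, 1.0, rng)
        means[(alpha, n)] = sum(xs) / n
        print(alpha, n, means[(alpha, n)])

# alpha = 3.5 > 2: the sample mean settles near the theoretical value
# (alpha - 1) / (alpha - 2) * x_min ≈ 1.67.
# alpha = 1.5 <= 2: the mean is infinite, so the sample mean is dominated
# by the largest draws and typically keeps growing as n increases.
```

The contrast between the two exponents is the practical face of the α > 2 condition: below it, averages computed from data never stabilize.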
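The maximum-likelihood fitting mentioned above can be sketched with the standard continuous-case estimator α̂ = 1 + n / Σ ln(x_i / x_min), here applied to a synthetic sample with known exponent; function names are illustrative, and a real analysis would also estimate x_min and run goodness-of-fit tests:

```python
import math
import random

def sample_power_law(n, alpha, x_min, rng):
    """Inverse-transform sampling from F(x) = 1 - (x / x_min)**-(alpha - 1)."""
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]

def mle_alpha(xs, x_min):
    """Continuous maximum-likelihood estimate of the exponent:
    alpha_hat = 1 + n / sum(ln(x_i / x_min)), over observations >= x_min."""
    tail = [x for x in xs if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

rng = random.Random(42)
xs = sample_power_law(50_000, alpha=2.5, x_min=1.0, rng=rng)
print(mle_alpha(xs, x_min=1.0))  # close to the true alpha = 2.5
```

The estimator's standard error shrinks like (α − 1)/√n, which is why large samples and a well-chosen x_min matter; fitting a straight line to a log-log histogram instead tends to bias the exponent.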