Power law

Power-law distributions describe quantities where large events are rare but not negligible. In a power-law distribution, the probability that a random variable X takes a value near x (for x above a lower bound xmin) is proportional to x^-α, with exponent α > 1. The probability density is p(x) = (α-1) xmin^(α-1) x^-α for x ≥ xmin (continuous case); the discrete form uses a similar normalization over integers x ≥ xmin. The tail is heavy and scale-invariant: if x is rescaled by a constant, probabilities scale by a power of that constant. The complementary cumulative distribution is P(X ≥ x) = (x/xmin)^(1-α) for x ≥ xmin, so it decays as x^(1-α) for large x.
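The density above can be sampled directly by inverse-transform sampling: the CDF F(x) = 1 - (x/xmin)^(1-α) inverts to x = xmin (1-u)^(-1/(α-1)) for uniform u. A minimal sketch (function name is illustrative, not from any particular library):

```python
import random

def sample_power_law(alpha, xmin, n, seed=0):
    """Draw n samples from the continuous power law p(x) ∝ x^-alpha, x >= xmin,
    via inverse-transform sampling: x = xmin * (1 - u)^(-1/(alpha - 1))."""
    rng = random.Random(seed)
    return [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

samples = sample_power_law(alpha=2.5, xmin=1.0, n=5)
print(all(x >= 1.0 for x in samples))  # prints True: every sample lies above xmin
```

Because (1 - u) lies in (0, 1] and the exponent is negative, every draw is at least xmin, matching the support of the distribution.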

Common in empirical data are quantities such as city sizes, word frequencies (Zipf’s law), wealth distributions, solar flares, and network degree distributions (often described as scale-free). For the moments to be finite, α must exceed certain values: the mean is finite if α > 2, and the variance is finite if α > 3.

Fitting and testing: estimate α and xmin from data, typically by maximum likelihood estimation, with xmin chosen by minimizing the Kolmogorov–Smirnov (KS) distance between the data and the fitted model. For continuous data, α_hat = 1 + n [sum_{i: x_i ≥ xmin} ln(x_i / xmin)]^{-1}, where n counts the points with x_i ≥ xmin. Discrete data require adapted estimators. Goodness of fit can be evaluated with KS tests; model comparison uses likelihood ratio tests against alternative heavy-tailed distributions (e.g., log-normal, exponential).

Limitations: finite sample size and measurement limits can bias results, and many datasets that appear to follow a power law may be better described by other heavy-tailed distributions. Robust claims require rigorous testing and transparent data.

See also: Zipf’s law, Pareto distribution, Clauset–Shalizi–Newman method, scale invariance.
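The continuous maximum-likelihood estimator described in the fitting paragraph can be sketched as follows (a minimal illustration for a fixed xmin, not a full Clauset–Shalizi–Newman pipeline; the function name and sample data are made up for the example):

```python
import math

def fit_alpha(data, xmin):
    """Continuous MLE for the power-law exponent:
    alpha_hat = 1 + n / sum(ln(x_i / xmin)), summed over the tail x_i >= xmin."""
    tail = [x for x in data if x >= xmin]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / xmin) for x in tail)

# Toy data; the point below xmin = 1.0 is excluded from the fit.
data = [1.2, 1.5, 2.0, 3.7, 5.1, 9.8, 0.4]
print(round(fit_alpha(data, xmin=1.0), 3))
```

A complete analysis would repeat this fit over candidate xmin values, keep the one minimizing the KS distance, and then run the goodness-of-fit and likelihood-ratio comparisons described above.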