Power law
Power-law distributions describe quantities where large events are rare but not negligible. The probability of observing a value near x (for x above a lower bound x_min) is proportional to x^(-α), with exponent α > 1. In the continuous case the probability density is p(x) = (α - 1) x_min^(α-1) x^(-α) for x ≥ x_min; the discrete form uses an analogous normalization over integers x ≥ x_min (the Hurwitz zeta function). The tail is heavy and scale-invariant: rescaling x by a constant multiplies probabilities by a fixed power of that constant. The complementary cumulative distribution P(X ≥ x) falls off as (x/x_min)^(1-α).
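Because the complementary cumulative distribution inverts in closed form, the continuous power law can be sampled by inverse-transform sampling: x = x_min (1 - u)^(-1/(α-1)) for u uniform on [0, 1). A minimal numpy sketch (the parameter values and function name are arbitrary choices for illustration):

```python
import numpy as np

def sample_power_law(n, alpha, xmin, rng=None):
    """Draw n samples from p(x) = (alpha-1) * xmin**(alpha-1) * x**(-alpha), x >= xmin."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.uniform(size=n)                               # U ~ Uniform[0, 1)
    return xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))     # invert the CCDF (x/xmin)^(1-alpha)

# Quick check: the empirical CCDF should fall off as roughly x^(1 - alpha).
samples = sample_power_law(100_000, alpha=2.5, xmin=1.0)
x = np.sort(samples)
ccdf = np.arange(len(x), 0, -1) / len(x)   # empirical P(X >= x)
# On log-log axes, log(ccdf) vs. log(x) is approximately a line of slope 1 - alpha.
```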
Quantities commonly reported to follow power laws in empirical data include city sizes, word frequencies (Zipf's law), wealth distributions, and earthquake magnitudes.
Fitting and testing: estimate α and x_min from data, typically by maximum likelihood estimation with x_min chosen to minimize the Kolmogorov–Smirnov (KS) distance between the empirical data and the fitted model (the Clauset–Shalizi–Newman approach); goodness of fit can then be assessed by comparing that KS distance with distances obtained from synthetic datasets drawn from the fitted power law. A sketch of the estimation step follows.
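For the continuous case the maximum-likelihood estimate given x_min is α̂ = 1 + n / Σ ln(x_i / x_min), taken over the n points with x_i ≥ x_min. The sketch below implements that estimator and a simple scan over candidate x_min values; the function names and the cutoff of 10 tail points are illustrative choices, not a library API.

```python
import numpy as np

def fit_alpha(data, xmin):
    """Continuous MLE: alpha_hat = 1 + n / sum(ln(x_i / xmin)) over x_i >= xmin."""
    tail = data[data >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

def ks_distance(data, xmin, alpha):
    """Max gap between the empirical CDF of the tail and the fitted power-law CDF."""
    tail = np.sort(data[data >= xmin])
    emp = np.arange(1, len(tail) + 1) / len(tail)
    fit = 1.0 - (tail / xmin) ** (1.0 - alpha)
    return np.max(np.abs(emp - fit))

def fit_power_law(data):
    """Scan candidate xmin values and keep the one minimizing the KS distance."""
    best = (np.inf, None, None)           # (ks, xmin, alpha)
    for xmin in np.unique(data):
        tail = data[data >= xmin]
        if len(tail) < 10:                # too few tail points to fit reliably
            break
        alpha = fit_alpha(data, xmin)
        ks = ks_distance(data, xmin, alpha)
        if ks < best[0]:
            best = (ks, xmin, alpha)
    return best                           # e.g. ks, xmin_hat, alpha_hat = fit_power_law(samples)
```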
Limitations: finite sample size and measurement limits can bias results; many datasets that appear to follow a power law over a limited range are described as well or better by alternatives such as the lognormal or stretched-exponential distribution, so candidate models should be compared directly (for example with likelihood-ratio tests) rather than inferred from a straight line on a log-log plot.
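As a sketch of such a comparison, the snippet below computes a pointwise log-likelihood ratio of the fitted power law against a shifted exponential on the same tail, with a Vuong-style normalized statistic; the exponential is used here only as a simple alternative (a lognormal would be another common choice), and the function name is illustrative.

```python
import numpy as np

def loglik_ratio_vs_exponential(data, xmin, alpha):
    """Log-likelihood ratio of the fitted power law vs. an exponential tail on x >= xmin."""
    tail = data[data >= xmin]
    lam = 1.0 / np.mean(tail - xmin)                   # MLE rate for the shifted exponential
    ll_pl = np.log(alpha - 1) - np.log(xmin) - alpha * np.log(tail / xmin)
    ll_exp = np.log(lam) - lam * (tail - xmin)
    diff = ll_pl - ll_exp
    r = diff.sum()                                     # r > 0 favors the power law
    z = r / (np.sqrt(len(diff)) * diff.std())          # Vuong-style normalized statistic
    return r, z                                        # |z| near 0 means the sign of r is not decisive
```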
See also: Zipf’s law, Pareto distribution, Clauset–Shalizi–Newman method, scale invariance.