Heteroskedasticity

Heteroskedasticity is a property of the error terms in a regression model in which the variance of the errors is not constant across observations. In ordinary least squares (OLS) regression it violates the assumption of constant variance (homoskedasticity) and makes the usual standard errors unreliable, so t statistics and confidence intervals may be invalid. Under the standard exogeneity assumptions the OLS coefficient estimator remains unbiased and consistent, but it is no longer efficient when heteroskedasticity is present.
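
These properties can be illustrated with a minimal simulation sketch; the setup below is an assumption chosen for illustration (the error standard deviation is set equal to the regressor). Across replications the OLS slope averages out to the truth, while the classical standard error formula misstates the actual sampling variability.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, true_beta = 200, 2000, 2.0
x = rng.uniform(1.0, 10.0, n)
X = np.column_stack([np.ones(n), x])

slopes, naive_ses = [], []
for _ in range(reps):
    # Heteroskedastic by construction: the sd of eps_i equals x_i.
    eps = rng.normal(0.0, x)
    y = 1.0 + true_beta * x + eps
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y                  # OLS via the normal equations
    resid = y - X @ b
    s2 = resid @ resid / (n - 2)           # classical homoskedastic variance
    naive_ses.append(np.sqrt(s2 * XtX_inv[1, 1]))
    slopes.append(b[1])

print("mean slope:", np.mean(slopes))        # close to 2.0: unbiased
print("empirical sd:", np.std(slopes))       # actual sampling variability
print("mean naive SE:", np.mean(naive_ses))  # misstates the sd above
```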

Causes and forms of heteroskedasticity include model misspecification, omitted variables, or data that exhibit changing dispersion across levels of the independent variable or groups of observations. Common patterns include increasing or decreasing spread of the residuals with fitted values, or different variances across subpopulations. Time series data can show conditional heteroskedasticity, where volatility changes over time.
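
The funnel pattern of residual spread growing with the fitted values can be made concrete with a short sketch; the proportional-variance data-generating process here is an assumed example, not a canonical one. It fits OLS and reports the residual standard deviation within quartiles of the fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x = rng.uniform(1.0, 10.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 * x)   # error sd rises with x

X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ b
resid = y - fitted

# Residual sd within quartiles of the fitted values: it should increase.
edges = np.quantile(fitted, [0.0, 0.25, 0.5, 0.75, 1.0])
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (fitted >= lo) & (fitted <= hi)
    print(f"fitted in [{lo:5.1f}, {hi:5.1f}]: residual sd = {resid[mask].std():.2f}")
```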

Detection methods range from visual inspection of residual plots to formal tests. The Breusch-Pagan test and the White test assess whether the variance of the residuals is linked to the regressors or to missing nonlinearities. The Goldfeld-Quandt test evaluates variance changes across ordered samples. In time series, tests related to autoregressive conditional heteroskedasticity (ARCH) are used to detect changing variance driven by past errors.
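
As a sketch, the first two tests are available in statsmodels (statsmodels.stats.diagnostic); the simulated data below, with variance rising in the regressor, is an assumed example. Small p-values indicate evidence of heteroskedasticity.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(1.0, 10.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 * x)   # variance rises with x

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

# Breusch-Pagan: regresses the squared residuals on the regressors.
lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(fit.resid, X)
print(f"Breusch-Pagan LM p-value: {lm_pval:.4f}")

# White: also includes squares and cross-products, catching nonlinear forms.
w_stat, w_pval, wf_stat, wf_pval = het_white(fit.resid, X)
print(f"White test p-value: {w_pval:.4f}")
```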

Remedies aim to restore valid inference. Robust standard errors, also known as heteroskedasticity-consistent covariance estimators (HC0–HC3, White), yield valid inference without changing the coefficient estimates. Transformations of the dependent variable (e.g., logarithmic, Box-Cox) or weighted least squares (when the form of the heteroskedasticity is known) are alternative approaches. In practice, addressing heteroskedasticity often involves refining the model or specifying a more flexible error structure so that homoskedasticity holds.
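
A brief sketch of two of these remedies in statsmodels, assuming for illustration that the error variance is proportional to the square of the regressor (so the WLS weights are its inverse):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
x = rng.uniform(1.0, 10.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 * x)

X = sm.add_constant(x)

# Robust (HC3) standard errors: coefficients unchanged, covariance re-estimated.
ols_fit = sm.OLS(y, X).fit(cov_type="HC3")
print("OLS coefficients:", ols_fit.params)
print("HC3 standard errors:", ols_fit.bse)

# WLS with weights 1/x**2, the inverse of the assumed error variance.
wls_fit = sm.WLS(y, X, weights=1.0 / x**2).fit()
print("WLS coefficients:", wls_fit.params)
```

When the assumed variance form is correct, WLS is more efficient than OLS; when it is unknown, the robust covariance route leaves the fit alone and only repairs the standard errors.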
