
Heteroskedasticity-robust

Heteroskedasticity-robust refers to statistical methods that remain valid when the assumption of constant variance (homoskedasticity) of the error terms in a regression model is violated. Classical ordinary least squares (OLS) coefficient estimates remain unbiased under heteroskedasticity, but the usual estimator of their covariance matrix is inconsistent, so the resulting t-statistics, hypothesis tests, and confidence intervals are unreliable. Heteroskedasticity-robust techniques adjust the estimated covariance matrix of the parameter vector to account for non-constant variance, allowing correct inference while retaining the OLS coefficient estimates.
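
In the linear model this adjustment takes a concrete form. For y = Xβ + u estimated by OLS with residuals û_i and regressor rows x_i, the basic heteroskedasticity-consistent covariance estimator can be sketched as (standard textbook form; the notation is introduced here only for illustration):

    \widehat{\operatorname{Var}}(\hat{\beta}) = (X^\top X)^{-1} \Big( \sum_{i=1}^{n} \hat{u}_i^{2}\, x_i x_i^{\top} \Big) (X^\top X)^{-1}

The outer matrices match the classical formula; the middle term lets each observation contribute its own squared residual, which is what keeps the estimate valid when the error variance is not constant.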

The most common implementation is the “sandwich” estimator introduced by White. It weights the contribution of each observation by its squared residual, producing a consistent estimate of the asymptotic covariance matrix. In linear regression, this yields the robust standard errors often reported as the “White” or “HC0” errors. Variants such as HC1, HC2, HC3, and HC4 apply small-sample corrections to improve performance when the sample size is modest or the design matrix is poorly conditioned. In many statistical software packages robust errors are available via built-in commands or flags, for example in R (sandwich, lmrob), Stata (robust), and Python (statsmodels).
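
As a brief, illustrative sketch of how such an option is used in practice (the example below uses statsmodels with simulated data; the variable names and the choice of the HC3 variant are assumptions of this example, not from the text):

    import numpy as np
    import statsmodels.api as sm

    # Simulate data whose error variance grows with the regressor (heteroskedasticity).
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=500)
    y = 1.0 + 2.0 * x + rng.normal(scale=0.5 * x)

    X = sm.add_constant(x)
    classical = sm.OLS(y, X).fit()              # conventional standard errors
    robust = sm.OLS(y, X).fit(cov_type="HC3")   # heteroskedasticity-robust (HC3) errors

    # The coefficient estimates are identical; only the standard errors differ.
    print(classical.params, classical.bse)
    print(robust.params, robust.bse)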

Beyond linear regression, heteroskedasticity-robust methods extend to generalized linear models, time-series models, and panel data. In GARCH or autoregressive conditional heteroskedasticity models, the heteroskedasticity is modeled explicitly, but robust inference can still be applied to parameters not captured by the volatility specification. In panel data, cluster-robust standard errors accommodate heteroskedasticity within clusters while allowing for arbitrary within-cluster correlation.
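
A minimal sketch of cluster-robust errors along these lines, again using statsmodels with simulated data (the group structure and variable names are illustrative assumptions):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy panel: 50 groups with 10 observations each and a shared group-level shock.
    rng = np.random.default_rng(1)
    group = np.repeat(np.arange(50), 10)
    x = rng.normal(size=500)
    y = 1.0 + 0.5 * x + rng.normal(size=50)[group] + rng.normal(size=500)

    df = pd.DataFrame({"y": y, "x": x, "group": group})
    res = smf.ols("y ~ x", data=df).fit(cov_type="cluster",
                                        cov_kwds={"groups": df["group"]})

    # Standard errors robust to heteroskedasticity and arbitrary within-group correlation.
    print(res.bse)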

Researchers routinely report robust standard errors when heteroskedasticity is suspected or detected via diagnostic tests such as the Breusch-Pagan test, White's test, or the Goldfeld-Quandt test. While robust errors mitigate certain inference problems, they do not correct for omitted variables or endogeneity. Therefore, robust inference is often accompanied by other methodological safeguards, such as instrumental variables or fixed-effects controls, to address broader econometric concerns.
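
For illustration, a sketch of one such diagnostic, the Breusch-Pagan test, as exposed in statsmodels (the simulated data and variable names are assumptions of this example):

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_breuschpagan

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 10, size=300)
    y = 1.0 + 2.0 * x + rng.normal(scale=0.5 * x)   # error variance rises with x

    X = sm.add_constant(x)
    resid = sm.OLS(y, X).fit().resid

    # The Breusch-Pagan test regresses squared residuals on the regressors;
    # a small p-value indicates evidence of heteroskedasticity.
    lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, X)
    print(lm_pvalue)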