GoldfeldQuandt

GoldfeldQuandt refers to the Goldfeld-Quandt test, a statistical method used to detect heteroskedasticity in linear regression models. Named after its developers, Stephen Goldfeld and Richard Quandt, the test is particularly suited to situations in which the variance of the error term is believed to change systematically with the ordering of observations, for example increasing with the level of an explanatory variable.

In practice, observations are first ordered by the presumed ordering variable. A central block of g observations is then omitted to avoid the region where the variance is most likely to change, and the remaining observations are split into two end groups. A separate regression is fitted to each group, and the variances of the residuals in the two groups, s1^2 and s2^2, are computed. The test statistic is the ratio of these variances (typically the larger variance divided by the smaller one), which under the null hypothesis of homoskedasticity follows an F distribution with degrees of freedom equal to each group's number of observations minus the number of estimated regression parameters. If the observed ratio exceeds the critical value at a chosen significance level, the null hypothesis of constant error variance is rejected, suggesting heteroskedasticity.
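
To make the procedure concrete, here is a minimal from-scratch sketch in Python using NumPy and SciPy. The function name goldfeld_quandt, the sort_col and drop_frac arguments, and the choice to sort by a single column of the design matrix are assumptions made for this illustration, not part of any standard library's API.

import numpy as np
from scipy import stats

def goldfeld_quandt(y, X, sort_col=0, drop_frac=0.2):
    # Order observations by the presumed ordering variable (hypothetical choice:
    # a single column of X given by sort_col).
    order = np.argsort(X[:, sort_col])
    Xs, ys = X[order], y[order]

    # Omit a central block of g observations and form the two end groups.
    n = len(ys)
    g = int(n * drop_frac)
    n1 = (n - g) // 2
    groups = [(Xs[:n1], ys[:n1]), (Xs[-n1:], ys[-n1:])]

    # Fit a separate OLS regression (with intercept) to each group and compute
    # the residual variance s^2 = RSS / (n_group - number of parameters).
    variances, dofs = [], []
    for Xg, yg in groups:
        A = np.column_stack([np.ones(len(Xg)), Xg])
        beta, *_ = np.linalg.lstsq(A, yg, rcond=None)
        resid = yg - A @ beta
        df = len(yg) - A.shape[1]
        variances.append(resid @ resid / df)
        dofs.append(df)

    # Ratio of the larger residual variance to the smaller one, referred to an
    # F distribution with the corresponding degrees of freedom.
    (s_small, df_small), (s_large, df_large) = sorted(zip(variances, dofs))
    f_stat = s_large / s_small
    p_value = stats.f.sf(f_stat, df_large, df_small)
    return f_stat, p_value

The p-value returned here is one-sided, matching a comparison of the larger-over-smaller ratio against an upper critical value; for a two-sided alternative it would be doubled.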

Variants of the test differ in how the central block size g is chosen and how the two end groups are formed.
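
In practice, these choices are usually exposed as options in statistical software. The sketch below assumes the het_goldfeldquandt function in statsmodels.stats.diagnostic, where (in recent versions) a drop argument controls the omitted central fraction and an idx argument selects the sorting column; the simulated data and parameter values are illustrative only, and the exact argument names should be checked against the installed version's documentation.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_goldfeldquandt

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3 * x)   # error spread grows with x
X = sm.add_constant(x)

# Sort by the regressor (column 1 of X) and drop the middle 20% of observations.
f_stat, p_value, alternative = het_goldfeldquandt(y, X, idx=1, drop=0.2)
print(f_stat, p_value, alternative)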

The Goldfeld-Quandt test is one of several econometric tools for assessing heteroskedasticity, alongside tests such as Breusch-Pagan and White. The method is especially common in cross-sectional and time-series regression settings where variance may systematically shift with an ordering variable.