F-tests

F-tests are statistical procedures that use the F-distribution to assess hypotheses about population variances or linear models. They are most commonly employed in analysis of variance (ANOVA) and in linear regression to test whether a set of model terms contributes significantly to explaining variation in the data.

The F-statistic is a ratio of two mean squares. In ANOVA, F = MS_between / MS_within, where MS_between reflects variation explained by the group means and MS_within reflects unexplained variation. In regression, F = MS_regression / MS_residual, where MS_regression measures the variance explained by the predictors and MS_residual is the unexplained variance. Under the null hypothesis and standard assumptions, the statistic follows an F-distribution with df1 and df2 equal to the numerator and denominator degrees of freedom.

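As an illustration of the ANOVA form, the following Python sketch (using numpy and scipy, with invented example data for three groups) computes MS_between, MS_within, F, and the p-value by hand; scipy.stats.f_oneway applied to the same groups should agree.

    import numpy as np
    from scipy import stats

    # Hypothetical data: three groups of five observations (values invented for illustration).
    groups = [
        np.array([4.2, 5.1, 3.8, 4.9, 5.3]),
        np.array([6.0, 5.7, 6.4, 5.9, 6.1]),
        np.array([4.8, 5.0, 5.6, 4.7, 5.2]),
    ]

    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total sample size
    grand_mean = np.concatenate(groups).mean()

    # Between-group sum of squares: variation of group means around the grand mean.
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: variation of observations around their own group mean.
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

    ms_between = ss_between / (k - 1)    # df1 = k - 1
    ms_within = ss_within / (n - k)      # df2 = n - k
    F = ms_between / ms_within
    p_value = stats.f.sf(F, k - 1, n - k)   # upper-tail probability of the F-distribution

    print(f"F = {F:.3f}, p = {p_value:.4f}")
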
Hypotheses: In regression, H0: a specified set of regression coefficients is zero; in ANOVA, H0: all group means are equal.

Calculation and interpretation: Given data, compute the relevant sums of squares, derive the mean squares, form F, and obtain a p-value from the F-distribution. A small p-value leads to rejection of H0, implying that the model or the group means explain a significant portion of the variation. In experimental design, they test main effects and interactions.

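For the regression form, a minimal sketch along the same lines (assuming simulated data and an ordinary least-squares fit via numpy) computes MS_regression and MS_residual and the overall F-test for the fitted predictors.

    import numpy as np
    from scipy import stats

    # Hypothetical data: n observations and p predictors (simulated for illustration).
    rng = np.random.default_rng(0)
    n, p = 50, 2
    X = rng.normal(size=(n, p))
    y = 1.0 + 2.0 * X[:, 0] + rng.normal(size=n)

    # Ordinary least squares with an intercept column.
    X_design = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)
    y_hat = X_design @ beta

    ss_regression = ((y_hat - y.mean()) ** 2).sum()   # explained sum of squares
    ss_residual = ((y - y_hat) ** 2).sum()            # unexplained sum of squares

    ms_regression = ss_regression / p                 # df1 = p predictors
    ms_residual = ss_residual / (n - p - 1)           # df2 = n - p - 1 (intercept counted)
    F = ms_regression / ms_residual
    p_value = stats.f.sf(F, p, n - p - 1)

    print(f"F = {F:.2f}, p = {p_value:.2e}")
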
For model selection, F-tests compare nested models. With RSS denoting the residual sum of squares, F = ((RSS_reduced - RSS_full) / df_diff) / (RSS_full / df_full), where df_diff is the number of extra parameters in the full model and df_full is the residual degrees of freedom of the full model.

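A sketch of the nested-model comparison, again with simulated data: the reduced model omits one predictor, and the F-statistic measures whether adding it reduces the residual sum of squares by more than chance would suggest.

    import numpy as np
    from scipy import stats

    # Hypothetical nested-model comparison (data and predictors invented for illustration).
    rng = np.random.default_rng(1)
    n = 60
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    y = 0.5 + 1.5 * x1 + rng.normal(size=n)   # x2 has no true effect

    def rss(X, y):
        # Residual sum of squares from an ordinary least-squares fit.
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return ((y - X @ beta) ** 2).sum()

    X_reduced = np.column_stack([np.ones(n), x1])       # reduced model: intercept + x1
    X_full = np.column_stack([np.ones(n), x1, x2])      # full model: adds x2

    rss_reduced = rss(X_reduced, y)
    rss_full = rss(X_full, y)

    df_diff = X_full.shape[1] - X_reduced.shape[1]      # extra parameters in the full model
    df_full = n - X_full.shape[1]                       # residual df of the full model

    F = ((rss_reduced - rss_full) / df_diff) / (rss_full / df_full)
    p_value = stats.f.sf(F, df_diff, df_full)
    print(f"F = {F:.3f}, p = {p_value:.3f}")
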
Assumptions and limitations: The test requires independent observations, normally distributed residuals, and equal variances across groups. Violations can undermine the test; alternatives include Welch's test or nonparametric methods.

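When the equal-variance or normality assumptions are in doubt, scipy offers some of these alternatives directly; the sketch below (with invented data) applies Welch's t-test for two groups and the nonparametric Kruskal-Wallis test for several groups.

    import numpy as np
    from scipy import stats

    # Hypothetical groups with unequal spread (values invented for illustration).
    a = np.array([4.1, 4.8, 5.0, 4.3, 4.7])
    b = np.array([6.5, 3.9, 7.2, 5.1, 8.0])
    c = np.array([5.2, 5.5, 4.9, 5.8, 5.1])

    # Welch's t-test: two groups, no equal-variance assumption.
    t_stat, p_welch = stats.ttest_ind(a, b, equal_var=False)

    # Kruskal-Wallis test: nonparametric comparison of several groups.
    h_stat, p_kw = stats.kruskal(a, b, c)

    print(f"Welch t-test: p = {p_welch:.3f}; Kruskal-Wallis: p = {p_kw:.3f}")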