
Large-sample theory

Large-sample theory, in statistics, concerns the behavior of estimators and test statistics as the sample size n grows without bound. It provides asymptotic approximations that justify inference when exact finite-sample results are unavailable or intractable.

Two foundational results are central to large-sample reasoning: the law of large numbers, which states that sample means converge to the population mean as n tends to infinity, and the central limit theorem, which implies that suitably normalized sums converge in distribution to a normal distribution. These results underpin approximate confidence intervals and hypothesis tests for large samples.
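As a brief illustration, the following sketch (Python with NumPy; the exponential distribution and the sample sizes are chosen purely for the example) simulates both results: sample means settle near the population mean, and the standardized mean is approximately standard normal.

    import numpy as np

    rng = np.random.default_rng(0)
    mu = 1.0  # population mean (and standard deviation) of Exponential(1)

    # Law of large numbers: the sample mean approaches mu as n grows.
    for n in (10, 1_000, 100_000):
        print(n, rng.exponential(scale=mu, size=n).mean())

    # Central limit theorem: sqrt(n) * (x_bar - mu) / sigma is roughly N(0, 1).
    n, reps = 1_000, 10_000
    means = rng.exponential(scale=mu, size=(reps, n)).mean(axis=1)
    z = np.sqrt(n) * (means - mu) / mu  # sigma equals mu for the exponential
    print(z.mean(), z.std())  # close to 0 and 1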

Under regularity conditions, estimators such as maximum likelihood estimators are consistent and asymptotically normal, with asymptotic variance given by the inverse of the Fisher information. The delta method extends these results to smooth functions of the parameters. Hypothesis tests and confidence intervals often rely on Wald, score, or likelihood ratio statistics, which have approximate chi-square or normal distributions in large samples.
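In standard notation (a conventional statement added here for reference, not drawn from the original text), the asymptotic normality of the MLE and the delta method read

    \sqrt{n}\,(\hat{\theta}_n - \theta_0) \xrightarrow{d} N\bigl(0,\ I(\theta_0)^{-1}\bigr),
    \qquad
    \sqrt{n}\,\bigl(g(\hat{\theta}_n) - g(\theta_0)\bigr) \xrightarrow{d} N\bigl(0,\ g'(\theta_0)^2\, I(\theta_0)^{-1}\bigr),

where I(\theta_0) is the Fisher information per observation and g is differentiable at \theta_0 with g'(\theta_0) \neq 0. As a concrete sketch of the Wald construction (hypothetical counts, with a Bernoulli model assumed for simplicity), the estimated inverse Fisher information is plugged into the normal approximation:

    import math

    # Hypothetical data: 63 successes in n = 100 Bernoulli(p) trials.
    successes, n = 63, 100
    p_hat = successes / n                  # the MLE of p
    # Fisher information per observation: I(p) = 1 / (p * (1 - p)), so the
    # estimated asymptotic variance of p_hat is p_hat * (1 - p_hat) / n.
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    z = 1.96                               # standard normal 0.975 quantile
    print(f"Wald 95% CI for p: ({p_hat - z * se:.3f}, {p_hat + z * se:.3f})")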

Practical use of large-sample theory includes guiding study design and interpreting results when data sets are large. It contrasts with finite-sample methods that provide exact results for small samples. When regularity fails or sample sizes are moderate, resampling and simulation methods, such as the bootstrap, offer alternatives to relying on asymptotic approximations.
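A minimal percentile-bootstrap sketch follows (the data-generating distribution, sample size, and number of resamples are illustrative assumptions, not prescriptions):

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.exponential(scale=1.0, size=40)   # a hypothetical moderate sample

    # Percentile bootstrap: resample with replacement, recompute the statistic,
    # and read off empirical quantiles instead of using a normal approximation.
    boot = np.array([
        rng.choice(data, size=data.size, replace=True).mean()
        for _ in range(5_000)
    ])
    lo, hi = np.quantile(boot, [0.025, 0.975])
    print(f"95% percentile-bootstrap CI for the mean: ({lo:.3f}, {hi:.3f})")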

See also: asymptotic theory, law of large numbers, central limit theorem, maximum likelihood estimation, delta method, bootstrap.
