
Autocovariances

Autocovariances quantify how a time series covaries with itself at different time lags. For a stochastic process {X_t} with mean mu, the autocovariance at lag h is defined as gamma(h) = Cov(X_t, X_{t+h}) = E[(X_t - mu)(X_{t+h} - mu)].
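
To make the definition concrete, the following minimal sketch (NumPy, the Gaussian white-noise stand-in process, and the parameter values are illustrative assumptions, not taken from the text) approximates gamma(h) by averaging (X_t - mu)(X_{t+h} - mu) over many independent realizations of the process:

import numpy as np

rng = np.random.default_rng(0)

n_reps = 100_000          # number of independent realizations (illustrative)
t, h = 10, 3              # a fixed time index and lag (illustrative)
mu, sigma = 0.0, 2.0      # mean and standard deviation of the stand-in process

# Many independent realizations of Gaussian white noise, each long enough to
# contain both X_t and X_{t+h}.
X = rng.normal(loc=mu, scale=sigma, size=(n_reps, t + h + 1))

# Monte Carlo approximation of gamma(h) = E[(X_t - mu)(X_{t+h} - mu)].
gamma_h = np.mean((X[:, t] - mu) * (X[:, t + h] - mu))
gamma_0 = np.mean((X[:, t] - mu) ** 2)

print(gamma_h)  # near 0: white noise is uncorrelated across lags
print(gamma_0)  # near sigma**2 = 4.0, i.e. gamma(0) = Var(X_t)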

If the process is weakly stationary, gamma(h) does not depend on the time t and satisfies gamma(-h) = gamma(h) for all h, with gamma(0) = Var(X_t). The function gamma(h) captures the dependence structure of the series over time. The autocorrelation function, which standardizes autocovariances, is rho(h) = gamma(h) / gamma(0), provided gamma(0) > 0.

Estimating autocovariances from data typically uses a sample version. Given a sample x_1, x_2, ..., x_n with sample mean x_bar, the sample autocovariance at lag h is often written as gamma_hat(h) = (1/(n - h)) sum_{t=1}^{n-h} (x_t - x_bar)(x_{t+h} - x_bar) (conventions vary, with some using n in the denominator). The sample autocovariance gives an empirical view of how observations separated by h steps co-vary.
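
A rough implementation sketch of this estimator (the function names are hypothetical and NumPy is assumed); both denominator conventions are exposed, and the second function standardizes to the sample autocorrelation rho_hat(h) = gamma_hat(h) / gamma_hat(0) described above:

import numpy as np

def sample_autocovariance(x, h, denominator="n-h"):
    """Sample autocovariance at lag h >= 0 for a one-dimensional array x.

    denominator="n-h" matches the formula above; denominator="n" is the
    other common convention mentioned in the text.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    x_bar = x.mean()
    s = np.sum((x[: n - h] - x_bar) * (x[h:] - x_bar))
    return s / (n - h) if denominator == "n-h" else s / n

def sample_autocorrelation(x, h):
    # rho_hat(h) = gamma_hat(h) / gamma_hat(0), provided gamma_hat(0) > 0.
    return sample_autocovariance(x, h) / sample_autocovariance(x, 0)

# Example usage on a short simulated series:
rng = np.random.default_rng(1)
x = rng.normal(size=500)
print([round(sample_autocovariance(x, h), 3) for h in range(4)])
print([round(sample_autocorrelation(x, h), 3) for h in range(4)])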

Special cases and properties illustrate their use. For white noise with variance sigma^2, gamma(h) = 0 for h ≠ 0. For an AR(1) process X_t = phi X_{t-1} + e_t with |phi| < 1, gamma(h) = phi^{|h|} gamma(0). Autocovariances form the basis of many analyses, including model identification, forecasting, and spectral density estimation, where the spectral density is the Fourier transform of gamma(h).
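
A minimal simulation sketch checking the AR(1) relation against the sample autocovariance (the value of phi, the noise variance, and the closed form gamma(0) = sigma_e^2 / (1 - phi^2), a standard consequence of the AR(1) recursion, are assumptions made for this illustration):

import numpy as np

rng = np.random.default_rng(2)

phi, sigma_e, n = 0.7, 1.0, 200_000   # illustrative parameter choices

# Simulate a long AR(1) path, starting from the stationary distribution.
x = np.empty(n)
x[0] = rng.normal(scale=sigma_e / np.sqrt(1 - phi**2))
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(scale=sigma_e)

def gamma_hat(x, h):
    # Sample autocovariance with the 1/n convention.
    x_bar = x.mean()
    return np.sum((x[: len(x) - h] - x_bar) * (x[h:] - x_bar)) / len(x)

gamma0_theory = sigma_e**2 / (1 - phi**2)   # stationary variance of the AR(1)
for h in range(5):
    print(h, round(gamma_hat(x, h), 3), round(phi**h * gamma0_theory, 3))
# The two columns agree closely, consistent with gamma(h) = phi^{|h|} gamma(0).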
