Scalespace

Scalespace, or scale-space, is a framework for representing signals and images across a continuum of spatial or temporal scales. In image processing, it yields a family of representations {L(x, y; t)} obtained by smoothing the input image with Gaussian kernels at increasing scales. This produces progressively coarser versions of the image that preserve essential structures while removing finer details.
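The family of smoothed images described above can be built directly by Gaussian filtering at increasing scales. The following is a minimal sketch, assuming NumPy and SciPy are available; the random array stands in for a real input image, and the scale values are arbitrary illustrations:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
f = rng.random((64, 64))            # stand-in for an input image f(x, y)

# The scale parameter t is the variance of the Gaussian kernel,
# so the filter's standard deviation is sqrt(t).
scales = [1.0, 2.0, 4.0, 8.0]
scale_space = {t: gaussian_filter(f, sigma=np.sqrt(t)) for t in scales}

# Coarser scales are smoother: the spread of pixel values shrinks
# as fine detail is averaged away.
variances = [scale_space[t].var() for t in scales]
```

Each entry of `scale_space` is one member L(x, y; t) of the family, with larger t giving a progressively coarser version of the image.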

Mathematically, the smoothing at scale t is expressed as L(x, y; t) = G(x, y; t) * f(x, y), where G is a Gaussian kernel with variance t and f is the original image. The parameter t controls the degree of smoothing: small t preserves detail, large t removes fine structure. The Gaussian kernel is connected to the heat equation, with L evolving as ∂L/∂t = (1/2)∇²L, making the Gaussian the fundamental solution. The smoothing also satisfies a semigroup (cascade) property: L(·; t1 + t2) = G(·; t1) * L(·; t2), so smoothing in two steps equals smoothing once at the combined scale.

A key property of scale-space representations is that increasing the scale tends to remove fine details without introducing new structures; in particular, coarse scales do not create new local extrema (this holds strictly in one dimension; in higher dimensions the analogous non-enhancement property holds). This causality underpins many multi-scale analysis techniques and justifies using derivatives of L with respect to space and scale for feature detection.

Applications include multi-scale feature detection and blob localization (via the Laplacian of Gaussian or Difference of Gaussians), edge and corner detection, texture analysis, and motion or video analysis with spatio-temporal scale-space.
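The semigroup (cascade) property stated earlier can be checked numerically: filtering with variance t1 and then t2 should match a single filtering at variance t1 + t2, up to the discretization error of the sampled kernels. A minimal sketch assuming SciPy:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
f = rng.random((64, 64))

t1, t2 = 2.0, 3.0
# Two-step smoothing: G(t2) * (G(t1) * f)
two_step = gaussian_filter(gaussian_filter(f, sigma=np.sqrt(t1)),
                           sigma=np.sqrt(t2))
# One-step smoothing at the combined scale: G(t1 + t2) * f
one_step = gaussian_filter(f, sigma=np.sqrt(t1 + t2))

# Agreement is approximate because the Gaussian is sampled and
# truncated; the two results coincide to small numerical error.
max_error = np.abs(two_step - one_step).max()
```

The small residual `max_error` reflects kernel sampling and truncation rather than any failure of the continuous-domain identity.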
Extensions cover spatio-temporal scale-space for video and the broader scale-space axioms developed by researchers such as Koenderink and Lindeberg.
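The blob-localization application mentioned earlier can be illustrated with a Difference-of-Gaussians (DoG) response, which approximates the Laplacian of Gaussian. This is a sketch on a synthetic input, assuming SciPy; the scale values and blob size are illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic image with a single blob centered at (32, 32).
f = np.zeros((64, 64))
f[32, 32] = 1.0
f = gaussian_filter(f, sigma=3.0)

# DoG: difference of smoothings at two nearby scales (variances).
t1, t2 = 8.0, 10.0
dog = (gaussian_filter(f, sigma=np.sqrt(t2))
       - gaussian_filter(f, sigma=np.sqrt(t1)))

# The blob center appears as an extremum of the DoG response.
y, x = np.unravel_index(np.abs(dog).argmax(), dog.shape)
```

For this symmetric input the extremum of |DoG| lands on the blob center; a full detector would additionally search over scale to estimate the blob's size.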