DoG_sigma1_sigma2(I)

DoG_sigma1_sigma2(I) refers to the Difference of Gaussians (DoG) operation applied to an image I, using two Gaussian blur kernels with standard deviations sigma1 and sigma2. It is commonly used in image processing to highlight features at a specific range of spatial scales and to serve as a simple, efficient edge and blob detector.

Mathematically, DoG_sigma1_sigma2(I) is defined as the difference between two Gaussian-blurred versions of I: DoG_sigma1_sigma2(I) = G_sigma1 * I − G_sigma2 * I, where * denotes convolution and G_sigma is the Gaussian kernel with standard deviation sigma. For two-dimensional images, this is typically computed by applying Gaussian blur in both directions. In practice, the Gaussian blur can be implemented efficiently as a separable 2D filter.
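
A minimal sketch in Python, assuming scipy is available; the helper name dog and the float conversion are illustrative choices, not a fixed API (scipy.ndimage.gaussian_filter performs the separable 1-D passes internally):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog(image, sigma1, sigma2):
        # Work in float so the subtraction cannot wrap around on uint8 input.
        image = np.asarray(image, dtype=float)
        # DoG_sigma1_sigma2(I) = G_sigma1 * I - G_sigma2 * I (with * as convolution).
        return gaussian_filter(image, sigma1) - gaussian_filter(image, sigma2)

    # Example: band-pass an image around the sigma = 1.0 to 1.6 scale range.
    # response = dog(img, 1.0, 1.6)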

DoG is an approximation to the Laplacian of Gaussian (LoG). By choosing sigma2 relative to sigma1 (commonly sigma2 ≈ k * sigma1, where k ~ 1.5–2, with a common value around 1.6 in scale-space methods), the DoG emphasizes structures whose sizes fall between the two scales. This makes it useful for multi-scale feature detection and noise suppression.
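
The approximation can be made quantitative (a standard scale-space result, stated for example in Lowe's SIFT paper): the Gaussian satisfies the diffusion relation dG_sigma/dsigma = sigma ∇²G_sigma, so for sigma2 = k sigma1 one gets G_sigma2 − G_sigma1 ≈ (k − 1) sigma1² ∇²G_sigma1. The DoG thus matches the scale-normalized LoG up to the constant factor (k − 1), which does not affect the locations of extrema.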

Applications include blob and edge detection, feature detection in computer vision, and use as a component in scale-space pipelines such as the DoG pyramid used in keypoint detectors like SIFT. The method is simple, fast, and widely used due to its effectiveness across a range of imaging contexts.
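
As an illustration of such a pipeline, here is a simplified DoG-pyramid sketch in Python; it is not SIFT's exact construction (SIFT blurs incrementally and keeps extra scales per octave), and the function name and default parameters are assumptions:

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def dog_pyramid(image, num_octaves=4, scales_per_octave=3, sigma0=1.6):
        # Returns one list of DoG images per octave.
        k = 2.0 ** (1.0 / scales_per_octave)  # scale ratio between adjacent levels
        image = np.asarray(image, dtype=float)
        pyramid = []
        for _ in range(num_octaves):
            sigmas = [sigma0 * k ** i for i in range(scales_per_octave + 1)]
            blurred = [gaussian_filter(image, s) for s in sigmas]
            # Adjacent differences give the DoG responses within this octave.
            pyramid.append([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])
            image = zoom(image, 0.5)  # halve the resolution for the next octave
        return pyramid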

Related concepts include Gaussian blur, LoG, and scale-space representations.