dprime

dprime, often written d', is a sensitivity measure in signal detection theory that quantifies an observer's ability to distinguish signal from noise. In the standard model, the internal responses to noise and to signal plus noise are assumed to be normally distributed with equal variance. d' represents the separation of these distributions in units of the shared standard deviation and is equivalently computed as the difference between the z-scores of the hit rate and the false alarm rate: d' = z(H) − z(F).
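
To make the equal-variance model concrete, the following is a minimal simulation sketch (the parameters, seed, and variable names are illustrative assumptions, not part of the description above): noise responses are drawn from N(0, 1), signal responses from N(d', 1), a fixed criterion converts them into "yes"/"no" decisions, and z(H) − z(F) recovers the d' used to generate the data.

```python
# Minimal simulation sketch of the equal-variance model (assumed parameters).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
true_dprime = 1.5      # separation between the noise and signal distributions
criterion = 0.75       # respond "signal" when the internal response exceeds this value
n_trials = 100_000

noise = rng.normal(0.0, 1.0, n_trials)            # noise-only trials ~ N(0, 1)
signal = rng.normal(true_dprime, 1.0, n_trials)   # signal-plus-noise trials ~ N(d', 1)

H = np.mean(signal > criterion)   # hit rate
F = np.mean(noise > criterion)    # false alarm rate

dprime_est = norm.ppf(H) - norm.ppf(F)   # d' = z(H) - z(F)
print(dprime_est)                        # close to 1.5, up to sampling noise
```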

Calculation typically uses data from a yes/no detection task. H is the proportion of signal trials correctly identified (hits), and F is the proportion of noise trials incorrectly identified as signal (false alarms). To avoid infinite z-values when H or F is 0 or 1, a common correction is applied (for example, H = (hits + 0.5) / (signal trials + 1) and F = (false alarms + 0.5) / (noise trials + 1)); then z-scores are computed from these adjusted rates, and their difference yields d'.
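
A minimal Python sketch of this calculation, assuming raw trial counts as inputs (the function name and the example counts are hypothetical):

```python
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Estimate d' from yes/no trial counts, using the +0.5 / (N + 1) correction."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    # Adjusted rates avoid infinite z-values when a raw rate would be 0 or 1.
    H = (hits + 0.5) / (n_signal + 1)
    F = (false_alarms + 0.5) / (n_noise + 1)
    return norm.ppf(H) - norm.ppf(F)

# Hypothetical example: 45 hits on 50 signal trials, 10 false alarms on 50 noise trials.
print(dprime(hits=45, misses=5, false_alarms=10, correct_rejections=40))   # ≈ 2.06
```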

Interpretation: d' measures discriminability independent of response bias; a larger d' indicates greater sensitivity in detecting the signal. While d' captures perceptual or cognitive sensitivity, the decision criterion, or bias (how willing the observer is to say "signal"), is a separate quantity. A common bias index is c = −0.5 [z(H) + z(F)], where positive values indicate a conservative bias and negative values indicate a liberal bias.
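
The bias index can be computed from the same adjusted rates; a minimal sketch, with the function name and the example rates chosen for illustration:

```python
from scipy.stats import norm

def criterion_c(H, F):
    """Bias index c = -0.5 * (z(H) + z(F)): positive = conservative, negative = liberal."""
    return -0.5 * (norm.ppf(H) + norm.ppf(F))

# Hypothetical adjusted rates, roughly matching the counts used above.
print(criterion_c(H=0.89, F=0.21))   # ≈ -0.21, i.e. a mildly liberal observer
```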

Extensions and notes: If the equal-variance assumption is violated, alternative methods or nonparametric measures (such as A') can be used. d' is widely applied in psychology, neuroscience, and related fields to compare performance across conditions, tasks, or groups. It is unitless and reflects the separation between the underlying internal distributions.
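
The text above does not give a formula for A'; a commonly used choice is Grier's formula, sketched here under that assumption (function name and example values are hypothetical):

```python
def a_prime(H, F):
    """Nonparametric sensitivity A' (Grier's formula; an assumed choice, not specified above)."""
    if H >= F:
        return 0.5 + ((H - F) * (1 + H - F)) / (4 * H * (1 - F))
    return 0.5 - ((F - H) * (1 + F - H)) / (4 * F * (1 - H))

# Hypothetical rates; A' is about 0.5 at chance and approaches 1 with perfect discrimination.
print(a_prime(H=0.89, F=0.21))   # ≈ 0.91
```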