ROC curves

ROC curves, or receiver operating characteristic curves, are graphical representations of the performance of a binary classifier as its discrimination threshold is varied. The curve plots the true positive rate (sensitivity) against the false positive rate (1 − specificity) at each threshold. A classifier that assigns higher scores to positive cases will have a curve closer to the top-left corner.
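
To make the construction concrete, the sketch below computes the curve's points directly from scores using plain NumPy. It is a minimal illustration, assuming both classes are present and treating every cut in the sorted scores as a threshold (a production implementation would also collapse tied scores):

    import numpy as np

    def roc_curve_points(y_true, scores):
        # Sort cases by descending score; sweeping the cut point through
        # this ordering visits every threshold.
        order = np.argsort(-np.asarray(scores))
        y = np.asarray(y_true)[order]
        tps = np.cumsum(y)        # true positives above each cut
        fps = np.cumsum(1 - y)    # false positives above each cut
        tpr = tps / tps[-1]       # sensitivity
        fpr = fps / fps[-1]       # 1 - specificity
        # Prepend (0, 0), the "call nothing positive" threshold.
        return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))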

The area under the curve (AUC) summarizes the curve in a single number. An AUC of 0.5 indicates random performance, while an AUC of 1 represents perfect discrimination. The AUC can be interpreted as the probability that a randomly chosen positive instance receives a higher score than a randomly chosen negative one. Computation is typically nonparametric, via the trapezoidal rule; tied scores are conventionally counted as half a concordant pair, as in the Mann-Whitney U statistic.
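
Both views of the AUC are easy to compute. The sketch below (illustrative, not from the text) implements the trapezoidal rule over points such as those returned above, and the pairwise probability with ties counted as one half:

    import numpy as np

    def auc_trapezoid(fpr, tpr):
        # Trapezoidal rule over the curve's points (fpr must be sorted).
        return float(np.sum(np.diff(fpr) * (tpr[:-1] + tpr[1:]) / 2.0))

    def auc_pairwise(y_true, scores):
        # P(random positive scores higher than random negative);
        # tied pairs count 1/2, as in the Mann-Whitney U statistic.
        y = np.asarray(y_true)
        s = np.asarray(scores)
        pos, neg = s[y == 1], s[y == 0]
        wins = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return (wins + 0.5 * ties) / (pos.size * neg.size)

The pairwise form is quadratic in sample size, so it is only suited to checking small examples; on data without tied scores the two functions agree.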

ROC curves are widely used to compare binary classifiers across domains such as medicine, finance, and machine learning. They are insensitive to class distribution to some extent, and allow threshold-free comparison; however, they do not reflect calibration, i.e., whether predicted probabilities match actual frequencies. The curves can be extended to multi-class problems (one-vs-rest) and to macro or micro averaging. Alternative evaluation curves, such as precision-recall curves, may be preferred when the positive class is rare.
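
As a sketch of the one-vs-rest extension, the snippet below uses scikit-learn's roc_auc_score; the dataset and model are placeholders:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import label_binarize

    # Placeholder three-class problem; any model with predict_proba works.
    X, y = make_classification(n_samples=500, n_classes=3, n_informative=6,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    proba = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)

    # Macro: unweighted mean of the per-class one-vs-rest AUCs.
    print(roc_auc_score(y_te, proba, multi_class="ovr", average="macro"))

    # Micro: pool all (instance, class) decisions into one binary problem.
    y_bin = label_binarize(y_te, classes=[0, 1, 2])
    print(roc_auc_score(y_bin.ravel(), proba.ravel()))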

Limitations and considerations: ROC performance may be misleading if costs of false positives and false negatives differ greatly; the curve treats all thresholds equally. In practice, both discrimination and calibration should be assessed, using additional metrics and calibration plots. Bootstrap methods are often used to estimate confidence intervals for the AUC.
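
For the bootstrap, a minimal percentile-interval sketch (the function name and defaults here are illustrative):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def bootstrap_auc_ci(y_true, scores, n_boot=2000, alpha=0.05, seed=0):
        # Percentile bootstrap: resample cases with replacement and take
        # the alpha/2 and 1 - alpha/2 quantiles of the resampled AUCs.
        rng = np.random.default_rng(seed)
        y = np.asarray(y_true)
        s = np.asarray(scores)
        aucs = []
        while len(aucs) < n_boot:
            idx = rng.integers(0, y.size, y.size)
            if y[idx].min() == y[idx].max():   # resample lost a class: redraw
                continue
            aucs.append(roc_auc_score(y[idx], s[idx]))
        return tuple(np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)]))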
