AUC-ROC

AUC-ROC, short for area under the receiver operating characteristic curve, is a widely used performance metric for binary classification. It summarizes the model’s ability to discriminate between the positive and negative classes across all possible decision thresholds. The ROC curve itself plots the true positive rate against the false positive rate as the threshold varies.
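
As a rough sketch of how the curve is traced, the following Python snippet (with made-up labels and scores) sweeps a decision threshold over the scores and records the true positive rate and false positive rate at each step; scikit-learn's roc_curve performs the same bookkeeping.

    import numpy as np

    # Made-up binary labels (1 = positive) and model scores for illustration.
    y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
    y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9])

    def roc_points(y_true, y_score):
        """Sweep the decision threshold and return (FPR, TPR) pairs."""
        thresholds = np.sort(np.unique(y_score))[::-1]
        n_pos = (y_true == 1).sum()
        n_neg = (y_true == 0).sum()
        points = []
        for t in thresholds:
            pred = y_score >= t                          # classify as positive at this threshold
            tpr = (pred & (y_true == 1)).sum() / n_pos   # true positive rate
            fpr = (pred & (y_true == 0)).sum() / n_neg   # false positive rate
            points.append((fpr, tpr))
        return points

    for fpr, tpr in roc_points(y_true, y_score):
        print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")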

The area under this curve (AUROC) has a probabilistic interpretation: it equals the chance that a randomly chosen positive instance will receive a higher score from the model than a randomly chosen negative instance. An AUROC of 0.5 indicates no discriminative ability beyond random guessing, while 1.0 represents perfect separation. Values between 0.7 and 0.9 are generally considered good to excellent, though interpretation depends on context.
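
A minimal sketch of this interpretation, again on made-up labels and scores: the AUROC is estimated directly as the fraction of (positive, negative) pairs in which the positive instance receives the higher score, counting ties as one half, and the result agrees with scikit-learn's roc_auc_score.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Made-up labels and scores for illustration.
    y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
    y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9])

    pos_scores = y_score[y_true == 1]
    neg_scores = y_score[y_true == 0]

    # Fraction of positive/negative pairs ranked correctly (ties count as 0.5).
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    pairwise_auc = wins / (len(pos_scores) * len(neg_scores))

    print(pairwise_auc)                      # chance a positive outranks a negative
    print(roc_auc_score(y_true, y_score))    # same value from the library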

AUROC is threshold-invariant, meaning it evaluates ranking quality rather than the exact predicted probabilities at a fixed threshold. It can be robust to class imbalance, but it does not reflect calibration, i.e., whether predicted probabilities match observed frequencies. In settings with highly skewed costs for misclassifications, or where the emphasis is on a specific range of false positive rates, partial AUROC or alternative metrics like the area under the precision-recall curve (AUPRC) may be more informative.
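
The snippet below illustrates two of these points on the same made-up data: any strictly increasing transform of the scores leaves the AUROC unchanged, since only the ranking matters, and scikit-learn exposes both a standardized partial AUROC (the max_fpr argument of roc_auc_score) and average precision (average_precision_score, a common summary of the precision-recall curve) for situations where those are more informative.

    import numpy as np
    from sklearn.metrics import roc_auc_score, average_precision_score

    # Made-up labels and scores for illustration.
    y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
    y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9])

    # Threshold invariance: any strictly increasing transform of the scores
    # preserves the ranking, and therefore the AUROC.
    print(roc_auc_score(y_true, y_score))
    print(roc_auc_score(y_true, np.log(y_score)))   # same value
    print(roc_auc_score(y_true, y_score ** 3))      # same value

    # Standardized partial AUROC restricted to false positive rates <= 0.2,
    # and average precision as a summary of the precision-recall curve.
    print(roc_auc_score(y_true, y_score, max_fpr=0.2))
    print(average_precision_score(y_true, y_score))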

Extensions and related measures include time-dependent AUROC for survival analysis, and multi-class adaptations that compute per-class AUROCs with averaging.
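
As a sketch of the multi-class case, the snippet below follows the common one-vs-rest recipe on made-up three-class probabilities: each class is scored against the rest and the per-class AUROCs are macro-averaged, which is one of the behaviours scikit-learn's roc_auc_score offers via multi_class="ovr".

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Made-up 3-class example: true labels and per-class probabilities
    # (each row sums to 1, as predict_proba would produce).
    y_true = np.array([0, 1, 2, 2, 1, 0])
    y_prob = np.array([
        [0.7, 0.2, 0.1],
        [0.2, 0.5, 0.3],
        [0.1, 0.3, 0.6],
        [0.2, 0.2, 0.6],
        [0.3, 0.4, 0.3],
        [0.6, 0.3, 0.1],
    ])

    # One-vs-rest: score each class against the rest, then macro-average.
    print(roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro"))

    # Equivalent by hand: binarize each class and average the binary AUROCs.
    per_class = [roc_auc_score((y_true == k).astype(int), y_prob[:, k]) for k in range(3)]
    print(np.mean(per_class))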

Computation typically uses the trapezoidal rule under the ROC curve, and many machine learning libraries provide built-in AUROC evaluation.
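
For instance, applying the trapezoidal rule to the (FPR, TPR) points of the ROC curve reproduces the value reported by a built-in implementation; the sketch below uses scikit-learn's roc_curve and roc_auc_score on made-up data.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Made-up labels and scores for illustration.
    y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
    y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9])

    # Trapezoidal rule applied to the (FPR, TPR) points of the ROC curve.
    fpr, tpr, _ = roc_curve(y_true, y_score)
    auc_trapezoid = np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0)

    print(auc_trapezoid)                    # area computed by hand
    print(roc_auc_score(y_true, y_score))   # the library's built-in AUROC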

AUROC remains a standard benchmark for comparing classifier performance, especially when the priority is ranked discrimination rather than probability calibration.