One-vs-rest

One-vs-rest (OvR), also known as one-vs-all (OvA), is a strategy for multiclass classification that reduces the problem to multiple binary classifications. For a problem with k classes, OvR trains k independent binary classifiers. Each classifier learns to distinguish one class from the rest. At prediction time, every classifier provides a score or probability for its class, and the final predicted class is the one with the highest score.
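The prediction rule can be sketched as follows. The per-class scoring functions here are hypothetical stand-ins for trained binary classifiers, not part of any particular library:

```python
# One-vs-rest prediction: each of the k binary classifiers scores its own
# class, and the class whose classifier returns the highest score wins.
# The scorers below are illustrative stand-ins for trained models.

def pick_class(x, scorers):
    """Return the class whose binary scorer assigns x the highest score.

    scorers maps each class label to a function x -> score (e.g. a
    probability or decision value from that class's binary classifier).
    """
    return max(scorers, key=lambda label: scorers[label](x))

# Toy example: three "classifiers", each a simple score on a 1-D input.
scorers = {
    "small":  lambda x: -x,              # prefers small inputs
    "medium": lambda x: 1 - abs(x - 5),  # peaks around 5
    "large":  lambda x: x - 8,           # prefers large inputs
}

print(pick_class(1.0, scorers))  # "small" scores highest for x = 1.0
```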

Most commonly, the base binary classifier is logistic regression or a linear support vector machine, but any binary classifier can be used. In practice, many machine learning libraries provide a wrapper that implements OvR by training one model per class, often with shared data and features.

Advantages of OvR include simplicity and scalability to large numbers of classes, as well as the ability to reuse well-understood binary learners and to calibrate probabilities per class. It can also be robust when some classes are easy to separate.

Limitations include the possibility that the resulting decision boundaries are not globally optimal, because they ignore interactions among classes during training. Performance can suffer if classes are highly imbalanced or if probability estimates are poorly calibrated. Training time grows linearly with the number of classes, since k binary models are built. In contrast to one-vs-one, OvR typically requires fewer models but can require careful thresholding.

Related approaches include one-vs-one (OvO), which trains a binary classifier for every pair of classes, and error-correcting output codes (ECOC), which use a designed coding scheme to combine multiple binary classifiers. OvR is commonly used as a baseline in multiclass tasks such as text classification and image recognition, and is implemented in many libraries under the OneVsRestClassifier or OvR wrappers.
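
As a concrete sketch of the wrapper idea above, the following trains one binary logistic regression per class on "this class vs. the rest" labels and predicts by argmax over the per-class probabilities. All names are illustrative; this is a minimal from-scratch version, not any library's implementation:

```python
import math

# Minimal one-vs-rest wrapper: for each class c, train a binary logistic
# regression on labels (y == c) vs. the rest, then predict the class whose
# binary model assigns the highest probability. Illustrative sketch only.

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def train_binary_logreg(X, y01, lr=0.5, epochs=200):
    """Train binary logistic regression by SGD; return (weights, bias)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y01):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            for j, xj in enumerate(xi):
                w[j] -= lr * err * xj
            b -= lr * err
    return w, b

def train_ovr(X, y):
    """Train one binary model per class; return {class: (weights, bias)}."""
    models = {}
    for c in sorted(set(y)):
        y01 = [1.0 if yi == c else 0.0 for yi in y]  # class c vs. the rest
        models[c] = train_binary_logreg(X, y01)
    return models

def predict_ovr(models, x):
    """Predict the class whose binary model scores x highest."""
    def score(c):
        w, b = models[c]
        return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
    return max(models, key=score)

# Toy data: three well-separated clusters in 2-D.
X = [[0, 0], [0, 1], [1, 0],    # class "a"
     [5, 5], [5, 6], [6, 5],    # class "b"
     [0, 9], [1, 9], [0, 10]]   # class "c"
y = ["a", "a", "a", "b", "b", "b", "c", "c", "c"]

models = train_ovr(X, y)
print(predict_ovr(models, [0.5, 0.5]))  # the "a" model should score highest
```

Note that `train_ovr` builds exactly k models (one per class), whereas a one-vs-one scheme over the same data would build k(k-1)/2 pairwise models.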