Classification accuracy

Classification accuracy is a metric that measures the proportion of correct predictions made by a classifier on a data set. It is computed as the number of correctly labeled instances divided by the total number of instances. In multiclass problems, the overall accuracy is the fraction of samples for which the predicted label matches the true label. This simple measure is widely used to provide a quick assessment of a model’s performance.
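
As a minimal sketch in Python (the label lists below are made-up illustrative values), accuracy can be computed directly from paired true and predicted labels:

    # Minimal sketch: accuracy as the fraction of matching label pairs.
    # The label lists are made-up illustrative values.
    y_true = ["cat", "dog", "dog", "bird", "cat"]
    y_pred = ["cat", "dog", "cat", "bird", "cat"]

    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    print(accuracy)  # 4 of 5 predictions match -> 0.8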

Computation and related concepts: Accuracy is typically estimated on a held-out test set or via cross-validation. For a given split, it equals the mean of the indicator that a prediction is correct across all samples. The confusion matrix provides a detailed view, from which per-class accuracy, macro-averaged accuracy, and micro-averaged accuracy can be derived. Accuracy can be misleading on imbalanced data, where it may be driven by the majority class. In addition, accuracy does not reveal the confidence of predictions or the calibration of predicted probabilities, so it is often complemented by metrics such as precision, recall, F1-score, ROC AUC, or calibration measures.
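
The per-class and averaged variants can be read off a confusion matrix. The following Python sketch assumes NumPy and uses made-up counts; per-class accuracy is taken here as each class's diagonal entry divided by its row total (its recall), which is one common convention:

    import numpy as np

    # Sketch: per-class, macro-, and micro-averaged accuracy from a
    # confusion matrix (rows = true class, columns = predicted class).
    # The counts are made up for illustration.
    cm = np.array([[50,  5,  0],
                   [10, 30,  5],
                   [ 0,  5, 45]])

    per_class = np.diag(cm) / cm.sum(axis=1)  # diagonal / row total (per-class recall)
    macro_acc = per_class.mean()              # unweighted mean over classes
    micro_acc = np.trace(cm) / cm.sum()       # equals overall accuracy

    print(per_class, macro_acc, micro_acc)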

Usage and interpretation: Accuracy is intuitive and convenient for quick comparisons, but its interpretation depends on the data set characteristics and the task. It is useful to report alongside other metrics and the confusion matrix to identify common error patterns. When presenting results, it is common to include the evaluation procedure (train-test split or cross-validation), as well as any confidence intervals to convey uncertainty.
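
One simple way to attach a confidence interval to a reported accuracy is the normal-approximation (Wald) interval; bootstrap or Wilson intervals are common alternatives. A Python sketch with made-up counts:

    import math

    # Sketch: 95% normal-approximation (Wald) interval for accuracy on a
    # held-out test set. n_correct and n_total are illustrative, not real results.
    n_correct, n_total = 412, 500
    acc = n_correct / n_total
    half_width = 1.96 * math.sqrt(acc * (1 - acc) / n_total)
    print(f"accuracy = {acc:.3f} +/- {half_width:.3f}")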