Error measure

An error measure is a quantitative metric that describes the discrepancy between a measured, estimated, or predicted value and a reference value, typically the true value. In measurement theory, statistics, and data analysis, error measures are used to assess accuracy, precision, and the quality of models, sensors, or experiments. They can be defined for individual observations or aggregated over datasets, time series, or spatial domains.

Common types of error include absolute error and relative error. Absolute error is the magnitude of the difference between the estimate ŷ and the true value y, computed as e_abs = |ŷ − y|. Relative error scales this difference by the magnitude of the true value, e_rel = |ŷ − y| / |y| when y ≠ 0, providing a unitless measure. Squared error, e_sq = (ŷ − y)^2, is also used to emphasize larger discrepancies. In addition, signed error, e = ŷ − y, keeps track of direction.
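
As a minimal Python sketch, the point-wise definitions above can be expressed as small helper functions; the names used here are illustrative rather than standard:

    def signed_error(y_hat, y):
        """Signed error e = ŷ − y; keeps track of direction."""
        return y_hat - y

    def absolute_error(y_hat, y):
        """Absolute error e_abs = |ŷ − y|."""
        return abs(y_hat - y)

    def relative_error(y_hat, y):
        """Relative error e_rel = |ŷ − y| / |y|, defined only for y ≠ 0."""
        if y == 0:
            raise ValueError("relative error is undefined for y = 0")
        return abs(y_hat - y) / abs(y)

    def squared_error(y_hat, y):
        """Squared error e_sq = (ŷ − y)^2; emphasizes larger discrepancies."""
        return (y_hat - y) ** 2

    # Example: an estimate of 10.2 against a true value of 10.0
    print(absolute_error(10.2, 10.0))   # about 0.2 (floating-point rounding)
    print(relative_error(10.2, 10.0))   # about 0.02, i.e. a 2% relative error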

Aggregated error measures are widely used to evaluate models and predictions. Mean Absolute Error (MAE) is the average of absolute errors: MAE = (1/n) ∑ |ŷ_i − y_i|. Mean Squared Error (MSE) and Root Mean Squared Error (RMSE) are the average of squared errors and its square root, respectively. Relative error measures include the Mean Absolute Percentage Error, MAPE = (100/n) ∑ |(ŷ_i − y_i)/y_i|, which expresses errors as percentages, though it is undefined when any y_i = 0. Symmetric MAPE (SMAPE) provides a scale-invariant alternative: SMAPE = (100/n) ∑ [2|ŷ_i − y_i| / (|ŷ_i| + |y_i|)].
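
The aggregated measures can be sketched in the same way; the implementation below is illustrative only and assumes predictions and true values are passed as equal-length numeric sequences:

    import math

    def mae(y_hat, y):
        """Mean Absolute Error: (1/n) ∑ |ŷ_i − y_i|."""
        return sum(abs(p - t) for p, t in zip(y_hat, y)) / len(y)

    def mse(y_hat, y):
        """Mean Squared Error: (1/n) ∑ (ŷ_i − y_i)^2."""
        return sum((p - t) ** 2 for p, t in zip(y_hat, y)) / len(y)

    def rmse(y_hat, y):
        """Root Mean Squared Error: square root of the MSE."""
        return math.sqrt(mse(y_hat, y))

    def mape(y_hat, y):
        """Mean Absolute Percentage Error; undefined if any y_i is 0."""
        return 100 / len(y) * sum(abs((p - t) / t) for p, t in zip(y_hat, y))

    def smape(y_hat, y):
        """Symmetric MAPE: (100/n) ∑ 2|ŷ_i − y_i| / (|ŷ_i| + |y_i|)."""
        return 100 / len(y) * sum(
            2 * abs(p - t) / (abs(p) + abs(t)) for p, t in zip(y_hat, y)
        )

    y_true = [3.0, 5.0, 2.5, 7.0]
    y_pred = [2.5, 5.0, 3.0, 8.0]
    print(mae(y_pred, y_true))    # 0.5
    print(rmse(y_pred, y_true))   # ≈ 0.61
    print(mape(y_pred, y_true))   # ≈ 12.7 (percent)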

Practitioners select error measures based on the context, balancing interpretability, scale sensitivity, and robustness to outliers. In machine learning, many error measures also function as loss functions guiding model optimization.

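A small illustrative comparison (with made-up numbers) shows how the choice between absolute-error and squared-error aggregates affects sensitivity to outliers:

    import math

    # Predictions are close to the truth except for one large outlier.
    y_true = [10.0, 10.0, 10.0, 10.0, 10.0]
    y_pred = [10.5, 9.5, 10.5, 9.5, 30.0]   # last prediction is an outlier

    abs_errors = [abs(p - t) for p, t in zip(y_pred, y_true)]
    mae = sum(abs_errors) / len(abs_errors)
    rmse = math.sqrt(sum(e ** 2 for e in abs_errors) / len(abs_errors))

    print(mae)   # 4.4   — raised moderately by the outlier
    print(rmse)  # ≈ 8.96 — dominated by the single large error

Because squaring magnifies large deviations, RMSE is pulled far more strongly toward the outlier than MAE; whether that behavior is desirable depends on how heavily large errors should be penalized.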