
Min-max normalization

Minmaksnormalisering (min-max normalization) is a data preprocessing technique used to rescale numerical features to a fixed range, typically [0, 1]. It preserves the relative ordering of values and the shape of the distribution while constraining the scale of each feature, which can improve the performance of many machine learning algorithms.

The standard formula for a value x in a feature with minimum xmin and maximum xmax is

x' = (x − xmin) / (xmax − xmin).

If xmin equals xmax, the feature has no variation and x' is commonly set to 0 (or another constant such as 0.5, depending on convention). In practice, the extrema are computed from the training data and the same transformation is applied to all subsequent data, including test data.

Variants and considerations: scaling to [−1, 1] can be achieved with a small modification to the formula, for example x' = 2 · (x − xmin) / (xmax − xmin) − 1.

Applications of min-max normalization include algorithms that rely on distance measurements or gradient-based optimization, such as k-nearest neighbors, support vector machines, neural networks, and many regression models. It is also used to standardize features before principal component analysis or other dimensionality reduction techniques.

Advantages of min-max normalization include its simplicity, the preservation of relationships among values, and the production of uniformly scaled features, which can improve convergence in some learning algorithms.
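
As a concrete illustration, the formula and the constant-feature convention above can be sketched in Python. This is a minimal sketch; the function name min_max_scale and its parameters are illustrative, not from any particular library.

```python
def min_max_scale(values, target_min=0.0, target_max=1.0, constant_value=0.0):
    """Rescale a list of numbers to [target_min, target_max] via min-max normalization."""
    x_min, x_max = min(values), max(values)
    if x_min == x_max:
        # No variation in the feature: fall back to a constant
        # (0.0 here; 0.5 is another common convention).
        return [constant_value for _ in values]
    scale = target_max - target_min
    return [target_min + scale * (x - x_min) / (x_max - x_min) for x in values]

print(min_max_scale([10, 20, 30, 40]))         # endpoints map to 0.0 and 1.0
print(min_max_scale([10, 20, 30, 40], -1, 1))  # the [-1, 1] variant
print(min_max_scale([7, 7, 7]))                # constant feature -> all 0.0
```

Parameterizing the target range covers both the standard [0, 1] case and the [−1, 1] variant with the same formula.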
Disadvantages include sensitivity to outliers, which can skew the min and max and compress the majority of the data into a small interval.
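
The outlier effect can be seen in a small numeric sketch (the data values here are made up for illustration):

```python
# A single large outlier dominates the range and compresses the rest near 0.
data = [1.0, 2.0, 3.0, 4.0, 1000.0]
x_min, x_max = min(data), max(data)
scaled = [(x - x_min) / (x_max - x_min) for x in data]
print([round(s, 4) for s in scaled])  # [0.0, 0.001, 0.002, 0.003, 1.0]
```

Four of the five values end up squeezed into roughly the bottom 0.3% of the target interval.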
New data outside the original min–max range can lead to values outside the target interval unless clipping is applied.
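
Clipping out-of-range values can be sketched as follows. The helper name apply_scaling is hypothetical, and the extrema are assumed to have been computed from training data:

```python
def apply_scaling(values, x_min, x_max, clip=False):
    """Apply a previously fitted min-max transformation; optionally clip to [0, 1]."""
    scaled = [(x - x_min) / (x_max - x_min) for x in values]
    if clip:
        scaled = [min(1.0, max(0.0, s)) for s in scaled]
    return scaled

# Extrema fitted on training data: x_min=0, x_max=100.
print(apply_scaling([-10, 50, 120], 0, 100))             # [-0.1, 0.5, 1.2]
print(apply_scaling([-10, 50, 120], 0, 100, clip=True))  # [0.0, 0.5, 1.0]
```

Without clipping, values below the training minimum or above the training maximum fall outside [0, 1].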
For datasets with significant outliers, robust scaling or standardization (z-score) may be preferred.
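
For comparison, z-score standardization and a robust (median/IQR) scaling can be sketched with Python's standard library; this is illustrative only, and the outlier-heavy data is made up:

```python
import statistics

data = [1.0, 2.0, 3.0, 4.0, 1000.0]

# Z-score standardization: zero mean, unit (sample) standard deviation.
mean = statistics.mean(data)
stdev = statistics.stdev(data)
z = [(x - mean) / stdev for x in data]

# Robust scaling: center on the median, divide by the interquartile range.
median = statistics.median(data)
q1, _, q3 = statistics.quantiles(data, n=4)
robust = [(x - median) / (q3 - q1) for x in data]
```

Because the median and IQR ignore extreme values, the robust version keeps the bulk of the data on a usable scale even with the outlier present.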
In practice, ensure consistent scaling across training, validation, and test sets, and handle constant features that would otherwise cause division by zero.
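
Putting these practical points together, a scaler that fits extrema on training data, reuses them for later data, and handles constant features might be sketched as follows. The class name MinMaxScaler mirrors common library naming, but this is a minimal illustration, not any specific library's implementation:

```python
class MinMaxScaler:
    """Minimal min-max scaler: fit extrema on training rows, reuse them later."""

    def fit(self, rows):
        # Compute per-feature minima and maxima from the training data only.
        columns = list(zip(*rows))
        self.mins = [min(col) for col in columns]
        self.maxs = [max(col) for col in columns]
        return self

    def transform(self, rows):
        # Constant features (min == max) are mapped to 0.0 instead of dividing by zero.
        return [[0.0 if lo == hi else (x - lo) / (hi - lo)
                 for x, lo, hi in zip(row, self.mins, self.maxs)]
                for row in rows]

train = [[1.0, 5.0], [3.0, 5.0], [5.0, 5.0]]  # second feature is constant
test = [[4.0, 5.0]]

scaler = MinMaxScaler().fit(train)
print(scaler.transform(train))  # [[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]]
print(scaler.transform(test))   # [[0.75, 0.0]]
```

Fitting once and transforming train, validation, and test data with the same object is what keeps the scaling consistent across splits.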