Hard margin

Hard margin is a term used in machine learning, most often in the context of support vector machines (SVMs). A hard-margin classifier seeks a linear decision boundary that strictly separates the training examples of the two classes, with no misclassifications. It relies on the assumption that the data are linearly separable with a positive margin and aims to maximize that margin between the classes.

In a hard-margin linear SVM, given training data (x_i, y_i) with y_i in {+1, -1}, the goal is to find a weight vector w and bias b that solve: minimize (1/2) ||w||^2 subject to y_i (w^T x_i + b) ≥ 1 for all i. Since the width of the margin is 2/||w||, minimizing ||w||^2 is equivalent to maximizing the margin. The decision function is sign(w^T x + b). The support vectors are the samples that lie on the margin, where y_i (w^T x_i + b) = 1.

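As a concrete illustration (not part of the original article), the sketch below fits this formulation with scikit-learn on made-up, linearly separable toy data. SVC is a soft-margin solver, so a very large C is used here to approximate the hard margin, and the fitted w and b are then checked against the constraints above.

    # A minimal sketch, assuming scikit-learn and made-up toy data.
    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0],   # class -1
                  [3.0, 3.0], [4.0, 2.5], [3.5, 4.0]])  # class +1
    y = np.array([-1, -1, -1, 1, 1, 1])

    # SVC solves the soft-margin problem; a huge C approximates C -> infinity.
    clf = SVC(kernel="linear", C=1e10).fit(X, y)
    w, b = clf.coef_[0], clf.intercept_[0]

    # Check the constraints y_i (w^T x_i + b) >= 1 and the decision function.
    print(y * (X @ w + b))        # >= 1 for all i, = 1 at the support vectors
    print(np.sign(X @ w + b))     # sign(w^T x + b) recovers the labels y
    print(2 / np.linalg.norm(w))  # geometric margin width 2/||w||
    print(clf.support_vectors_)   # the samples lying on the margin
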
Hard margins require that the data be perfectly linearly separable; otherwise, no feasible solution exists. The approach is sensitive to outliers and depends on proper feature scaling, since a single point that no hyperplane can place on its correct side makes the constraints infeasible. In practice, hard-margin SVMs are less robust to noise or overlap in the data.

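The feasibility claim can be checked directly: the constraints y_i (w^T x_i + b) ≥ 1 are linear in (w, b), so a linear program with a zero objective decides whether a hard margin exists. The sketch below is an illustration with made-up data and a hypothetical helper name, assuming SciPy; it also shows how mislabeling a single point destroys feasibility.

    # A feasibility check for the hard-margin constraints (sketch).
    import numpy as np
    from scipy.optimize import linprog

    def hard_margin_feasible(X, y):
        """True iff some (w, b) satisfies y_i (w^T x_i + b) >= 1 for all i."""
        n, d = X.shape
        # Rewrite the constraints as -y_i * [x_i, 1] @ [w, b] <= -1.
        A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
        res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=-np.ones(n),
                      bounds=[(None, None)] * (d + 1))
        return res.success

    X = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0], [4.0, 4.0]])
    y = np.array([-1, -1, 1, 1])
    print(hard_margin_feasible(X, y))        # True: the classes are separable

    y_noisy = y.copy()
    y_noisy[0] = 1   # the -1 point at (1, 1) is now sandwiched between +1 points
    print(hard_margin_feasible(X, y_noisy))  # False: no feasible (w, b) exists
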
Relation to soft margin: soft-margin SVMs allow misclassifications via slack variables and a regularization parameter C that trades off margin size against classification errors. Hard margin is the limiting case as C → ∞. Soft-margin methods are generally preferred when data are not perfectly separable or when robustness to noise is needed, while hard-margin concepts remain important for theoretical analysis.

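A quick numeric sketch of this limit (again with made-up toy data, assuming scikit-learn, whose SVC implements the soft-margin problem): as C grows, the smallest value of y_i (w^T x_i + b) rises to 1 and the solution settles at the hard-margin one.

    # Sketch of the C -> infinity limit on separable toy data.
    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.9, 0.9],
                  [3.0, 3.0], [4.0, 3.0], [3.0, 4.0], [2.1, 2.1]])
    y = np.array([-1, -1, -1, -1, 1, 1, 1, 1])

    for C in (0.01, 1.0, 100.0, 1e6):
        clf = SVC(kernel="linear", C=C).fit(X, y)
        w, b = clf.coef_[0], clf.intercept_[0]
        worst = (y * (X @ w + b)).min()
        # Small C tolerates margin violations (worst < 1); large C drives
        # worst toward 1, recovering the hard-margin constraints.
        print(f"C={C:g}  ||w||={np.linalg.norm(w):.3f}  "
              f"min y_i(w^T x_i + b)={worst:.3f}")

On separable data, once C is large enough that no slack is used, increasing it further leaves the solution unchanged.
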
See also soft-margin SVM, support vector machine, and kernel methods.