
l0norm

The l0 norm, written ||x||_0, is defined for a vector x in R^n as the number of its nonzero components. It is commonly called the zero norm, but it is not a true norm: it is not homogeneous (||αx||_0 = ||x||_0 for every nonzero scalar α, rather than scaling with |α|), it is not convex, and it is discontinuous at points where components cross zero. Some authors extend the definition to matrices by counting nonzero entries.
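For concreteness, a minimal NumPy sketch of counting nonzero components; the tolerance used for "numerically nonzero" is an illustrative choice, not part of the definition:

```python
import numpy as np

x = np.array([0.0, 3.5, 0.0, -1.2, 0.0])

# ||x||_0: the number of exactly nonzero components.
l0 = np.count_nonzero(x)

# In floating-point data, entries are rarely exactly zero, so in practice
# one often counts entries whose magnitude exceeds a small tolerance.
tol = 1e-10  # illustrative choice
l0_tol = int(np.sum(np.abs(x) > tol))

print(l0, l0_tol)  # both 2 for this x
```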

In signal processing and machine learning, ||x||_0 serves as a sparsity measure and as a hard penalty in optimization. A typical objective is min ||Ax - b||_2^2 subject to ||x||_0 ≤ k, or min ||Ax - b||_2^2 + λ||x||_0. Since l0 minimization is combinatorial and NP-hard, practical methods rely on relaxations or approximations: convex relaxation, which replaces ||x||_0 with ||x||_1 (basis pursuit, the Lasso); nonconvex penalties; iterative hard thresholding; and greedy algorithms such as matching pursuit and orthogonal matching pursuit.

The l0 norm is central to sparse representation and compressed sensing. Under suitable conditions on the sensing matrix A (for example, restricted isometry properties) and sufficient sparsity, accurate recovery is possible with certain algorithms, even in the presence of noise. The l0 norm also underpins feature selection, dictionary learning, and sparse coding by providing a formal measure of sparsity and guiding sparsity-promoting techniques.
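The constrained form above, min ||Ax - b||_2^2 subject to ||x||_0 ≤ k, can be attacked directly with iterative hard thresholding. A minimal sketch in NumPy; the step size, iteration count, and toy problem dimensions are illustrative choices, not tuned values:

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x; zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def iht(A, b, k, iters=200):
    """Iterative hard thresholding for min ||Ax - b||_2^2 s.t. ||x||_0 <= k."""
    # Conservative step size 1 / ||A||_2^2 keeps the gradient step stable.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # Gradient step on the least-squares term, then project back onto
        # the set of k-sparse vectors by hard thresholding.
        x = hard_threshold(x + step * (A.T @ (b - A @ x)), k)
    return x

# Toy recovery: a 3-sparse signal observed through a random Gaussian A
# (dimensions and coefficients are illustrative, not from the text).
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [3.0, -2.0, 2.5]
b = A @ x_true
x_hat = iht(A, b, k=3)
```

Each iterate is k-sparse by construction, which is exactly the hard constraint ||x||_0 ≤ k.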
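The greedy alternative, orthogonal matching pursuit, builds the support one column at a time and re-fits the coefficients by least squares at each step. A minimal sketch under the same assumptions (problem sizes and the toy data are illustrative):

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedily add the column of A most
    correlated with the current residual, then re-fit by least squares."""
    support = []
    residual = b.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        cols = A[:, support]
        coef, *_ = np.linalg.lstsq(cols, b, rcond=None)
        residual = b - cols @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy setup: a 3-sparse signal and a random Gaussian sensing matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[[10, 70, 150]] = [3.0, -2.5, 2.0]
b = A @ x_true
x_hat = omp(A, b, k=3)
```

In the noiseless case, once the support is identified correctly the least-squares re-fit drives the residual to zero, so the recovery is exact up to numerical precision.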