L1L2

L1L2 refers to a family of regularization penalties that combine L1 and L2 norms in optimization problems. It is used to prevent overfitting and to encourage sparse solutions while stabilizing coefficient estimates.
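As a concrete illustration, the penalty can be computed directly from a coefficient vector. This is a minimal sketch; the function name and example values are illustrative, not part of any particular library.

```python
import numpy as np

def l1l2_penalty(w, lam1, lam2):
    """L1L2 penalty: lam1 * ||w||_1 + lam2 * ||w||_2^2 (illustrative helper)."""
    return lam1 * np.sum(np.abs(w)) + lam2 * np.sum(w ** 2)

w = np.array([0.0, 1.5, -2.0])
print(l1l2_penalty(w, lam1=1.0, lam2=0.5))  # 1.0*3.5 + 0.5*6.25 = 6.625
```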

The standard form: given a parameter vector w in R^p, the L1L2 penalty is P(w) = lambda1 ||w||_1 + lambda2 ||w||_2^2. This combination promotes sparsity due to the L1 term, while the L2 term encourages smaller overall magnitude and shared shrinkage across correlated features.

In statistics and machine learning, the term elastic net refers to a regularization that uses a weighted sum of the L1 norm and the squared L2 norm; some authors simply call this L1L2 regularization. The two formulations yield similar effects; the L2 component helps with correlated features by grouping them.

Optimization: The objective remains convex when lambda1, lambda2 >= 0, enabling efficient optimization with methods such as coordinate descent, proximal gradient, or accelerated proximal gradient (FISTA). The L1 term introduces non-differentiability, handled via soft-thresholding.

Applications: L1L2 regularization is used in linear and logistic regression, generalized linear models, and neural networks to promote sparsity and reduce overfitting, especially in high-dimensional data where p is large relative to n.

See also: L1 norm, L2 norm, elastic net, regularization, proximal methods.
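The soft-thresholding and proximal gradient updates mentioned in the Optimization section can be sketched as follows. This is a minimal sketch under an assumed least-squares loss 0.5 ||Xw - y||^2; the function names and the step-size choice are illustrative, not a definitive implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    # Elementwise soft-thresholding: the proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_l1l2(v, step, lam1, lam2):
    # Closed-form prox of step * (lam1 ||w||_1 + lam2 ||w||_2^2):
    # soft-threshold first, then shrink by the quadratic (L2) factor.
    return soft_threshold(v, step * lam1) / (1.0 + 2.0 * step * lam2)

def proximal_gradient(X, y, lam1, lam2, step=None, iters=500):
    # Proximal gradient on 0.5 ||Xw - y||^2 + lam1 ||w||_1 + lam2 ||w||_2^2
    # (assumed least-squares loss; step = 1/L with L the gradient's
    # Lipschitz constant, i.e. the squared spectral norm of X).
    n, p = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    w = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ w - y)          # smooth part: gradient step
        w = prox_l1l2(w - step * grad, step, lam1, lam2)  # non-smooth part
    return w

# Tiny demo on an identity design: small coefficients are driven exactly to zero.
w_hat = proximal_gradient(np.eye(3), np.array([3.0, 0.1, -2.0]), lam1=1.0, lam2=0.5)
print(w_hat)
```

The key point is that the combined penalty still has a cheap closed-form proximal operator: the L1 part gives soft-thresholding (producing exact zeros), and the quadratic L2 part adds a uniform multiplicative shrinkage, which is why proximal and coordinate-descent methods handle L1L2 efficiently.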