L1L2
L1L2 refers to a family of regularization penalties that combine L1 and L2 norms in optimization problems. It is used to prevent overfitting and to encourage sparse solutions while stabilizing coefficient estimates.
The standard form: given a parameter vector w in R^p, the L1L2 penalty is P(w) = lambda1 ||w||_1 + lambda2 ||w||_2^2, where lambda1, lambda2 >= 0 control the strength of the sparsity-inducing L1 term and the shrinkage-inducing squared-L2 term, respectively.
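The penalty is simple to compute directly from its definition. A minimal pure-Python sketch (the function name l1l2_penalty is ours, not from any library):

```python
def l1l2_penalty(w, lam1, lam2):
    """Compute P(w) = lam1 * ||w||_1 + lam2 * ||w||_2^2 for a list of weights."""
    l1 = sum(abs(wi) for wi in w)        # L1 norm: sum of absolute values
    l2_sq = sum(wi * wi for wi in w)     # squared L2 norm: sum of squares
    return lam1 * l1 + lam2 * l2_sq


# Example: w = [1, -2] gives ||w||_1 = 3 and ||w||_2^2 = 5.
print(l1l2_penalty([1.0, -2.0], 1.0, 0.5))  # 1.0*3 + 0.5*5 = 5.5
```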
In statistics and machine learning, the term elastic net refers to a regularization that uses a weighted combination of the L1 and L2 penalties, commonly written with a single overall strength lambda and a mixing parameter alpha in [0, 1] that interpolates between pure L2 regularization (alpha = 0) and pure L1 regularization (alpha = 1).
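The (lambda, alpha) and (lambda1, lambda2) parameterizations are interchangeable. A small sketch, assuming the scikit-learn-style convention lambda * (alpha * ||w||_1 + (1 - alpha)/2 * ||w||_2^2); the helper name is hypothetical:

```python
def elastic_net_to_l1l2(lam, alpha):
    """Convert a (strength, mixing) pair to separate (lam1, lam2) weights.

    Assumes the common convention lam * (alpha*||w||_1 + (1-alpha)/2*||w||_2^2),
    so lam1 = lam * alpha and lam2 = lam * (1 - alpha) / 2.
    """
    return lam * alpha, lam * (1.0 - alpha) / 2.0


# Example: equal mixing (alpha = 0.5) at strength 1.0.
print(elastic_net_to_l1l2(1.0, 0.5))  # (0.5, 0.25)
```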
Optimization: The objective remains convex when lambda1, lambda2 >= 0, enabling efficient optimization with methods such as coordinate descent and proximal gradient algorithms (e.g., ISTA/FISTA). When lambda2 > 0, the penalty is strongly convex, which guarantees a unique minimizer.
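Proximal methods are practical here because the L1L2 prox has a closed form: soft-thresholding for the L1 part followed by multiplicative shrinkage for the squared-L2 part. A minimal componentwise sketch (the function name is ours):

```python
def prox_l1l2(v, step, lam1, lam2):
    """Proximal operator of step * (lam1*||w||_1 + lam2*||w||_2^2), componentwise.

    For each component: soft-threshold by step*lam1, then divide by
    (1 + 2*step*lam2), which solves the per-coordinate prox subproblem.
    """
    out = []
    for vi in v:
        u = max(abs(vi) - step * lam1, 0.0)          # soft-thresholding (L1 part)
        sign = 1.0 if vi >= 0 else -1.0
        out.append(sign * u / (1.0 + 2.0 * step * lam2))  # shrinkage (L2 part)
    return out


# Example: the large entry shrinks, the small entry is thresholded to zero.
print(prox_l1l2([3.0, -0.5], 1.0, 1.0, 0.5))  # [1.0, -0.0]
```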
Applications: L1L2 regularization is used in linear and logistic regression, generalized linear models, and neural networks, where the L1 term performs feature selection and the L2 term stabilizes estimates in the presence of correlated features.
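Putting the pieces together, the linear-regression case can be fit with proximal gradient descent: a gradient step on the least-squares loss followed by the elastic-net prox. A self-contained pure-Python sketch on toy data (the function name and default hyperparameters are ours):

```python
def fit_elastic_net(X, y, lam1=0.01, lam2=0.01, lr=0.1, iters=500):
    """Proximal-gradient fit of (1/2n)||Xw - y||^2 + lam1*||w||_1 + lam2*||w||_2^2.

    X is a list of feature rows, y a list of targets. Each iteration takes a
    gradient step on the smooth least-squares term, then applies the
    closed-form L1L2 proximal operator coordinatewise.
    """
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        # residuals r_i = x_i . w - y_i and gradient g = (1/n) X^T r
        r = [sum(X[i][j] * w[j] for j in range(p)) - y[i] for i in range(n)]
        g = [sum(X[i][j] * r[i] for i in range(n)) / n for j in range(p)]
        for j in range(p):
            v = w[j] - lr * g[j]                          # gradient step
            u = max(abs(v) - lr * lam1, 0.0)              # soft-thresholding
            w[j] = (u if v >= 0 else -u) / (1.0 + 2.0 * lr * lam2)  # shrinkage
    return w


# Toy data with true slope 2; the penalties shrink the estimate slightly.
w = fit_elastic_net([[1.0], [2.0], [3.0]], [2.0, 4.0, 6.0])
print(w)  # close to [2.0], pulled slightly toward zero by the penalty
```

With both penalties set to zero this reduces to plain gradient descent on least squares; increasing lam1 drives coefficients exactly to zero, while increasing lam2 shrinks them smoothly.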
See also: L1 norm, L2 norm, elastic net, regularization, proximal methods.