Regularization
Regularization is a technique used in statistics and machine learning to prevent overfitting. Overfitting occurs when a model learns the training data too well, including its noise and specific patterns, leading to poor performance on new, unseen data. Regularization works by adding a penalty term to the model's objective function, which discourages overly complex models. This penalty is typically a function of the model's parameters (coefficients or weights).
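As a minimal sketch of the penalized-objective idea, the snippet below computes a mean-squared-error loss for a linear model with and without an added penalty term; the function and variable names are illustrative, not from any particular library:

```python
import numpy as np

def penalized_loss(w, X, y, lam, penalty):
    """Mean squared error plus lam times a penalty on the weights."""
    residuals = X @ w - y
    mse = np.mean(residuals ** 2)
    return mse + lam * penalty(w)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([1.0, 0.0, -2.0])
y = X @ true_w + 0.1 * rng.normal(size=50)

w = np.array([1.0, 0.5, -2.0])
l2_penalty = lambda w: np.sum(w ** 2)  # sum of squared weights

loss_plain = penalized_loss(w, X, y, lam=0.0, penalty=l2_penalty)
loss_reg = penalized_loss(w, X, y, lam=1.0, penalty=l2_penalty)
# With nonzero weights, the penalized loss is strictly larger,
# so the optimizer is pushed toward smaller (simpler) weights.
print(loss_plain < loss_reg)
```

The penalty does not change the data-fit term; it only adds a cost that grows with the magnitude of the weights, so minimizing the total objective trades fit against complexity.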
The core idea is to reduce the model's variance at the expense of a slight increase in bias, which typically improves generalization to unseen data (the bias-variance trade-off).
Common types of regularization include L1 regularization (Lasso) and L2 regularization (Ridge). L1 regularization adds a penalty proportional to the sum of the absolute values of the coefficients, which can drive some coefficients exactly to zero and thus performs a form of feature selection. L2 regularization adds a penalty proportional to the sum of the squared coefficients, which shrinks all coefficients toward zero without eliminating them entirely.
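The shrinkage effect of L2 regularization can be sketched with the closed-form ridge solution for a linear model, w = (XᵀX + λI)⁻¹Xᵀy; this is a small numpy illustration under an assumed linear-regression setup, not a production implementation:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: solves (X^T X + lam*I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
w_true = np.array([2.0, -1.0, 0.5, 0.0])
y = X @ w_true + 0.1 * rng.normal(size=100)

w_ols = ridge_fit(X, y, lam=0.0)     # ordinary least squares (no penalty)
w_ridge = ridge_fit(X, y, lam=50.0)  # heavily penalized

# Increasing lambda shrinks the fitted weight vector toward zero.
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ols))
```

With λ = 0 the formula reduces to ordinary least squares; as λ grows, the λI term dominates and the coefficients are pulled toward zero, which is exactly the variance-reducing shrinkage described above. (Lasso has no closed form and is usually fit by coordinate descent or proximal methods.)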