Säännöllistämistä
Säännöllistämistä, often translated as regularization in English, is a technique used in statistics and machine learning to prevent overfitting. Overfitting occurs when a model learns the training data too well, including its noise and random fluctuations, leading to poor performance on new, unseen data. Regularization methods add a penalty term to the model's objective function, which discourages overly complex models.
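In general terms, a regularized estimator minimizes the usual data-fitting loss plus a weighted penalty on the parameters. The notation below is an illustrative sketch rather than a formula tied to any particular model:

\[
\hat{\theta} = \arg\min_{\theta} \; \mathcal{L}(\theta; X, y) + \lambda \, R(\theta),
\]

where \(\mathcal{L}\) measures how well the model with parameters \(\theta\) fits the data, \(R(\theta)\) is the penalty (typically a norm of the parameters), and \(\lambda \ge 0\) controls the trade-off between fitting the data and keeping the model simple.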
The most common types of regularization are L1 (Lasso) and L2 (Ridge) regularization. L1 regularization adds a penalty proportional to the sum of the absolute values of the model's coefficients; because this penalty can drive some coefficients exactly to zero, Lasso also performs a form of feature selection. L2 regularization adds a penalty proportional to the sum of the squared coefficients, which shrinks all coefficients toward zero but rarely eliminates any of them.
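The sketch below compares the two penalties using scikit-learn's Lasso and Ridge estimators on synthetic data; the dataset size and alpha values are arbitrary choices made only for illustration:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Synthetic regression problem in which only a few features are informative.
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

# L1 penalty: sum of absolute coefficient values.
lasso = Lasso(alpha=1.0).fit(X, y)
# L2 penalty: sum of squared coefficient values.
ridge = Ridge(alpha=1.0).fit(X, y)

# Lasso typically drives uninformative coefficients exactly to zero,
# while Ridge only shrinks them toward zero.
print("Lasso coefficients set to zero:", int(np.sum(lasso.coef_ == 0)))
print("Ridge coefficients set to zero:", int(np.sum(ridge.coef_ == 0)))
```

On data like this, Lasso usually zeroes out most of the uninformative coefficients, whereas Ridge tends to leave all of them non-zero but small.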
The strength of the regularization is controlled by a hyperparameter, often denoted by lambda (λ) or alpha (α). Larger values impose a heavier penalty and produce simpler models, while smaller values let the model fit the training data more closely; in practice the value is commonly chosen by cross-validation.
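As a sketch of how this strength might be tuned, the example below uses scikit-learn's LassoCV to select alpha by cross-validation; the alpha grid and fold count are illustrative assumptions, not recommendations:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

# Search a logarithmic grid of penalty strengths with 5-fold cross-validation.
model = LassoCV(alphas=np.logspace(-3, 2, 30), cv=5).fit(X, y)

print("Selected alpha:", model.alpha_)
print("Non-zero coefficients:", int(np.sum(model.coef_ != 0)))
```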