L2 regularization
L2 regularization, often referred to as weight decay, is a technique used in machine learning to prevent overfitting. It works by adding a penalty term to the model's loss function proportional to the square of the magnitude of the model's weights. This penalty discourages the model from learning overly complex relationships that fit the training data closely but do not generalize to unseen data.
The mathematical formulation of L2 regularization involves adding a term of the form $\lambda \sum_{i=1}^n w_i^2$ to the loss function, where $w_i$ are the model's weights and $\lambda \geq 0$ is a hyperparameter controlling the strength of the penalty: larger values of $\lambda$ shrink the weights more aggressively.
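As a concrete illustration, here is a minimal sketch of an L2-regularized mean-squared-error loss; the names `w`, `X`, `y`, and `lam` are illustrative, not taken from any particular library.

```python
import numpy as np

def l2_regularized_mse(w, X, y, lam=0.01):
    """Mean squared error plus the L2 penalty lam * sum_i w_i**2.

    w   : weight vector, shape (n_features,)
    X   : design matrix, shape (n_samples, n_features)
    y   : targets, shape (n_samples,)
    lam : regularization strength (the lambda in the formula)
    """
    mse = np.mean((X @ w - y) ** 2)
    penalty = lam * np.sum(w ** 2)
    return mse + penalty
```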
By pushing weights towards zero, L2 regularization effectively smooths the learned function, making the model less sensitive to noise and to small fluctuations in individual input features.
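To make the connection to the name "weight decay" concrete, the following sketch shows a single gradient-descent step with the penalty included; the step size `eta` and the gradient argument `grad_loss` are assumed names introduced here for illustration.

```python
import numpy as np

def weight_decay_step(w, grad_loss, lam=0.01, eta=0.1):
    """One gradient-descent step on loss + lam * sum_i w_i**2.

    The gradient of the penalty is 2 * lam * w, so the update equals
    w * (1 - 2 * eta * lam) - eta * grad_loss: the weights are
    multiplicatively shrunk ("decayed") on every step.
    """
    return w - eta * (grad_loss + 2 * lam * w)
```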