L2 regularization
L2 regularization, also known as ridge regression, is a technique used in machine learning to prevent overfitting by adding a penalty to the loss function. The penalty is proportional to the squared magnitude of the coefficients, which encourages the model to keep them small. It is typically added to the ordinary least squares loss, giving a modified objective that the model minimizes.

The regularization parameter, often denoted lambda (λ), controls the strength of the penalty. A larger λ increases the penalty, shrinking the coefficients and potentially underfitting, while a smaller λ weakens the penalty, allowing the model to fit the training data more closely but risking overfitting.

L2 regularization is particularly useful in linear regression, logistic regression, and other linear models. It helps to stabilize the model and improve its generalization to unseen data.
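The penalized objective described above, RSS + λ·Σwⱼ², has a closed-form minimizer for linear regression: w = (XᵀX + λI)⁻¹Xᵀy. A minimal sketch in NumPy (the function name `ridge_fit` and the toy data are illustrative, not from any particular library):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge regression via the closed form: w = (X^T X + lam*I)^-1 X^T y."""
    n_features = X.shape[1]
    # Adding lam * I to X^T X is the L2 penalty; it also makes the system
    # well-conditioned even when columns of X are correlated.
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Toy data: 100 samples, 5 features, known true coefficients plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([2.0, -1.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=100)

w_small = ridge_fit(X, y, lam=0.01)    # weak penalty: close to the OLS fit
w_large = ridge_fit(X, y, lam=1000.0)  # strong penalty: coefficients shrink toward zero

print(np.linalg.norm(w_small), np.linalg.norm(w_large))
```

Comparing the two prints illustrates the λ trade-off: the large-λ solution has a much smaller coefficient norm, which is exactly the shrinkage (and potential underfitting) discussed above.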