Regularization method
Regularization, in statistics and machine learning, refers to techniques that constrain a model's complexity to improve generalization to unseen data. This is typically achieved by adding a penalty term to the loss function or by imposing constraints on the model parameters. The penalty discourages large coefficients, reducing variance at the cost of some bias, which can improve predictive performance when the data are noisy or when there are many features.
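As a concrete illustration of adding a penalty term to a loss function, here is a minimal NumPy sketch of penalized least squares with an L2 (squared-norm) penalty. It is a sketch, not a reference implementation; the names `penalized_loss`, `ridge_fit`, and the strength parameter `lam` are illustrative choices, not from the original text.

```python
import numpy as np

# Penalized least squares: loss(w) = ||y - X w||^2 + lam * ||w||^2.
# lam >= 0 is the regularization strength; lam = 0 recovers ordinary
# least squares, and larger lam shrinks the coefficients harder.
def penalized_loss(w, X, y, lam):
    residual = y - X @ w
    return residual @ residual + lam * (w @ w)

# For the L2 penalty the minimizer has a closed form,
# w* = (X^T X + lam * I)^{-1} X^T y  (the ridge estimator).
def ridge_fit(X, y, lam):
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)
```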
Common regularization methods include L1 regularization (lasso), which adds the sum of the absolute values of the coefficients to the loss and tends to drive some coefficients exactly to zero, performing implicit feature selection; L2 regularization (ridge), which adds the sum of the squared coefficients and shrinks them smoothly toward zero without eliminating them; and elastic net, which combines both penalties.
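The sparsity difference can be seen in a short scikit-learn sketch; the synthetic dataset and the penalty strength `alpha=1.0` are illustrative assumptions. With the same strength, the L1 fit typically zeros out many of the uninformative coefficients, while the L2 fit only shrinks them.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Toy data: 100 samples, 20 features, only 5 of which are informative.
X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)   # L1 penalty
ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty

# L1 typically produces exact zeros; L2 shrinks but rarely hits zero.
print("Lasso zero coefficients:", np.sum(lasso.coef_ == 0))
print("Ridge zero coefficients:", np.sum(ridge.coef_ == 0))
```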
Historically, regularization has roots in Tikhonov regularization, developed in the 1960s for ill-posed inverse problems, and in ridge regression, introduced by Hoerl and Kennard in 1970.
Practically, the regularization strength is typically selected by cross-validation: the model is fit over a grid of candidate values and the value with the best held-out performance is chosen. Data scaling matters because the penalty depends on the magnitudes of the coefficients, and those magnitudes reflect the units of the corresponding features; features are therefore commonly standardized before fitting so that the penalty treats them comparably.
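A sketch of this workflow, assuming scikit-learn; the synthetic dataset and the 5-fold setting are illustrative. Placing standardization inside the pipeline means it is re-fit on each cross-validation training split, which avoids leaking information from the validation folds.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=200, n_features=30, n_informative=8,
                       noise=15.0, random_state=0)

# Standardize features, then pick the penalty strength alpha
# by 5-fold cross-validation over an automatically generated grid.
pipe = make_pipeline(StandardScaler(), LassoCV(cv=5))
pipe.fit(X, y)

print("Selected alpha:", pipe.named_steps["lassocv"].alpha_)
```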