Regularisation
Regularisation, in statistics, machine learning, and numerical analysis, refers to techniques that introduce additional information to constrain a model or ill-posed problem. The goal is to prevent overfitting, improve generalization to unseen data, and stabilize computations by penalizing extreme parameter values or overly complex solutions.
In supervised learning, regularisation is commonly implemented by adding a penalty term to the loss function, scaled by a hyperparameter that controls the regularisation strength; common choices are the L2 penalty (ridge), which shrinks parameter values toward zero, and the L1 penalty (lasso), which additionally drives some of them exactly to zero.
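A minimal sketch of the penalised-loss idea in NumPy, assuming a linear model with squared-error loss and an L2 penalty; the function names, learning rate, and toy data are illustrative, not from the source:

```python
import numpy as np

def ridge_loss(w, X, y, lam):
    """Squared-error loss plus an L2 penalty lam * ||w||^2 on the weights."""
    residual = X @ w - y
    return residual @ residual + lam * w @ w

def fit_ridge_gd(X, y, lam=1.0, lr=1e-3, n_steps=5000):
    """Gradient descent on the penalised loss; lam sets the regularisation strength."""
    w = np.zeros(X.shape[1])
    for _ in range(n_steps):
        grad = 2 * X.T @ (X @ w - y) + 2 * lam * w  # gradient of loss + penalty
        w -= lr * grad
    return w

# Toy usage: a larger lam shrinks the fitted weights toward zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=100)
print(fit_ridge_gd(X, y, lam=0.1))
print(fit_ridge_gd(X, y, lam=100.0))  # noticeably smaller weights
```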
Bayesian interpretation: regularisation can be viewed as imposing a prior distribution on model parameters; a Gaussian prior corresponds to an L2 (ridge) penalty, while a Laplace prior corresponds to an L1 (lasso) penalty, and the regularised estimate is the maximum a posteriori (MAP) solution.
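As a worked derivation under standard assumptions (Gaussian observation noise with variance σ² and a zero-mean Gaussian prior with variance τ²; the symbols are illustrative), maximising the posterior is equivalent to minimising an L2-penalised loss:

```latex
% MAP estimation: Gaussian likelihood y_i ~ N(w^T x_i, \sigma^2),
% Gaussian prior w ~ N(0, \tau^2 I).
\hat{w}_{\mathrm{MAP}}
  = \arg\min_w \bigl[-\log p(y \mid X, w) - \log p(w)\bigr]
  = \arg\min_w \sum_{i} \bigl(y_i - w^\top x_i\bigr)^2
    + \lambda \lVert w \rVert_2^2,
\qquad \lambda = \frac{\sigma^2}{\tau^2}
% (after multiplying through by 2\sigma^2 and dropping constants).
```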
In inverse problems and numerical analysis, regularisation (such as Tikhonov regularisation) stabilizes solutions to ill-conditioned problems, such as linear systems whose matrices are nearly singular, by adding a penalty that damps the influence of small singular values.
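A short NumPy sketch of Tikhonov regularisation via the regularised normal equations; the example matrix and the choice of lam are illustrative:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||Ax - b||^2 + lam * ||x||^2 via (A^T A + lam I) x = A^T b.

    Adding lam * I shifts the eigenvalues of A^T A away from zero,
    stabilising the solve when A is ill-conditioned.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy usage: a nearly rank-deficient matrix.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
print(np.linalg.cond(A))           # very large condition number
print(tikhonov_solve(A, b, 1e-6))  # a small lam already yields a stable solution
```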
Trade-offs: regularisation reduces variance and improves generalization but introduces bias and can complicate interpretation; the regularisation strength is therefore typically tuned, for example by cross-validation, to balance the two effects.
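The sketch below (the synthetic data and the grid of lam values are illustrative) makes the trade-off concrete: as lam grows, training error rises because of the added bias, while held-out error typically falls first, from reduced variance, and then rises again:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 20 features, only 3 carry signal, noisy targets.
n_train, n_test, d = 30, 200, 20
w_true = np.zeros(d)
w_true[:3] = [3.0, -2.0, 1.0]
X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = X_train @ w_true + rng.normal(size=n_train)
y_test = X_test @ w_true + rng.normal(size=n_test)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution of the regularised normal equations."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def mse(X, y, w):
    return np.mean((X @ w - y) ** 2)

# Sweep lam and compare in-sample and held-out error.
for lam in [0.0, 0.01, 0.1, 1.0, 10.0, 100.0]:
    w = ridge_fit(X_train, y_train, lam)
    print(f"lam={lam:7.2f}  train MSE={mse(X_train, y_train, w):6.3f}  "
          f"test MSE={mse(X_test, y_test, w):6.3f}")
```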