Regularisation terms
Regularisation terms are additional components added to the loss function of a machine learning model. Their primary purpose is to prevent overfitting, a phenomenon in which a model learns the training data too closely, including its noise and outliers, and therefore performs poorly on unseen data. By penalising model complexity, regularisation terms encourage simpler solutions that generalise better.
The most common types of regularisation terms are L1 and L2 regularisation. L2 regularisation, also known as ridge regression or weight decay, adds the sum of the squared weights to the loss and shrinks all weights smoothly towards zero. L1 regularisation, also known as lasso, adds the sum of the absolute values of the weights and tends to drive some weights exactly to zero, producing sparse models.
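The two penalties can be computed directly for a given weight vector. The sketch below uses numpy and a hypothetical weight vector chosen purely for illustration:

```python
import numpy as np

# Hypothetical weight vector for illustration.
w = np.array([0.5, -1.2, 3.0, 0.0])

# L1 penalty: sum of absolute values (encourages sparsity).
l1_penalty = np.sum(np.abs(w))   # 0.5 + 1.2 + 3.0 + 0.0 = 4.7

# L2 penalty: sum of squared values (shrinks weights smoothly).
l2_penalty = np.sum(w ** 2)      # 0.25 + 1.44 + 9.0 + 0.0 = 10.69

print(l1_penalty, l2_penalty)
```

In practice the chosen penalty is multiplied by a regularisation strength and added to the data loss, so the optimiser trades fit against weight magnitude.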
The strength of the regularisation is controlled by a hyperparameter, often denoted by lambda (λ). A higher value of λ imposes a stronger penalty on large weights and pushes the model towards simpler solutions; set too high, it causes underfitting, while a value that is too low leaves overfitting unaddressed.
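One way to see the effect of λ is through the closed-form ridge (L2) solution, where the norm of the fitted weights shrinks as λ grows. The data below is synthetic and the function name is illustrative:

```python
import numpy as np

# Synthetic regression data (for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=50)

def ridge_weights(X, y, lam):
    # Closed-form ridge solution: (X^T X + lam * I)^(-1) X^T y.
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Larger lam shrinks the weight vector towards zero.
for lam in [0.0, 1.0, 100.0]:
    w = ridge_weights(X, y, lam)
    print(f"lambda={lam:>6}: ||w|| = {np.linalg.norm(w):.4f}")
```

The printed norms decrease monotonically with λ, which is exactly the trade-off the hyperparameter controls.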
Other forms of regularisation exist, such as elastic net regularisation, which combines both L1 and L2 penalties.
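A minimal sketch of the elastic net penalty, assuming a hypothetical mixing parameter `alpha` that interpolates between the two penalties (the function and parameter names here are illustrative, not a specific library's API):

```python
import numpy as np

def elastic_net_penalty(w, lam, alpha):
    # alpha in [0, 1] mixes the two penalties:
    # alpha = 1 -> pure L1 (lasso-like), alpha = 0 -> pure L2 (ridge-like).
    l1 = np.sum(np.abs(w))
    l2 = np.sum(w ** 2)
    return lam * (alpha * l1 + (1 - alpha) * l2)

w = np.array([0.5, -1.2, 3.0])
penalty = elastic_net_penalty(w, lam=0.1, alpha=0.5)
print(penalty)
```

Combining the penalties lets the model retain the sparsity-inducing behaviour of L1 while keeping the smooth shrinkage of L2, which is useful when features are correlated.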