gradL2
gradL2 refers to a regularization technique used in machine learning, particularly when training neural networks. It is a variant of L2 regularization, which aims to prevent overfitting by penalizing large weights in a model. The "grad" prefix indicates that the penalty is applied not to the weights themselves but to the gradients of the loss function with respect to the model's parameters.
In standard L2 regularization, a penalty proportional to the square of the weights is added to the loss function, so the training objective becomes J(θ) = L(θ) + λ‖θ‖₂², where λ controls the strength of the penalty.
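As a minimal sketch of the standard case, the penalty is just λ times the squared L2 norm of the weight vector added to whatever base loss the model produces (the function name here is illustrative, not a library API):

```python
import numpy as np

def l2_regularized_loss(base_loss, w, lam=0.1):
    # Standard L2 regularization (weight decay): add lambda * ||w||_2^2
    # to the unregularized loss value.
    return base_loss + lam * np.dot(w, w)

# Example: base loss 1.0, weights [1, 2], lambda = 0.1
# penalty = 0.1 * (1 + 4) = 0.5, total = 1.5
total = l2_regularized_loss(1.0, np.array([1.0, 2.0]), lam=0.1)
```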
The mathematical formulation of gradL2 typically involves adding a term to the objective function that is proportional to the squared L2 norm of the gradient of the loss with respect to the parameters: J(θ) = L(θ) + λ‖∇_θ L(θ)‖₂². Because the penalty depends on the gradient itself, minimizing this objective by gradient descent requires second-order derivatives of the loss (computed in practice via double backpropagation).
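The formulation above can be sketched concretely for linear regression with a mean-squared-error loss, where the gradient has a closed form and the penalty can be evaluated directly. The function names are hypothetical; this is an illustration of the penalty term, not a full training loop:

```python
import numpy as np

def mse_loss(w, X, y):
    # Unregularized mean-squared-error loss for a linear model.
    r = X @ w - y
    return np.mean(r ** 2)

def mse_grad(w, X, y):
    # Analytic gradient of the MSE loss with respect to the weights:
    # grad = (2/n) * X^T (Xw - y)
    return 2.0 * X.T @ (X @ w - y) / len(y)

def gradl2_objective(w, X, y, lam=0.1):
    # gradL2-style objective: base loss plus lambda * ||grad L||_2^2.
    g = mse_grad(w, X, y)
    return mse_loss(w, X, y) + lam * np.dot(g, g)
```

Note that at a minimizer of the base loss the gradient is zero, so the penalty vanishes there; away from a minimum the penalty grows with the gradient magnitude, which is what distinguishes gradL2 from ordinary weight decay.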