Gradient regularization
Gradient regularization refers to a family of regularization techniques in machine learning that penalize large gradients of a model's predictions or loss with respect to its inputs. By discouraging rapid changes in response to small input perturbations, gradient regularization aims to produce smoother functions, improve generalization, and enhance robustness.
Two common formulations are used in supervised learning. The first penalizes the gradient of the loss with respect to the inputs, adding the squared norm of that gradient to the training objective; the second penalizes the Jacobian of the model's predictions with respect to the inputs, encouraging the outputs themselves to vary slowly under small input changes.
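A minimal sketch of the first formulation is given below in JAX, using an illustrative linear classifier with a cross-entropy loss; the model, data shapes, and penalty weight lam are assumptions for the example rather than a prescribed implementation.

    import jax
    import jax.numpy as jnp

    # Illustrative linear classifier; model, loss, and the penalty weight "lam"
    # are assumptions for this sketch, not taken from a specific source.
    def predict(params, x):
        W, b = params
        return x @ W + b

    def cross_entropy(logits, ys):
        log_probs = jax.nn.log_softmax(logits)
        return -jnp.mean(jnp.take_along_axis(log_probs, ys[:, None], axis=1))

    def per_example_loss(params, x, y):
        # Loss for a single example, so we can differentiate with respect to the input x.
        logits = predict(params, x[None, :])[0]
        return -jax.nn.log_softmax(logits)[y]

    def regularized_loss(params, xs, ys, lam=0.1):
        base = cross_entropy(predict(params, xs), ys)            # data-fitting term
        # Gradient penalty: squared norm of d(loss)/d(input), averaged over the batch.
        input_grads = jax.vmap(jax.grad(per_example_loss, argnums=1),
                               in_axes=(None, 0, 0))(params, xs, ys)
        penalty = jnp.mean(jnp.sum(input_grads ** 2, axis=-1))
        return base + lam * penalty

    key = jax.random.PRNGKey(0)
    params = (0.1 * jax.random.normal(key, (4, 3)), jnp.zeros(3))
    xs = jax.random.normal(key, (8, 4))
    ys = jnp.arange(8) % 3
    # Differentiating through the penalty requires second-order gradients,
    # which JAX handles transparently.
    value, grads = jax.value_and_grad(regularized_loss)(params, xs, ys)

Because the penalty itself contains an input gradient, optimizing it requires differentiating a derivative, which is why such schemes are sometimes described as double backpropagation.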
Compared with conventional weight-based regularization (such as an L2 penalty on the weights), gradient regularization targets the smoothness of the learned function with respect to its inputs rather than the magnitude of the parameters themselves.
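The contrast can be seen in a short sketch (again in JAX, with assumed names): an L2 weight penalty depends only on the parameters, whereas a Jacobian penalty of the second formulation depends on how the prediction changes around the actual training inputs.

    import jax
    import jax.numpy as jnp

    # Contrast sketch (names assumed): weight decay depends only on the parameters,
    # while a Jacobian penalty measures how the prediction changes with the input.
    def predict(params, x):
        W, b = params
        return jnp.tanh(x @ W + b)

    def weight_decay(params):
        # L2 penalty on the parameters themselves; no data involved.
        W, b = params
        return jnp.sum(W ** 2) + jnp.sum(b ** 2)

    def jacobian_penalty(params, xs):
        # Mean squared Frobenius norm of d(prediction)/d(input) over the batch,
        # so the penalty depends on where the training inputs actually lie.
        per_example_jac = jax.vmap(jax.jacobian(lambda x: predict(params, x)))
        return jnp.mean(jnp.sum(per_example_jac(xs) ** 2, axis=(-2, -1)))

    key = jax.random.PRNGKey(0)
    params = (jax.random.normal(key, (4, 3)), jnp.zeros(3))
    xs = jax.random.normal(key, (8, 4))
    print(weight_decay(params), jacobian_penalty(params, xs))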
Applications span supervised classification, regression, and representation learning, with particular interest in settings where small input perturbations can noticeably change a model's predictions, such as noisy or adversarially perturbed inputs.