gradL1
gradL1 is a regularization technique used in machine learning, particularly in the context of optimization problems. It is a variant of L1 regularization, which is known for promoting sparsity in model parameters, meaning it encourages many of them to become exactly zero. The "grad" in gradL1 likely refers to its relationship with the gradient of the loss function.
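As a point of reference, below is a minimal sketch of standard L1 regularization, where the penalty is the sum of the absolute parameter values scaled by a strength term. The function names, the strength `lam`, and the toy values are illustrative and not taken from any particular gradL1 formulation.

```python
import numpy as np

def l1_penalty(weights, lam=0.01):
    # Standard L1 penalty: lam * sum of absolute parameter values.
    return lam * np.sum(np.abs(weights))

def regularized_loss(base_loss, weights, lam=0.01):
    # Total objective = data-fit loss + L1 penalty on the parameters.
    return base_loss + l1_penalty(weights, lam)

# Usage sketch: parameters already at zero add nothing to the penalty,
# which is what pushes many parameters toward exactly zero during training.
w = np.array([0.5, -0.2, 0.0, 1.3])
print(regularized_loss(base_loss=2.0, weights=w, lam=0.1))  # 2.0 + 0.1 * 2.0 = 2.2
```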
While standard L1 regularization adds the sum of the absolute values of the parameters to the loss, gradL1 instead involves the gradient of the loss function in the penalty. The exact mathematical form of gradL1 is not universally standardized and can vary across different research papers.
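Because no single form is standardized, the following PyTorch sketch shows only one plausible reading of a gradient-based L1 penalty: adding the L1 norm of the gradient of the data-fit loss with respect to the parameters. The linear model, the helper `gradl1_objective`, and the hyperparameters are assumptions made for illustration and may differ from any specific formulation in the literature.

```python
import torch

def gradl1_objective(params, inputs, targets, lam=0.01):
    # Illustrative assumption: penalize the L1 norm of the gradient of the
    # data-fit loss with respect to the parameters. Actual gradL1 variants
    # in the literature may define the penalty differently.
    preds = inputs @ params                      # simple linear model
    data_loss = ((preds - targets) ** 2).mean()  # mean squared error

    # Gradient of the data-fit loss w.r.t. the parameters, kept in the graph
    # (create_graph=True) so the penalty itself can be backpropagated through.
    (grad,) = torch.autograd.grad(data_loss, params, create_graph=True)

    return data_loss + lam * grad.abs().sum()    # add L1 norm of the gradient

# Usage sketch with random data; shapes and values are illustrative only.
torch.manual_seed(0)
X = torch.randn(32, 4)
y = torch.randn(32)
w = torch.zeros(4, requires_grad=True)

loss = gradl1_objective(w, X, y, lam=0.1)
loss.backward()
print(loss.item(), w.grad)
```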