PReLU
PReLU, or parametric rectified linear unit, is an activation function used in neural networks. It generalizes the standard rectified linear unit by allowing a small, learnable slope for negative inputs. The function is defined as f(x) = x if x is nonnegative, and f(x) = a x if x is negative, where a is a parameter learned during training. The derivative with respect to x is 1 for nonnegative inputs and a for negative inputs, while the derivative with respect to a is x for negative inputs and 0 otherwise.
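The definition and gradients above can be written out directly. The following NumPy sketch (the function names prelu_forward and prelu_backward are illustrative, not from any particular framework) computes the forward pass and both gradients for a single shared slope a:

import numpy as np

def prelu_forward(x, a):
    # f(x) = x for x >= 0, a * x for x < 0
    return np.where(x >= 0, x, a * x)

def prelu_backward(x, a, grad_out):
    # df/dx = 1 for x >= 0, a for x < 0
    grad_x = grad_out * np.where(x >= 0, 1.0, a)
    # df/da = x for x < 0, 0 otherwise (summed because a is shared)
    grad_a = np.sum(grad_out * np.where(x >= 0, 0.0, x))
    return grad_x, grad_a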
Variants of PReLU differ in how the parameter a is shared. A common approach uses a separate a for each channel of a layer (channel-wise PReLU), while a lighter variant shares a single a across all channels (channel-shared PReLU), adding only one extra parameter per layer.
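As an illustration of the two sharing schemes, PyTorch's nn.PReLU exposes this choice through its num_parameters argument (the tensor shapes below are arbitrary):

import torch
import torch.nn as nn

# Channel-shared: one learnable slope for the whole layer (default num_parameters=1).
shared = nn.PReLU()

# Channel-wise: one learnable slope per channel, here for a 64-channel feature map.
channelwise = nn.PReLU(num_parameters=64, init=0.25)

x = torch.randn(8, 64, 32, 32)   # batch of feature maps
y = channelwise(x)               # the slopes are broadcast over the channel dimension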
PReLU was introduced to address limitations of ReLU, notably the dying ReLU problem and potential slow convergence caused by zero gradients for negative inputs; learning the negative slope lets a unit keep a nonzero gradient in that regime instead of fixing the slope at zero (as in ReLU) or at a small constant (as in Leaky ReLU).
In practice, PReLU is supported in many deep learning frameworks and can be applied to convolutional or fully connected layers, where the learnable slopes add only a negligible number of parameters to the model.
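A minimal sketch of such usage, again in PyTorch with an arbitrary toy architecture, places PReLU after both a convolutional and a fully connected layer:

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.PReLU(num_parameters=16),      # channel-wise slopes for the conv output
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 128),
    nn.PReLU(),                       # single shared slope for the linear output
    nn.Linear(128, 10),
)

out = model(torch.randn(4, 3, 32, 32))
print(out.shape)  # torch.Size([4, 10])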