RReLU
RReLU, or Randomized Leaky Rectified Linear Unit, is an activation function used in artificial neural networks. It is a variation of the standard ReLU function, which outputs the input directly if it is positive and zero otherwise. RReLU keeps the positive branch of ReLU unchanged but replaces the zero output for negative inputs with a small, randomly scaled copy of the input: RReLU(x) = x for x ≥ 0 and RReLU(x) = a·x for x < 0, where the slope a is drawn from a uniform distribution U(lower, upper) over a predefined range.

During training, this slope is sampled anew for each negative activation (per element, on every forward pass) rather than being fixed for the whole training run. At test time the randomness is removed and a is replaced by the fixed midpoint of the range, (lower + upper) / 2, so inference is deterministic.

The small, non-zero slope for negative inputs mitigates the "dying ReLU" problem, where neurons whose inputs are consistently negative receive zero gradient and stop learning; with RReLU these neurons can recover and continue contributing to training. Randomizing the slope, rather than fixing it as in Leaky ReLU, additionally injects noise that acts as a mild regularizer. The bounds of the sampling range are hyperparameters that can be tuned during model development; a common default from the original proposal is lower = 1/8 and upper = 1/3.
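A minimal sketch of the forward pass under these defaults, assuming the U(1/8, 1/3) sampling range; the function name rrelu and the training flag are illustrative rather than any particular library's API:

```python
import numpy as np

def rrelu(x, lower=1/8, upper=1/3, training=True, rng=None):
    """Illustrative RReLU forward pass (sketch, not a library implementation).

    During training, each negative element is scaled by a coefficient drawn
    from U(lower, upper); at evaluation time the fixed midpoint
    (lower + upper) / 2 is used instead.
    """
    rng = np.random.default_rng() if rng is None else rng
    if training:
        # One random slope per element, resampled on every forward pass.
        a = rng.uniform(lower, upper, size=x.shape)
    else:
        # Deterministic behavior at test time: use the expected slope.
        a = (lower + upper) / 2.0
    return np.where(x >= 0, x, a * x)

# Example usage on a small batch of pre-activations.
x = np.array([[-2.0, -0.5, 0.0, 1.5]])
print(rrelu(x, training=True))   # negative entries scaled by random slopes
print(rrelu(x, training=False))  # negative entries scaled by ~0.229
```

In practice, deep learning frameworks expose this activation directly (for example, torch.nn.RReLU in PyTorch, which uses the same default range).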