nontrainable
Nontrainable describes model parameters, components, or operations that do not participate in gradient-based updates during training. In machine learning, most parameters are trainable, meaning they are adjusted to minimize a loss function. Nontrainable elements remain fixed after initialization or selective freezing, though they may still influence the forward pass and predictions.
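The distinction can be made concrete with a minimal sketch in PyTorch (the `Scaler` module and its names are hypothetical): a registered buffer participates in the forward pass but is not a parameter and receives no gradient updates.

```python
import torch
import torch.nn as nn

class Scaler(nn.Module):
    """Toy module with one trainable parameter and one nontrainable buffer."""
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(1))          # trainable
        self.register_buffer("scale", torch.tensor(2.0))   # nontrainable, still used in forward

    def forward(self, x):
        return self.weight * self.scale * x

m = Scaler()
# Only the parameter counts as trainable; the buffer does not.
trainable = sum(p.numel() for p in m.parameters() if p.requires_grad)
print(trainable)  # 1
print(m(torch.tensor(1.0)).item())  # 2.0 — the buffer still shapes the output
```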
In neural networks, it is common to freeze certain layers or submodules to preserve pre-learned representations, as in transfer learning, where a pretrained backbone is kept fixed while a new task-specific head is trained.
Implementation is framework-specific but generally involves preventing gradients from flowing to the nontrainable parameters. In PyTorch, this is typically done by setting `requires_grad = False` on the relevant parameters and, optionally, excluding them from the optimizer.
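A minimal sketch of layer freezing in PyTorch, assuming a hypothetical two-layer network where the first linear layer is frozen and only the second is trained:

```python
import torch
import torch.nn as nn

# Hypothetical network: freeze the first layer, train only the second.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

for param in model[0].parameters():
    param.requires_grad = False  # no gradients will be computed for these

# Pass only the trainable parameters to the optimizer.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.1
)

x = torch.randn(3, 4)
loss = model(x).sum()
loss.backward()

print(model[0].weight.grad)              # None — frozen layer received no gradient
print(model[2].weight.grad is not None)  # True — trainable layer did
```

Filtering the optimizer's parameter list is optional when gradients are already disabled, but it avoids wasted memory for optimizer state on frozen parameters.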
Benefits of nontrainable components include reduced computational cost and memory usage during training, simpler optimization, and a lower risk of overfitting when task-specific training data are scarce.
See also: transfer learning, fine-tuning, frozen layers, trainable parameters, gradient flow.