RTRL
Real-Time Recurrent Learning (RTRL) is an online gradient-based learning algorithm for training recurrent neural networks. It computes the gradient of the instantaneous loss with respect to all network weights in real time, allowing weight updates at each time step without backpropagation through time (BPTT).
How it works in brief: For a recurrent network with hidden state h_t that depends on the previous state h_{t-1}, the current input x_t, and the weights (for example h_t = tanh(W h_{t-1} + U x_t)), RTRL maintains a sensitivity matrix P_t = dh_t/dW and propagates it forward with the chain rule: P_t combines the immediate partial derivative of h_t with respect to W and the recurrent term (dh_t/dh_{t-1}) P_{t-1}. The gradient of the instantaneous loss L_t is then (dL_t/dh_t) P_t, available as soon as step t completes, so the weights can be updated online.
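The update above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation: it assumes a vanilla tanh RNN h_t = tanh(W h_{t-1} + U x_t), a squared-error loss on the hidden state, and tracks sensitivities only for the recurrent matrix W; the function name and loss choice are this sketch's own.

```python
import numpy as np

def rtrl_gradient(W, U, xs, ys, h0):
    """Run RTRL forward and return dL_T/dW for the loss at the final step.

    The sensitivity matrix P_t = dh_t/dvec(W) has shape (n, n*n), so plain
    RTRL stores O(n^3) values, and the W @ P product below costs O(n^4)
    operations per time step for a fully connected network.
    """
    n = h0.size
    h = h0
    P = np.zeros((n, n * n))           # dh_0/dW = 0
    for x, y in zip(xs, ys):
        pre = W @ h + U @ x
        h_new = np.tanh(pre)
        D = np.diag(1.0 - h_new ** 2)  # tanh'(pre) on the diagonal
        imm = np.kron(np.eye(n), h)    # immediate partial: d(W h)/dvec(W)
        P = D @ (W @ P + imm)          # forward sensitivity update
        h = h_new
    # instantaneous loss at the final step: L = 0.5 * ||h_T - y_T||^2
    grad_vec = (h - ys[-1]) @ P
    return grad_vec.reshape(n, n), h
```

Because P_t is carried forward, the gradient accounts for the influence of W on every earlier step without storing past activations; a quick finite-difference check on a short random sequence confirms the sketch matches the true gradient.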
Complexity and limitations: RTRL requires maintaining and updating a large set of derivatives, which for a fully connected network with n units means storing on the order of n^3 sensitivity values (n state components times roughly n^2 weights) and performing on the order of n^4 operations per time step. This quartic per-step cost makes plain RTRL impractical for all but small networks and has motivated cheaper variants that sparsify or stochastically approximate the sensitivity matrix.
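To make the scaling concrete, the following short calculation (purely illustrative; the function is this sketch's own) tabulates the sensitivity-storage and per-step operation counts implied by the O(n^3) and O(n^4) figures above:

```python
def rtrl_costs(n):
    """Rough resource counts for plain RTRL on a fully connected RNN
    with n units: the sensitivity tensor dh/dW holds n * n^2 entries,
    and its update multiplies an (n, n) matrix by an (n, n^2) matrix,
    on the order of n^4 multiply-adds per time step."""
    storage = n ** 3
    per_step_ops = n ** 4
    return storage, per_step_ops

for n in (10, 100, 1000):
    s, c = rtrl_costs(n)
    print(f"n={n}: sensitivities={s:,}, ops/step~{c:,}")
```

At n = 1000 the sensitivity tensor alone holds a billion entries, which is why exact RTRL is rarely used at modern network sizes.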
Relation to other methods: RTRL is the online counterpart to BPTT, which unrolls the network through time and propagates errors backward over a stored history of activations. BPTT is far cheaper per step (on the order of n^2 for a fully connected network) but must keep past activations in memory and delays weight updates until the end of a sequence or truncation window; RTRL needs no stored history and yields an exact gradient at every step, at a much higher per-step cost.
History and usage: RTRL was introduced in the late 1980s by Williams and Zipser. Although its computational cost limits direct use in practice, it remains a foundational reference point for research on online training of recurrent networks and for later approximations that trade gradient exactness for scalability.