
On-Line Versus Off-Line Learning

The off-line version of the algorithm waits until the end of an episode to compute the final change of $W_S$, summing the changes computed at each time step. The on-line version changes $W_S$ at every time step, assuming that $\eta$ is small enough to avoid instabilities [Williams and Zipser, 1989]. An interesting property of the on-line version is that episode boundaries need not be specified ("all episodes blend into each other" [Williams and Zipser, 1989]).
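The contrast between the two update schedules can be sketched as follows. This is a minimal illustration, not the actual algorithm: it assumes the per-step weight changes have already been computed (in real RTRL they depend on the evolving weights, so the two versions agree only approximately for small $\eta$), and the function names and array shapes are invented for the example.

```python
import numpy as np

def offline_update(W, step_changes, eta):
    """Off-line: accumulate all per-step changes over the episode,
    then apply the summed change once at the end."""
    return W - eta * np.sum(step_changes, axis=0)

def online_update(W, step_changes, eta):
    """On-line: apply each step's change immediately.  No episode
    boundary is needed; updates simply continue step after step."""
    for g in step_changes:
        W = W - eta * g
    return W

# Toy weights and a short "episode" of precomputed per-step changes.
W = np.array([[0.5, -0.2], [0.1, 0.3]])
steps = np.array([[[0.1, 0.0], [0.0, 0.2]],
                  [[-0.1, 0.1], [0.2, 0.0]]])
eta = 0.01
```

Because the per-step changes here are fixed in advance, both schedules yield identical weights; in the true on-line algorithm they merely stay close for sufficiently small $\eta$, which is the stability condition cited above.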

Juergen Schmidhuber 2003-02-13
