
Conclusion

There is an analogy between the NBB and competitive learning [Grossberg, 1976][Kohonen, 1988][Rumelhart and Zipser, 1986]. Competitive learning can also be interpreted as a shifting of weight substance. There, however, it is the weakly contributing incoming connections of a unit that have to pay the strongly contributing incoming connections. In contrast, the NBB shifts weight from outgoing to incoming connections. This is the key feature used for relating present system states to past states. (Recently we proposed another local learning scheme for recurrent networks, in which a relation between past and present states is established by a second adaptive network [Schmidhuber, 1990a][Schmidhuber, 1990b].)
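To make the contrast concrete, here is a minimal sketch (in Python; the function and parameter names are illustrative) of the classic Rumelhart/Zipser competitive learning rule for the winning unit. The rule conserves the sum of the winner's incoming weights, so weight substance moves from weakly active input lines to strongly active ones; no weight leaves the set of incoming connections.

    def competitive_update(weights, inputs, rate=0.1):
        # weights: the winner's incoming weights, assumed to sum to 1
        # inputs:  binary activities on the input lines
        active = sum(inputs)
        if active == 0:
            return weights
        # active lines gain, all lines pay a proportional tax; the
        # total sum(weights) stays 1, so the shift is purely among
        # incoming connections -- unlike the NBB's outgoing-to-incoming
        # transfer
        return [w + rate * (x / active - w)
                for w, x in zip(weights, inputs)]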

Due to the local nature of all computations, the discrete-time version of the NBB can easily be implemented such that the time complexity of one update cycle (activation changes and weight changes) is $O(n)$, where $n$ is the number of weights in the system. For a particular connection, all the information needed at a given time is its current weight, its contribution during the current time step, and its contribution during the last time step. For a particular unit, all the information needed at a given time is its current activation, the summed contributions it receives during the current time step, and the summed contributions it received during the last time step.
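The following sketch (Python; all names and the exact payoff rule are illustrative simplifications, not the paper's formulas) shows how such an $O(n)$ update cycle can be organized: three passes over the connection list, each touching only the local quantities listed above. Here every connection pays a fixed fraction of its weight into a pot at its source unit, and each pot is divided among that unit's incoming connections in proportion to their contributions during the last time step.

    class Connection:
        def __init__(self, src, dst, weight):
            self.src, self.dst = src, dst
            self.weight = weight
            self.contrib = 0.0       # contribution, current step
            self.contrib_prev = 0.0  # contribution, last step

    def update_cycle(act, connections, fraction=0.1):
        # act: activations indexed by unit; nonnegative in this sketch
        n_units = len(act)

        # pass 1 -- activation changes (a nonlinearity is omitted)
        new_act = [0.0] * n_units
        for c in connections:
            c.contrib_prev = c.contrib
            c.contrib = c.weight * act[c.src]
            new_act[c.dst] += c.contrib

        # pass 2 -- each connection pays into the pot at its source
        pot = [0.0] * n_units       # payments collected per unit
        sum_prev = [0.0] * n_units  # summed contributions, last step
        for c in connections:
            pay = fraction * c.weight
            c.weight -= pay
            pot[c.src] += pay
            sum_prev[c.dst] += c.contrib_prev

        # pass 3 -- pots flow back to the incoming connections that
        # caused the source unit's previous activation: outgoing pays
        # incoming
        for c in connections:
            if sum_prev[c.dst] > 0.0:
                c.weight += pot[c.dst] * c.contrib_prev / sum_prev[c.dst]

        return new_act

Each pass is a single loop over the $n$ connections and uses only a connection's own fields plus two per-unit accumulators, which is exactly the locality claimed above.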

Short-term memory can be identified with activations wandering around feedback loops. Such loops may even become stable: a competitive subset of units that permanently references itself can lead to a local dynamic equilibrium of weight flow (and of activation flow running in the opposite direction). Such equilibria may be perturbed by new inputs from the environment or from other competitive subsets that do not participate in the loop.

One difference from Holland's bucket brigade algorithm is that there is no analogue of the creation of new classifiers at run time: the number of connections in an NBB system remains fixed. This is justified by the fact that weights are modifiable, while the `specificity' of a classifier is not. (Compiani, Montanari, Serra, and Valastro consider further relationships between classifier systems and neural networks [Compiani et al., 1989].)

We certainly do not want to suggest that the brain uses a weight shifting mechanism that, for example, physically transports transmitter substance from the synapses of outgoing connections to the synapses of incoming connections. However, we do not want to exclude the possibility that some kind of local feedback mechanism exists whose effects on the synapses are similar to those caused by the NBB.

A major property of the brain seems to be that the motor actions it causes depend on local computations only. The main contribution of this paper is to propose at least one way in which completely local computations within a neural network-like system can lead to goal-directed parallel/sequential behavior.

The NBB represents a general credit assignment scheme for neural network-like structures. `General' often seems to imply `weak'. How `weak' is the NBB? It remains to be seen whether the NBB can be successfully applied to difficult control tasks.

