
1. INTRODUCTION

In contrast to traditional machine learning systems, humans do not appear to rely only on hard-wired learning algorithms. Instead, they tend to reflect on their own learning behavior, modifying and tailoring it to the needs of various types of learning problems. To a degree, humans are able to learn how to learn. The thought experiment in this paper is intended to take a step towards `self-referential' machine learning by showing the theoretical possibility of `self-referential' neural networks whose weight matrices can learn to implement and improve their own weight change algorithm, without any significant theoretical limits.

Structure of the paper. Section 2 starts with a general, finite, `self-referential' architecture involving a sequence-processing recurrent neural network (see e.g. Robinson and Fallside [2], Williams and Zipser [8], and Schmidhuber [3]) that can potentially implement any computable function mapping input sequences to output sequences -- the only limitations being the unavoidable time and storage constraints imposed by the architecture's finiteness. These constraints can be relaxed by simply adding storage and/or allowing for more processing time. The major novel aspect of the system is its `self-referential' capability. The network is provided with special input units for explicitly observing performance evaluations (external error signals are visible through these special input units). In addition, it is provided with the basic tools for explicitly reading and quickly changing all of its own adaptive components (weights). This is achieved by (1) introducing an address for each connection of the network, (2) providing the network with output units for (sequentially) addressing all of its own connections (including those connections responsible for addressing connections) by means of time-varying activation patterns, (3) providing special input units whose activations become the weights of the connections currently addressed by the network, and (4) providing special output units whose time-varying activations serve to quickly change the weights of the connections addressed by the network. It can be shown that these unconventional features allow the network (in principle) to compute any computable function mapping algorithm components (weights) and performance evaluations (e.g., error signals) to algorithm modifications (weight changes) -- the only limitations again being unavoidable time and storage constraints. This implies that algorithms running on this architecture can (in principle) change not only themselves but also the way they change themselves, and the way they change the way they change themselves, and so on, essentially without theoretical limits.
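To make the four mechanisms concrete, the following toy sketch (Python/NumPy) shows one possible way of wiring them together. All names, sizes, the +/-1 binary connection codes, and the softmax-based soft addressing are illustrative assumptions made here for brevity; in the actual architecture of Section 2 the special inputs and outputs are ordinary units of the same recurrent network rather than the fixed read-outs used below.

# Toy sketch of the four mechanisms above; all names, sizes, the +/-1
# connection codes and the softmax addressing are illustrative assumptions,
# not the paper's exact formulation (see Section 2 for that).
import numpy as np

rng = np.random.default_rng(0)

n_ext_in, n_hidden = 2, 10
n_special_in = 2                          # special inputs: [error signal, weight just read]
n_in = n_ext_in + n_special_in + n_hidden
W = 0.1 * rng.standard_normal((n_hidden, n_in))   # the single adaptive weight matrix
n_conn = W.size

# (1) an address for each connection: here a +/-1 code of its flat index
n_addr = int(np.ceil(np.log2(n_conn)))
codes = np.array([[1.0 if (c >> b) & 1 else -1.0 for b in range(n_addr)]
                  for c in range(n_conn)])

h = np.zeros(n_hidden)
read_val = error = 0.0
for t in range(10):                       # a dummy input sequence
    x = np.concatenate([rng.standard_normal(n_ext_in), [error, read_val], h])
    h = np.tanh(W @ x)                    # ordinary recurrent dynamics

    # (2) special output units emit a time-varying addressing pattern;
    # for brevity they are read off fixed slices of the hidden state here
    addr_pattern, new_value = h[:n_addr], h[n_addr]
    match = codes @ addr_pattern          # compare pattern against all addresses
    g = np.exp(match - match.max())
    g /= g.sum()                          # differentiable soft addressing

    # (3) the currently addressed weight becomes visible at a special input
    read_val = float(g @ W.ravel())

    # (4) another special output quickly changes the addressed weight(s)
    W += 0.1 * new_value * g.reshape(W.shape)

    # performance evaluations are fed back through a special input unit
    error = float(np.mean(h ** 2))        # dummy error signal

In this sketch the dominant per-step cost is the address comparison, which touches all $n_{conn}$ connections with patterns of length about $\log n_{conn}$; this matches the order of the complexity figure quoted at the end of this section.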

Connections are addressed, analyzed, and manipulated with the help of differentiable functions of the activation patterns across special output units. This allows the derivation of an exact gradient-based initial weight change algorithm for `introspective' supervised sequence learning. The system starts out as a tabula rasa. The initial weight change procedure serves to find improved weight change procedures -- it favors algorithms (weight matrices) that exploit the `introspective' potential of the hard-wired architecture in a useful way, where usefulness is defined solely by conventional performance evaluations (the performance measure used here is the sum of all error signals over all time steps of all training sequences).
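In symbols (the notation and the squared-error form are assumptions made here for illustration, not necessarily the paper's exact definitions): with training sequences indexed by $p$, time steps by $t$, and output units with targets by $k$, and with targets $d_{pk}(t)$ and corresponding output activations $y_{pk}(t)$ defined wherever a target exists, the quantity to be minimized would read $E = \sum_{p} \sum_{t} \sum_{k} (d_{pk}(t) - y_{pk}(t))^2$.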

A disadvantage of the algorithm is its high computational complexity per time step, which is independent of the sequence length and equals $O(n_{conn} \log n_{conn})$, where $n_{conn}$ is the number of connections. Another disadvantage is the large number of local minima of the unusually complex error surface. The purpose of this paper, however, is not to come up with the most efficient `introspective' or `self-referential' weight change algorithm, but to show that such algorithms are possible at all.



