This paper presented a number of possibilities
for ``neural'' redundancy reduction based on Shannon's concept
of information [33].
Many potential sources of redundant information have been neglected,
however (see, e.g., [6]).
Among the things I did not address is
the possibility of redundancy among
learning strategies [31].
Suppose learning algorithm A is good at solving problems
of class $X$ but tends to fail with problems of class $Y$.
Suppose learning algorithm B is good at solving problems
of class $Y$ but tends to fail with problems of class $X$.
But perhaps there is a short algorithm that takes
learning algorithm A as input and outputs
learning algorithm B. If so, then there is
``algorithmic redundancy'' between A and B.
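To make ``short'' slightly more concrete, here is a minimal sketch in standard program-length notation (the universal machine $U$, transforming program $p$, and length function $\ell$ are assumed for illustration here, not part of the original text):

% A minimal sketch, assuming each learning algorithm is identified
% with a binary encoding and $U$ is a fixed universal machine.
% ``Algorithmic redundancy'' between A and B then means there is a
% transforming program $p$ with
\[
  U(p, A) = B
  \qquad \text{and} \qquad
  \ell(p) \ll \ell(B) ,
\]
% i.e., given A as input, $p$ reconstructs B, although $p$ itself is
% much shorter than B's own encoding.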
Variants of algorithmic redundancy allow for
things like ``learning by analogy'', ``learning by chunking'',
``learning how to learn'', etc. [31].
Shannon information, however, is not the right concept
to exploit the potential benefits of
algorithmic redundancy. Instead we need to
look at Kolmogorov complexity or
``algorithmic information''
[8, 34, 2, 11, 3],
and especially at its computationally tractable generalizations
(e.g., [4, 10, 12, 35]), to properly treat
general (as opposed to conventional statistical)
sources of redundant information.
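For orientation, the standard textbook definitions behind this distinction (assumed here rather than quoted from the paper) can be sketched as follows: Shannon entropy is a property of a probability distribution, while Kolmogorov complexity assigns a description length to a single object.

% Sketch of the contrast (standard definitions, assumed):
\[
  H(P) \;=\; -\sum_x P(x) \log P(x)
  \qquad \text{vs.} \qquad
  K_U(x) \;=\; \min \{\, \ell(p) : U(p) = x \,\} ,
\]
% where $H$ requires a distribution $P$ over objects, but $K_U$ does
% not. With the conditional variant $K_U(B \mid A)$, the redundancy
% sketched above becomes $K_U(B \mid A) \ll K_U(B)$: a statement about
% two individual algorithms, with no distribution over learning
% algorithms required (all such statements hold up to additive
% constants).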
This is a recent focus of my research
[25, 26, 31, 32].