This paper presented a number of possibilities for ``neural'' redundancy reduction based on Shannon's concept of information. Many potential sources of redundant information have been neglected, however. Among the things I did not address is the possibility of redundancy among learning strategies. Suppose learning algorithm A is good at solving problems of one class but tends to fail on problems of another, while learning algorithm B succeeds on the second class but tends to fail on the first. Perhaps there is a short algorithm that takes learning algorithm A as input and outputs learning algorithm B. This would imply ``algorithmic redundancy'' between A and B. Variants of algorithmic redundancy allow for things like ``learning by analogy'', ``learning by chunking'', ``learning how to learn'', etc. Shannon information, however, is not the right concept for exploiting the potential benefits of algorithmic redundancy. Instead we need to look at Kolmogorov complexity or ``algorithmic information'', and especially at its computationally tractable generalizations, to properly treat general (as opposed to conventional statistical) sources of redundant information. This is a recent focus of my research [25,26,32,31].
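The ``short algorithm mapping A to B'' corresponds to a small conditional Kolmogorov complexity K(B|A). K itself is uncomputable, but a standard compressor gives a crude computable proxy, in the spirit of compression-based similarity measures. The sketch below is my illustration, not part of the paper: the `zlib`-based proxy and the toy `learn_A`/`learn_B` program strings are assumptions chosen only to make the idea concrete.

```python
import zlib

def C(x: bytes) -> int:
    """Compressed length: a crude, computable stand-in for Kolmogorov complexity K(x)."""
    return len(zlib.compress(x, 9))

def cond_complexity(b: bytes, a: bytes) -> int:
    """Proxy for K(b|a): the extra compressed bits needed to describe b once a is known."""
    return C(a + b) - C(a)

# Toy "learning algorithms" represented as source strings (hypothetical examples).
a = b"def learn_A(data):\n    return sorted(data)\n" * 20
b_related = b"def learn_B(data):\n    return sorted(data, reverse=True)\n" * 20
b_unrelated = bytes(range(256)) * 5  # structureless filler, unrelated to A

# Algorithmic redundancy: when a short transformation maps A to B, the
# conditional complexity of B given A is small, so B compresses well
# in the context of A.
print(cond_complexity(b_related, a) < cond_complexity(b_unrelated, a))  # expect True
```

The comparison shows why Shannon information misses this: the two program strings need not be statistically correlated symbol by symbol, yet one is a short edit of the other, which only an algorithmic notion of redundancy can capture.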