
Bibliography

1
E. Allender.
Applications of time-bounded Kolmogorov complexity in complexity theory.
In O. Watanabe, editor, Kolmogorov complexity and computational complexity, pages 6-22. EATCS Monographs on Theoretical Computer Science, Springer, 1992.

2
E. B. Baum and D. Haussler.
What size net gives valid generalization?
Neural Computation, 1(1):151-160, 1989.

3
C. H. Bennett.
Logical depth and physical complexity.
In R. Herken, editor, The Universal Turing Machine: A Half-Century Survey, volume 1, pages 227-258. Oxford University Press, Oxford, and Kammerer & Unverzagt, Hamburg, 1988.

4
G. J. Chaitin.
On the length of programs for computing finite binary sequences: statistical considerations.
Journal of the ACM, 16:145-159, 1969.

5
G. J. Chaitin.
A theory of program size formally identical to information theory.
Journal of the ACM, 22:329-340, 1975.

6
P. Gács.
On the symmetry of algorithmic information.
Soviet Math. Dokl., 15:1477-1480, 1974.

7
B. Hassibi and D. G. Stork.
Second order derivatives for network pruning: Optimal brain surgeon.
In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5, pages 164-171. Morgan Kaufmann, San Mateo, CA, 1993.

8
D. Haussler.
Quantifying inductive bias: AI learning algorithms and Valiant's learning framework.
Artificial Intelligence, 36:177-221, 1988.

9
S. Heil.
Universelle Suche und inkrementelles Lernen (Universal search and incremental learning).
Diploma thesis, Fakultät für Informatik, Lehrstuhl Prof. Brauer, Technische Universität München, 1995.

10
G. E. Hinton and D. van Camp.
Keeping neural networks simple.
In Proceedings of the International Conference on Artificial Neural Networks, Amsterdam, pages 11-18. Springer, 1993.

11
A. N. Kolmogorov.
Three approaches to the quantitative definition of information.
Problems of Information Transmission, 1:1-11, 1965.

12
L. A. Levin.
Universal sequential search problems.
Problems of Information Transmission, 9(3):265-266, 1973.

13
L. A. Levin.
Laws of information conservation (nongrowth) and aspects of the foundation of probability theory.
Problems of Information Transmission, 10(3):206-210, 1974.

14
L. A. Levin.
Randomness conservation inequalities: Information and independence in mathematical theories.
Information and Control, 61:15-37, 1984.

15
M. Li and P. M. B. Vitányi.
An Introduction to Kolmogorov Complexity and its Applications.
Springer, 1993.

16
W. Maass.
Perspectives of current research about the complexity of learning on neural nets.
In V. P. Roychowdhury, K. Y. Siu, and A. Orlitsky, editors, Theoretical Advances in Neural Computation and Learning. Kluwer Academic Publishers, 1994.

17
D. J. C. MacKay.
A practical Bayesian framework for backpropagation networks.
Neural Computation, 4:448-472, 1992.

18
S. J. Nowlan and G. E. Hinton.
Simplifying neural networks by soft weight sharing.
Neural Computation, 4(4):473-493, 1992.

19
W. Paul and R. J. Solomonoff.
Autonomous theory building systems.
Manuscript, 1991; revised 1994.

20
J. Rissanen.
Modeling by shortest data description.
Automatica, 14:465-471, 1978.

21
J. Schmidhuber.
Discovering problem solutions with low Kolmogorov complexity and high generalization capability.
Technical Report FKI-194-94, Fakultät für Informatik, Technische Universität München, 1994.
Short version in A. Prieditis and S. Russell, editors, Machine Learning: Proceedings of the Twelfth International Conference, pages 488-496. Morgan Kaufmann, San Francisco, CA, 1995.

22
J. Schmidhuber.
Low-complexity art.
Technical Report FKI-197-94, Fakultät für Informatik, Technische Universität München, 1994.

23
J. Schmidhuber.
On learning how to learn learning strategies.
Technical Report FKI-198-94, Fakultät für Informatik, Technische Universität München, November 1994.
Revised January 1995.

24
R. J. Solomonoff.
A formal theory of inductive inference. Part I.
Information and Control, 7:1-22, 1964.

25
R. J. Solomonoff.
An application of algorithmic probability to problems in artificial intelligence.
In L. N. Kanal and J. F. Lemmer, editors, Uncertainty in Artificial Intelligence, pages 473-491. Elsevier Science Publishers, 1986.

26
P. Utgoff.
Shift of bias for inductive concept learning.
In R. Michalski, J. Carbonell, and T. Mitchell, editors, Machine Learning, volume 2, pages 163-190. Morgan Kaufmann, Los Altos, CA, 1986.

27
V. Vapnik.
Principles of risk minimization for learning theory.
In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 831-838. Morgan Kaufmann, San Mateo, CA, 1992.

28
C. S. Wallace and D. M. Boulton.
An information measure for classification.
Computer Journal, 11(2):185-194, 1968.
