
# Brief Introduction to Optimal Universal Search

Consider an asymptotically optimal method for tasks with quickly verifiable solutions:

Method 1.1 (LSEARCH)   View the $n$-th binary string $(0, 1, 00, 01, 10, 11, 000, \ldots)$ as a potential program for a universal Turing machine. Given some problem, for all $n$ do: every $2^n$ steps on average execute (if possible) one instruction of the $n$-th program candidate, until one of the programs has computed a solution.

Given some problem class, if some unknown optimal program $p$ requires $f(k)$ steps to solve a problem instance of size $k$, and happens to be the $n$-th program in the alphabetical list, then LSEARCH (for Levin Search) [6] will need at most $O(f(k))$ steps -- the constant factor may be huge but does not depend on $k$. Compare [11,7,3].
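The time-sharing schedule of Method 1.1 can be sketched in Python. This is a toy illustration under stated assumptions, not the paper's implementation: instead of enumerating binary strings for a universal Turing machine, the hypothetical helper `make_program` supplies candidate "programs" as generators that yield `None` while computing and a candidate answer when they halt. The phase structure (in phase $t$, candidate $n \le t$ gets $2^{t-n}$ steps) realizes the "one instruction every $2^n$ steps on average" allocation up to a constant factor.

```python
from itertools import count

def lsearch(programs, is_solution, max_total_steps=100000):
    """Toy LSEARCH time-slicer: in phase t, candidate n (n <= t) receives
    2^(t-n) instruction steps, so candidate n runs at roughly one step per
    2^(n+1) total steps -- the Method 1.1 schedule up to a constant factor."""
    runs = [iter(p()) for p in programs]   # persistent program states
    halted = [False] * len(runs)
    total = 0
    for t in count():
        if all(halted):
            return None                    # every candidate halted, no solution
        for n in range(min(t + 1, len(runs))):
            if halted[n]:
                continue
            for _ in range(2 ** (t - n)):
                total += 1
                if total > max_total_steps:
                    return None            # give up (toy cutoff)
                try:
                    out = next(runs[n])    # execute one instruction
                except StopIteration:
                    halted[n] = True       # candidate ran out of instructions
                    break
                if out is not None and is_solution(out):
                    return out             # quickly verifiable solution found

def make_program(work, answer):
    """Hypothetical stand-in for a program candidate: computes for
    `work` steps, then outputs `answer` and halts."""
    def program():
        for _ in range(work):
            yield None                     # still computing
        yield answer
    return program

# Candidate 1 solves the task after 30 of its own steps; LSEARCH finds its
# output even though a wrong and a non-halting candidate run interleaved.
candidates = [make_program(50, "wrong"), make_program(30, 42),
              make_program(10**9, None)]
result = lsearch(candidates, lambda x: x == 42)
```

Because candidate $n$'s share of the total time is a fixed fraction $\approx 2^{-(n+1)}$, the winning candidate's runtime is blown up only by the constant factor $2^{n+1}$, independent of the problem instance -- which is exactly the asymptotic optimality (and the "huge constant" caveat) discussed above.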

Recently Hutter developed a more complex asymptotically optimal search algorithm for all well-defined problems [3]. HSEARCH (for Hutter Search) cleverly allocates part of the total search time for searching the space of proofs to find provably correct candidate programs with provable upper runtime bounds, and at any given time focuses resources on those programs with the currently best proven time bounds. Unexpectedly, HSEARCH manages to reduce the constant slowdown factor to a value of $1 + \epsilon$, where $\epsilon$ is an arbitrary positive constant. Unfortunately, however, the search in proof space introduces an unknown additive problem class-specific constant slowdown, which again may be huge.

In the real world, constants do matter. In this paper we will use basic concepts of optimal search to construct an optimal incremental problem solver that at any given time may exploit experience collected in previous searches for solutions to earlier tasks, to minimize the constants ignored by nonincremental HSEARCH and LSEARCH.

Juergen Schmidhuber 2003-02-25