

Introduction: Basic Principles of Gödel Machines

In 1931, Kurt Gödel used elementary arithmetic to build a universal programming language for encoding arbitrary proofs, given an arbitrary enumerable set of axioms. He went on to construct self-referential formal statements that claim their own unprovability, using Cantor's diagonalization trick [5] to demonstrate that formal systems such as traditional mathematics are either flawed in a certain sense or contain unprovable but true statements [10].

Since Gödel's exhibition of the fundamental limits of proof and computation, and Konrad Zuse's subsequent construction of the first working programmable computer (1935-1941), there has been a lot of work on specialized algorithms solving problems taken from more or less general problem classes. Apparently, however, one remarkable fact has so far escaped the attention of computer scientists: it is possible to use self-referential proof systems to build optimally efficient yet conceptually very simple universal problem solvers.

Many traditional problems of computer science require just one problem-defining input at the beginning of the problem-solving process. For example, the initial input may be a large integer, and the goal may be to factorize it. In what follows, however, we will also consider the more general case where the problem solution requires interaction with a dynamic, initially unknown environment that produces a continual stream of inputs and feedback signals, such as in autonomous robot control tasks, where the goal may be to maximize expected cumulative future reward [18] (examples in Section 3.2). This may require the solution of essentially arbitrary problems.
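
To make the interactive setting concrete, here is a minimal Python sketch of the agent-environment loop just described. The names (run_lifetime, policy, environment) are purely illustrative; the setup above only assumes a stream of actions $y(k)$, inputs $x(k)$, and reward signals $r(k)$ whose cumulative sum is to be maximized.

  # Illustrative sketch of the general interactive setup described above.
  # 'environment' and 'policy' are hypothetical stand-ins; the setup only
  # assumes actions y(k), perceptions x(k), and rewards r(k) per cycle.

  def run_lifetime(environment, policy, lifetime):
      history = []          # complete interaction history; everything may depend on it
      cumulative_reward = 0.0
      for k in range(1, lifetime + 1):
          y = policy(history)               # action y(k), may depend on full history
          x, r = environment(y, history)    # perception x(k) and reward r(k)
          history.append((y, x, r))
          cumulative_reward += r            # goal: maximize expected cumulative reward
      return cumulative_reward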

Neither Levin's universal search [22] nor its incremental extension, the Optimal Ordered Problem Solver [38,40], nor Solomonoff's recent ideas [48] are `universal enough' for such general setups, and our earlier self-modifying online learning systems [29,32,44,43,45] are not necessarily optimal. Hutter's recent AIXI model [15] does execute optimal actions in very general environments evolving according to arbitrary, unknown, yet computable probabilistic laws, but only under the unrealistic assumption of unlimited computation time. AIXI's asymptotically optimal, space/time-bounded cousin AIXI$(t,l)$ [15] may be the system conceptually closest to the one presented here. In discrete cycle $k=1,2,3, \ldots$ of AIXI$(t,l)$'s lifetime, action $y(k)$ results in perception $x(k)$ and reward $r(k)$, where all quantities may depend on the complete history. Using a universal computer such as a Turing machine [50], AIXI$(t,l)$ needs an initial offline setup phase (prior to interaction with the environment) to examine all proofs of length at most $l_P$, filtering out those that identify programs (of maximal size $l$ and maximal runtime $t$ per cycle) which not only could interact with the environment but which, for all possible interaction histories, also correctly predict a lower bound on their own expected reward. In cycle $k$, AIXI$(t,l)$ then runs all programs identified in the setup phase (at most $2^l$), finds the one with the highest self-rating, and executes the corresponding action. The problem-independent setup time (where almost all of the work is done) is $O(l_P 2^{l_P})$, and the online computation time per cycle is $O(t 2^l)$.

AIXI$(t,l)$ is related to Hutter's `fastest' algorithm for all well-defined problems (HSEARCH [16]), which also uses a general proof searcher and is likewise asymptotically optimal in a certain sense. Assume discrete input/output domains $X/Y$, a formal problem specification $f: X \rightarrow Y$ (say, a functional description of how integers are decomposed into their prime factors), and a particular $x \in X$ (say, an integer to be factorized). HSEARCH orders all proofs of an appropriate axiomatic system by size to find programs $q$ that for all $z \in X$ provably compute $f(z)$ within time bound $t_q(z)$. Simultaneously it spends most of its time on executing the $q$ with the best currently proven time bound $t_q(x)$. It turns out that HSEARCH is as fast as the fastest algorithm that provably computes $f(z)$ for all $z \in X$, save for a constant factor smaller than $1 + \epsilon$ (arbitrary $\epsilon > 0$) and an $f$-specific but $x$-independent additive constant [16]. That is, HSEARCH and AIXI$(t,l)$ boast an optimal order of complexity. This somewhat limited notion of optimality can be misleading despite its wide use in theoretical computer science, however, as it hides the possibly huge but problem-independent constants that could make AIXI$(t,l)$ and HSEARCH practically infeasible.
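
For concreteness, the following Python sketch shows an HSEARCH-style schedule in the spirit of the description above. It is not Hutter's actual algorithm; the helper functions (next_proof, extract_program_and_bound, run_for) and the time-slicing constants are assumptions made purely for illustration.

  # Schematic sketch of an HSEARCH-style schedule (illustrative only).
  # Assumed interface:
  #   next_proof()                  -- yields proofs ordered by size
  #   extract_program_and_bound(p)  -- returns (q, t_q) if proof p shows that
  #                                    program q computes f(z) within time
  #                                    t_q(z) for all z in X, else None
  #   run_for(q, x, budget)         -- runs q on x for 'budget' steps;
  #                                    returns q's output if it halts, else None

  def hsearch_like(x, next_proof, extract_program_and_bound, run_for,
                   search_slice=1, exec_budget=9):
      best_q, best_bound = None, float('inf')
      budget = exec_budget
      while True:
          # Spend a small share of the time enumerating proofs by size ...
          for _ in range(search_slice):
              candidate = extract_program_and_bound(next_proof())
              if candidate:
                  q, t_q = candidate
                  if t_q(x) < best_bound:      # strictly better proven bound on this x
                      best_q, best_bound = q, t_q(x)
          # ... and most of the time executing the current champion program.
          if best_q is not None:
              result = run_for(best_q, x, budget)
              if result is not None:           # champion halted: provably f(x)
                  return result
          budget *= 2                          # grow the execution slice geometrically

The essential point is the division of labor: a small fraction of the time goes into enumerating proofs by size, the rest into executing the program with the best currently proven time bound on the given $x$.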

Our novel Gödel machine derives its name and its power from exploiting provably useful changes of any part of its own code in self-referential fashion. Its theoretical advantages over previous approaches can be traced back to the fact that its notion of optimality is less restricted and that it has no unmodifiable software at all. Its bootstrap mechanism is based on a simple idea. We provide it with an axiomatic description of (possibly stochastic) environmental properties and of its goals and means. The latter not only include some initial problem-solving strategy but also a systematic proof searcher seeking an algorithm for modifying the current Gödel machine together with a formal proof that the execution of this algorithm will improve the Gödel machine, according to some utility function or optimality criterion represented as part of the goals. In particular, utility may take into account expected computational costs of proof searching and other actions, to be derived from the axioms. The self-improvement strategy is not `greedy': assuming consistency of the axiomatic system, some self-improvement's usefulness will be provable only if it can be shown that the current proof searcher will not find an even better self-improvement sufficiently quickly. In this sense any executed self-improvement will be globally optimal--it will be the best of all possible relevant self-improvements, relative to the given resource limitations and the initial proof search strategy.
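
A minimal sketch, assuming a hypothetical interface (solver_step, proof_searcher_step, and proof_checks_out are stand-ins, not the formal machinery of Section 2), may help to illustrate this bootstrap idea: the machine keeps doing ordinary problem-solving work while its own, equally modifiable proof searcher looks for a pair (switchprog, proof) establishing that rewriting the machine with switchprog is provably useful.

  # Minimal sketch of the Gödel machine bootstrap (illustrative assumptions:
  # 'state' bundles the entire modifiable software, including the proof
  # searcher; proof_checks_out() stands for the formal, axiom-derived check
  # that switching yields higher expected utility than not switching).

  def godel_machine_step(state, environment):
      # 1. Ordinary problem-solving work of the current software.
      state = state.solver_step(state, environment)

      # 2. One step of the (equally modifiable) proof searcher.
      candidate = state.proof_searcher_step(state)   # may return (switchprog, proof)
      if candidate is not None:
          switchprog, proof = candidate
          # 3. Rewrite the machine only if the proof establishes that doing so
          #    yields higher expected utility than keeping the current code,
          #    including the option of finding a better rewrite later on.
          if state.proof_checks_out(proof, switchprog):
              state = switchprog(state)              # provably useful self-change
      return state

The crucial point is step 3: a rewrite is executed only once its proof has taken into account the alternative of continued proof search, which is what makes accepted self-changes globally optimal in the sense above.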

Unlike AIXI$(t,l)$ and HSEARCH, the Gödel machine can improve the proof searcher itself. Unlike HSEARCH, it does not waste time on finding programs that provably compute $f(z)$ for all $z \in X$ when the current $x \in X$ is the only object of interest. While the hardwired brute-force theorem provers of AIXI$(t,l)$ and HSEARCH systematically search raw proof space--they can hide the proof search cost (exponential in proof size) in their asymptotic optimality notation [16,15]--the initial, not yet self-improved Gödel machine already produces many proofs much faster, since it searches among online proof techniques: proof-generating programs that may read the Gödel machine's current state. Hence, unlike AIXI$(t,l)$, the Gödel machine can profit from online proof search which may exploit information obtained through interaction with the environment.
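
As a rough illustration of this difference (with entirely hypothetical method names; the actual primitive instructions are introduced in Section 2.2.1), a proof technique is itself an ordinary program: rather than blindly enumerating raw symbol strings, it composes proof-building steps and may branch on the machine's current state, for instance on data just received from the environment.

  # Hypothetical proof technique (illustrative; the real primitive
  # instructions are introduced in Section 2.2.1).  It is an ordinary
  # program that builds a proof step by step and may read the running
  # machine's current state.

  def example_proof_technique(machine, select_axiom_for):
      # Read part of the running machine's current state (online information).
      obs = machine.read_state('last_environment_input')
      # Use it to choose a promising axiom instead of enumerating blindly.
      axiom = machine.get_axiom(select_axiom_for(obs))
      # Derive a new theorem from that axiom and a previously derived theorem.
      return machine.apply_rule('modus_ponens', axiom, machine.last_theorem())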

This approach raises several unconventional issues concerning the connection between syntax and semantics, though. Proofs are just symbol strings produced from other symbol strings according to certain syntactic rules. Such a symbol string, however, may be interpreted in online fashion as a statement about the computational costs of the program that computes it (a semantic issue), and may suggest a Gödel machine-modifying algorithm whose execution would be semantically useful right now, as the proof is being created. The proof searcher must deal with the fact that the utility of certain self-modifications may depend on the remaining lifetime, and with the problem of producing the right proof at the right time.

Section 2 will formally describe details of a particular Gödel machine, focusing on its novel aspects, skipping over well-known standard issues treated by any proof theory textbook. Section 2.2.1 will introduce the essential instructions invoked by proof techniques to compute axioms and theorems and to relate syntax to semantics. Section 2.3 will describe one possible initialization of the Gödel machine's proof searcher: Bias-Optimal Proof Search (BIOPS) uses a variant of the already mentioned Optimal Ordered Problem Solver OOPS [38,40] to efficiently search the space of proof techniques. Section 3 will discuss the Gödel machine's limitations, possible types of self-improvements, and additional differences from previous work.

