In 1931 Kurt Gödel used elementary arithmetic to build a universal programming language for encoding arbitrary proofs, given an arbitrary enumerable set of axioms. He went on to construct self-referential formal statements that claim their own unprovability, using Cantor's diagonalization trick [5], to demonstrate that formal systems such as traditional mathematics are either flawed in a certain sense or contain unprovable but true statements [10].
Since Gödel's exhibition of the fundamental limits of proof and computation, and Konrad Zuse's subsequent construction of the first working programmable computer (1935-1941), there has been a lot of work on specialized algorithms solving problems taken from more or less general problem classes. Apparently, however, one remarkable fact has so far escaped the attention of computer scientists: it is possible to use self-referential proof systems to build optimally efficient yet conceptually very simple universal problem solvers.
Many traditional problems of computer science require just one problem-defining input at the beginning of the problem solving process. For example, the initial input may be a large integer, and the goal may be to factorize it. In what follows, however, we will also consider the more general case where the problem solution requires interaction with a dynamic, initially unknown environment that produces a continual stream of inputs and feedback signals, such as in autonomous robot control tasks, where the goal may be to maximize expected cumulative future reward [18] (examples in Section 3.2). This may require the solution of essentially arbitrary problems.
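For concreteness, such a reward maximization objective can be written (in notation of our own choosing, not fixed by the text above) as maximizing the conditional expectation of the remaining cumulative reward,
\[
\max_{\pi}\; E_{\mu}\Big[\, \textstyle\sum_{\tau=t}^{T} r(\tau) \;\Big|\; h(\le t) \Big],
\]
where $\mu$ denotes the unknown environmental distribution, $r(\tau)$ the reward received at time $\tau$, $T$ the end of the lifetime, and $h(\le t)$ the interaction history up to the current time $t$.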
Neither
Levin's universal search [22] nor its
incremental extension, the Optimal Ordered Problem Solver
[38,40],
nor Solomonoff's recent ideas [48]
are `universal enough' for such general setups,
and our earlier self-modifying online learning systems
[29,32,44,43,45]
are not necessarily optimal.
Hutter's recent AIXI model [15]
does execute optimal actions in very general environments
evolving according to arbitrary, unknown, yet
computable probabilistic laws,
but only under the unrealistic assumption of unlimited
computation time.
AIXI's asymptotically optimal, space/time-bounded cousin AIXI$(t,l)$ [15] may be the system conceptually closest to the one presented here. In discrete cycle $k = 1, 2, 3, \ldots$ of AIXI$(t,l)$'s lifetime, action $y(k)$ results in perception $x(k)$ and reward $r(k)$, where all quantities may depend on the complete history.
Using a universal computer such as a Turing machine [50], AIXI$(t,l)$ needs an initial offline setup phase (prior to interaction with the environment) to examine all proofs of length at most $L$, filtering out those that identify programs (of maximal size $l$ and maximal runtime $t$ per cycle) which not only could interact with the environment but which for all possible interaction histories also correctly predict a lower bound of their own expected reward. In cycle $k$, AIXI$(t,l)$ then runs all programs identified in the setup phase (at most $2^l$), finds the one with highest self-rating, and executes its corresponding action. The problem-independent setup time (where almost all of the work is done) is $O(L \cdot 2^L)$, and the online computation time per cycle is $O(t \cdot 2^l)$.
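The two phases can be summarized by the following Python-style sketch (our illustration only; the program representation and the predicate `proves_reward_bound` are placeholders, not part of Hutter's formal construction):

```python
from typing import Callable, List, Tuple

# Illustrative types (assumptions): a "program" maps the interaction
# history so far to an action plus a self-claimed reward lower bound.
Program = Callable[[list], Tuple[str, float]]

def offline_setup(candidates: List[Program],
                  proves_reward_bound: Callable[[Program], bool]) -> List[Program]:
    """Offline phase: scan all proofs up to length L (abstracted here as the
    predicate `proves_reward_bound`) and keep only programs whose claimed
    reward lower bound is provably correct for all interaction histories."""
    return [p for p in candidates if proves_reward_bound(p)]

def online_cycle(verified: List[Program], history: list) -> str:
    """Online phase, cycle k: run every verified program, pick the one with
    the highest self-rating, and execute its corresponding action."""
    best_action, best_rating = None, float("-inf")
    for p in verified:
        action, rating = p(history)
        if rating > best_rating:
            best_action, best_rating = action, rating
    return best_action
```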
AIXI$(t,l)$ is related to Hutter's `fastest' algorithm for all well-defined problems (HSEARCH [16]), which also uses a general proof searcher and also is asymptotically optimal in a certain sense. Assume discrete input/output domains $X/Y$, a formal problem specification $f : X \to Y$ (say, a functional description of how integers are decomposed into their prime factors), and a particular $x \in X$ (say, an integer to be factorized). HSEARCH orders all proofs of an appropriate axiomatic system by size to find programs $q$ that for all $z \in X$ provably compute $f(z)$ within time bound $t_q(z)$. Simultaneously it spends most of its time on executing the $q$ with the best currently proven time bound $t_q(x)$. It turns out that HSEARCH is as fast as the fastest algorithm that provably computes $f(z)$ for all $z \in X$, save for a constant factor smaller than $1 + \epsilon$ (arbitrary $\epsilon > 0$) and an $f$-specific but $x$-independent additive constant [16]. That is, HSEARCH and AIXI$(t,l)$ boast an optimal order of complexity. This somewhat limited notion of optimality, however, can be misleading despite its wide use in theoretical computer science, as it hides the possibly huge but problem-independent constants which could make AIXI$(t,l)$ and HSEARCH practically infeasible.
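A minimal sketch of the interleaving idea, under our own simplifying assumptions (the real HSEARCH allocates fixed fractions of its time to proof search and program execution; the interface `proof_stream` and the generator-based program representation below are hypothetical):

```python
from typing import Callable, Iterator, Optional, Tuple

# Hypothetical interfaces (our illustration, not Hutter's exact construction):
# `proof_stream` yields (program_factory, proven_time_bound) pairs as the
# proof enumerator certifies new programs for the particular input x;
# each program_factory() returns a generator that yields None while still
# computing and finally yields the result f(x).

def hsearch_sketch(proof_stream: Iterator[Tuple[Callable[[], Iterator], float]],
                   max_steps: int) -> Optional[object]:
    """Simplified HSEARCH-style loop: a little time goes into proof search,
    most time into executing whichever certified program currently has the
    best proven time bound, restarting whenever a better one is found."""
    best_bound, run = float("inf"), None
    for _ in range(max_steps):
        # One slice of proof search may certify a program with a better bound.
        prog_factory, bound = next(proof_stream, (None, float("inf")))
        if prog_factory is not None and bound < best_bound:
            best_bound, run = bound, prog_factory()  # switch to the faster program
        # Several execution steps of the current best candidate.
        if run is not None:
            for _ in range(8):  # arbitrary execution slice per proof-search step
                out = next(run, None)
                if out is not None:
                    return out  # f(x) has been computed
    return None
```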
Our novel Gödel machine derives its name and its power from exploiting provably useful changes of any part of its own code in self-referential fashion. Its theoretical advantages over previous approaches can be traced back to the fact that its notion of optimality is less restricted and that it has no unmodifiable software at all. Its bootstrap mechanism is based on a simple idea. We provide it with an axiomatic description of (possibly stochastic) environmental properties and of its goals and means. The latter not only include some initial problem solving strategy but also a systematic proof searcher seeking an algorithm for modifying the current Gödel machine together with a formal proof that the execution of this algorithm will improve the Gödel machine, according to some utility function or optimality criterion represented as part of the goals. In particular, utility may take into account expected computational costs of proof searching and other actions, to be derived from the axioms. The self-improvement strategy is not `greedy': assuming consistency of the axiomatic system, some self-improvement's usefulness will be provable only if it can be shown that the current proof searcher will not find an even better self-improvement sufficiently quickly. In this sense any executed self-improvement will be globally optimal--it will be the best of all possible relevant self-improvements, relative to the given resource limitations and the initial proof search strategy.
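As an informal illustration of this bootstrap structure (our own reading; names such as `proof_searcher` and `lifecycle_step` are placeholders, and the real machine rewrites its own code rather than fields of a Python object), the interleaving of problem solving and proof search might be pictured as follows:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Placeholder skeleton: the axiomatized utility and the proof calculus are
# abstracted into `proof_searcher`, which returns a self-rewrite only once
# it has proven (with respect to the axioms) that executing the rewrite
# improves expected utility.

@dataclass
class GoedelMachineSketch:
    solver: Callable[[object], object]  # current problem solving strategy
    proof_searcher: Callable[["GoedelMachineSketch"],
                             Optional[Callable[["GoedelMachineSketch"], None]]]

    def lifecycle_step(self, observation: object) -> object:
        # Ordinary problem solving with the current strategy ...
        action = self.solver(observation)
        # ... interleaved with proof search; a rewrite is executed only after
        # its usefulness has been proven, which makes it globally optimal
        # relative to the resource limits and the initial search strategy.
        rewrite = self.proof_searcher(self)
        if rewrite is not None:
            rewrite(self)  # may replace solver, proof_searcher, or this method itself
        return action
```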
Unlike AIXI$(t,l)$ and HSEARCH, the Gödel machine can improve the proof searcher itself. Unlike HSEARCH it does not waste time on finding programs that provably compute $f(z)$ for all $z \in X$ when the current $f(x)$ is the only object of interest. While the hardwired brute force theorem provers of AIXI$(t,l)$ and HSEARCH systematically search in raw proof space--they can hide the proof search cost (exponential in proof size) in their asymptotic optimality notation [16,15]--the initial, not yet self-improved Gödel machine already produces many proofs much faster as it searches among online proof techniques: proof-generating programs that may read the Gödel machine's current state. Hence, unlike AIXI$(t,l)$, the Gödel machine can profit from online proof search which may exploit information obtained through interaction with the environment.
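To make the notion of a proof technique slightly more concrete (our illustration; the primitive names `get_axiom`, `apply_rule`, and `state2theorem` merely echo the kind of instructions introduced in Section 2.2.1 and are not a fixed API), such a program might look as follows:

```python
from typing import Callable, List

def sample_proof_technique(get_axiom: Callable[[int], str],
                           apply_rule: Callable[[str, str, str], str],
                           state2theorem: Callable[[str], str]) -> List[str]:
    """A proof technique is itself a program: it emits a proof (a list of
    theorem strings) and may condition on the machine's current state."""
    proof: List[str] = []
    proof.append(get_axiom(3))              # instantiate some axiom scheme
    proof.append(state2theorem("time"))     # turn a state component into a theorem
    proof.append(apply_rule("rule_name", proof[0], proof[1]))  # derive a new theorem
    return proof
```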
This approach, though, raises several unconventional issues concerning the connection between syntax and semantics. Proofs are just symbol strings produced from other symbol strings according to certain syntactic rules. Such a symbol string, however, may be interpreted in online fashion as a statement about the computational costs of the program that computes it (a semantic issue), and may suggest a Gödel machine-modifying algorithm whose execution would be semantically useful right now as the proof is being created. The proof searcher must deal with the fact that the utility of certain self-modifications may depend on the remaining lifetime, and with the problem of producing the right proof at the right time.
Section 2 will formally describe details of a particular Gödel machine, focusing on its novel aspects, skipping over well-known standard issues treated by any proof theory textbook. Section 2.2.1 will introduce the essential instructions invoked by proof techniques to compute axioms and theorems and to relate syntax to semantics. Section 2.3 will describe one possible initialization of the Gödel machine's proof searcher: Bias-Optimal Proof Search (BIOPS) uses a variant of the already mentioned Optimal Ordered Problem Solver OOPS [38,40] to efficiently search the space of proof techniques. Section 3 will discuss the Gödel machine's limitations, possible types of self-improvements, and additional differences from previous work.