Turing Machines: Monotone TMs (MTMs), General TMs (GTMs), Enumerable Output Machines (EOMs)


The standard model of theoretical computer science is the Turing Machine (TM). It can emulate any known computer. For technical reasons we will consider several types of TMs.

Monotone TMs (MTMs). Most current theory of description size and inductive inference is based on MTMs (compare [LiVitanyi:97, p. 276 ff]) with several tapes, each tape being a finite chain of adjacent squares with a scanning head initially pointing to the leftmost square. There is one output tape and at least two work tapes (sufficient to compute everything traditionally regarded as computable). The MTM has a finite number of internal states, one of them being the initial state. MTM behavior is specified by a lookup table mapping current state and contents of the squares above work tape scanning heads to a new state and an instruction to be executed next. There are instructions for shifting work tape scanning heads one square left or right (appending new squares when necessary), and for writing 0 or 1 on squares above work tape scanning heads. The only input-related instruction requests an input bit determined by an external process and copies it onto the square above the first work tape scanning head. There may or may not be a halt instruction to terminate a computation. Sequences of requested input bits are called self-delimiting programs because they convey all information about their own length, possibly causing the MTM to halt [Levin:74, Gacs:74, Chaitin:75], or at least to cease requesting new input bits (the typical case in this paper). MTMs are called monotone because they have a one-way write-only output tape -- they cannot edit their previous output, because the only output instructions are: append a new square at the right end of the output tape and fill it with 0 or 1.
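The monotonicity constraint on the output tape can be made concrete with a minimal sketch (an illustrative data structure, not part of the paper): the tape offers only the append instruction, so earlier output bits can never be revised.

```python
class MonotoneOutputTape:
    """Sketch of an MTM's one-way write-only output tape."""

    def __init__(self):
        self._bits = []  # grows only at the right end

    def append(self, bit):
        """The only output instruction: append a new square and fill it with 0 or 1."""
        assert bit in (0, 1)
        self._bits.append(bit)

    def contents(self):
        """Read-only view; no instruction exists to edit previous output."""
        return tuple(self._bits)
```

For example, after `append(1)` then `append(0)` the tape reads `(1, 0)`, and no sequence of further instructions can change those two bits.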

General TMs (GTMs). GTMs are like MTMs but have additional output instructions to edit their previous output. Our motivation for introducing GTMs is that certain bitstrings are compactly describable on nonhalting GTMs but not on MTMs, as will be seen later. This has consequences for definitions of individual describability and probability distributions on describable things. The additional instructions are: (a) shift output scanning head right/left (but not out of bounds); (b) delete square at the right end of the output tape (if it is not the initial square or above the scanning head); (c) write 1 or 0 on square above output scanning head. Compare Burgin's inductive TMs and super-recursive algorithms [Burgin:83, Burgin:91].
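The three additional instructions (a)-(c), together with the inherited MTM append instruction, can be sketched as follows (method names are illustrative assumptions, not from the paper):

```python
class GTMOutputTape:
    """Sketch of a GTM output tape: editable, unlike an MTM's."""

    def __init__(self):
        self._bits = [0]  # the initial square
        self._head = 0    # output scanning head position

    def append(self, bit):
        """Inherited MTM instruction: append a new square at the right end."""
        self._bits.append(bit)

    def shift(self, direction):
        """(a) Shift head right (+1) or left (-1), but not out of bounds."""
        new = self._head + direction
        if 0 <= new < len(self._bits):
            self._head = new

    def delete_rightmost(self):
        """(b) Delete the square at the right end, unless it is the initial
        square or the scanning head is above it."""
        if len(self._bits) > 1 and self._head < len(self._bits) - 1:
            del self._bits[-1]

    def write(self, bit):
        """(c) Write 0 or 1 on the square above the output scanning head."""
        self._bits[self._head] = bit
```

Note that (b)'s guard keeps the tape nonempty and the head in bounds, so no instruction sequence can leave the machine in an ill-defined configuration.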

Enumerable Output Machines (EOMs). Like GTMs, EOMs can edit their previous output, but never in a way that lexicographically decreases it. The expressive power of EOMs lies in between those of MTMs and GTMs, with interesting computability-related properties whose analogues do not hold for GTMs. EOMs are like MTMs, except that the only permitted output instruction sequences are: (a) shift output tape scanning head left/right unless this leads out of bounds; (b) replace bitstring starting above the output scanning head by the string to the right of the scanning head of the second work tape, readjusting output tape size accordingly, but only if this lexicographically increases the contents of the output tape. The necessary test can be hardwired into the finite TM transition table.
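The guarded replacement instruction (b) can be sketched as a pure function (an illustrative model, not from the paper; Python string comparison happens to be exactly the lexicographic order on bitstrings used here):

```python
def eom_replace(output, head, replacement):
    """Sketch of EOM instruction (b): replace the bitstring starting above
    the output scanning head by `replacement` (the string to the right of
    the second work tape's head), readjusting tape size, but only if this
    lexicographically increases the output tape contents.

    `output` and `replacement` are strings over {'0', '1'}."""
    candidate = output[:head] + replacement
    if candidate > output:   # Python's > on strings is lexicographic
        return candidate     # accepted: output strictly increases
    return output            # rejected: instruction has no effect
```

For instance, with output `0101` and head position 2, replacing by `11` yields `0111` and is accepted, while replacing by `00` would yield `0100` and is silently rejected, so the output tape contents only ever grow in the lexicographic order.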

Juergen Schmidhuber
2001-01-09
