1. Toss an unbiased coin until heads is up; let $i$ denote the number of required trials; set $t := 2^i$.
2. If the number of steps executed so far exceeds $t$ then exit. Execute one step; if this leads to a request for a new input bit (of the growing self-delimiting program), toss the coin to determine the bit, and set $t := t/2$.
3. Go to 2.
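The three steps above can be sketched in code. The sketch below follows the listed procedure literally; the interpreter itself is abstracted into a hypothetical callback `step` (an assumption of this sketch, not part of the original), which returns `True` whenever the current step requests a fresh program bit.

```python
import random

def guess(step, rng=None):
    """One invocation of GUESS (a sketch).

    `step` is a hypothetical callback modelling one step of the
    interpreter: it receives the program bits read so far and returns
    True iff this step requests a new input bit.
    Returns the sampled program prefix and the number of executed steps.
    """
    rng = rng or random.Random()

    # Step 1: toss until heads; i tosses give the runtime budget t = 2^i.
    i = 1
    while rng.random() < 0.5:          # tails with probability 1/2
        i += 1
    t = 2 ** i

    bits, steps = [], 0
    while True:
        # Step 2: exit once the number of executed steps exceeds t.
        if steps > t:
            return bits, steps
        steps += 1                     # execute one interpreter step
        if step(bits):                 # the step requests an input bit:
            bits.append(rng.randrange(2))  # toss the coin for the bit,
            t //= 2                        # and halve the budget t
        # Step 3: go to step 2 (loop).

def make_demo_step(period=4):
    """Toy stand-in for the interpreter (purely illustrative):
    requests a new program bit on every `period`-th executed step."""
    calls = [0]
    def step(bits):
        calls[0] += 1
        return calls[0] % period == 0
    return step
```

Note that the sketch halves the total budget $t$, exactly as in step 2, rather than the time remaining; since the exit test compares the running step count against $t$, every invocation terminates.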
In the spirit of FAST, algorithm GUESS makes twice the computation time half as likely, and halves the remaining time whenever a new input bit is requested, thereby assigning equal runtime to the two resulting sets of possible program continuations. Note that the expected runtime of GUESS is unbounded, since $\sum_i 2^{-i} 2^i$ does not converge. Still, each invocation of GUESS terminates with probability 1.
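Both claims in the preceding paragraph follow from the coin tosses of step 1: the budget $t = 2^i$ is drawn with probability $2^{-i}$ ($i-1$ tails followed by heads). Assuming, for the worst case, that each invocation exhausts its full budget:

\[
  E[\mbox{runtime}] \;=\; \sum_{i \ge 1} 2^{-i} \cdot 2^{i} \;=\; \sum_{i \ge 1} 1 \;=\; \infty,
  \qquad
  P[\mbox{termination}] \;=\; \sum_{i \ge 1} 2^{-i} \;=\; 1 .
\]

The left sum diverges, so the expectation is unbounded; the right sum shows that some finite budget is assigned with probability 1, so every invocation halts almost surely.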
Algorithm GUESS is almost identical to a probabilistic search algorithm used in previous work on applied inductive inference [19,21]. The programs generated by the previous algorithm, however, were not bitstrings but written in an assembler-like language; their runtimes had an upper bound, and the program outputs were evaluated as to whether they represented solutions to externally given tasks. Using a small set of training examples, the system discovered the weight matrix of an artificial neural network whose task was to map input data to appropriate target classifications. The network's generalization capability was then tested on a much larger unseen test set. On several toy problems it generalized extremely well, in a way unmatched by traditional neural network learning algorithms.
The previous papers, however, did not explicitly establish the relation outlined above between ``optimal'' resource bias and GUESS.