
Realistic OOPS Variants for Optimization etc.

Sometimes we are not searching for a universal solver but just intend to solve the most recent task $r_{n+1}$. For example, in fitness function maximization or optimization, the $n$-th task typically is just to find a program that outperforms the most recently found one. In such cases we should use a reduced variant of OOPS that replaces step 2 of Method 4.2 by:

2. Set $a := a_{frozen} + 1$; set $ip(r_{n+1}) := a$.

IF Try ( $qp, r_{n+1}, \{ r_{n+1} \}, 0, \frac{1}{2}$), then set $a_{last} := a$ and exit.
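The control flow of the reduced variant can be sketched as follows. This is a purely illustrative skeleton, not the paper's implementation: `try_branch` stands in for Try (which interprets self-delimiting programs under a time budget), and the doubling budget mimics the phase-wise time allocation of Method 4.2. Half of each phase still goes to prolongations of the most recent solver (which must handle all tasks), while the fresh-program half now only has to solve the newest task.

```python
# Hypothetical sketch of the reduced OOPS variant. In each phase the
# total time budget doubles; half goes to prolongations of the most
# recent frozen solver, half to fresh programs that only need to
# solve the newest task tasks[-1]. All helper names are illustrative.

def reduced_oops(tasks, try_branch, max_phase=20):
    """try_branch(start, task_set, budget) -> program or None (hypothetical)."""
    a_last = None                       # address of the most recent solver
    for phase in range(max_phase):
        budget = 2 ** phase             # total time for this phase
        # Step 1: half the time on prolongations of the last solver,
        # which must solve ALL tasks seen so far.
        if a_last is not None:
            p = try_branch(a_last, tasks, budget / 2)
            if p is not None:
                return p
        # Step 2 (reduced): half the time on fresh programs that only
        # have to solve the most recent task.
        p = try_branch('fresh', [tasks[-1]], budget / 2)
        if p is not None:
            return p
    return None                         # time limit reached, no solver found
```

Because the budget doubles per phase, a solver reachable within some fixed runtime is found after at most a constant-factor overhead, as in the usual bias-optimality argument.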

Note that the reduced variant still spends significant time on testing earlier solutions: the probability of any prefix that computes the address of some previously frozen program $p$ and then calls $p$ determines a lower bound on the fraction of total search time spent on $p$-like programs. Compare Observation 3.6.

Similar OOPS variants will also assign prewired fractions of the total time to the second most recent program and its prolongations, the third most recent program and its prolongations, etc. Other OOPS variants will find a program that solves, say, just the $m$ most recent tasks, where $m$ is an integer constant, etc. Yet other OOPS variants will assign more (or less) than half of the total time to the most recent code and prolongations thereof. We may also consider probabilistic OOPS variants in Speed-Prior style [54,58].
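One concrete prewired scheme (an illustrative choice, not prescribed by the text) would assign fraction $2^{-k}$ of the total time to the $k$-th most recent frozen program and its prolongations, leaving the remainder for fresh programs:

```python
# Illustrative prewired time split: fraction 2^-k of the total search
# time for the k-th most recent frozen program and its prolongations;
# whatever remains goes to fresh programs. Purely a sketch of one
# possible OOPS variant, not the scheme used in the paper.

def prewired_budgets(total_time, num_recent):
    budgets = [total_time / 2 ** k for k in range(1, num_recent + 1)]
    fresh = total_time - sum(budgets)   # leftover for fresh programs
    return budgets, fresh
```

With `total_time = 16` and three recent programs this yields budgets 8, 4, 2 and a remainder of 2 for fresh programs; any other fixed, prewired split works analogously.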

One not necessarily useful idea: suppose the number of tasks to be solved by a single program is known in advance. Then we might think of an OOPS variant that works on all tasks in parallel, again spending half the search time on programs starting at $a_{last}$ and half on programs starting at $a_{frozen}+1$; whenever one of the tasks is solved by a prolongation of $q_{a_{last}:a_{frozen}}$ (usually we cannot know in advance which task), we remove it from the current task ring and freeze the code generated so far, thus increasing $a_{frozen}$ (in contrast to Try, which does not freeze programs before the entire current task set is solved). If it turns out, however, that not all tasks can be solved by a program starting at $a_{last}$, we have to start from scratch, searching only among programs starting at $a_{frozen}+1$. Unfortunately, in general we cannot guarantee that this early-freezing approach will converge.
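The early-freezing idea can be sketched as a simple skeleton. Everything here is hypothetical: `solves` stands in for running the growing prefix on a task, and the token stream abstracts away the actual program search. The point is only the bookkeeping: a solved task leaves the ring and the code generated so far is frozen immediately.

```python
# Hypothetical sketch of the early-freezing variant: all tasks are
# attempted in parallel; whenever the current prefix solves one task,
# that task leaves the ring and the code generated so far is frozen.
# If the remaining tasks cannot be solved by prolonging the frozen
# code, a real implementation would have to restart from scratch;
# convergence is not guaranteed in general.

def early_freeze(task_ring, solves):
    """solves(prefix, task) -> bool; the prefix grows one token per step.
    Returns one frozen prefix per solved task, or None (illustrative)."""
    frozen = []
    prefix = []
    for token in range(100):                  # bounded token stream (sketch)
        prefix.append(token)
        for task in list(task_ring):
            if solves(prefix, task):
                task_ring.remove(task)        # task leaves the ring
                frozen.append(tuple(prefix))  # freeze code generated so far
        if not task_ring:
            return frozen
    return None                               # not all tasks solved
```

Note how the freeze happens per solved task, before the whole task set is done; this is exactly where the variant departs from Try and where the convergence guarantee is lost.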


Juergen Schmidhuber 2004-04-15
