Sometimes we are not searching for a universal solver,
but just intend to solve the most recent task $n$.
E.g., for problems of fitness function maximization or
optimization, the $n$-th task
typically is just to find a program that outperforms
the most recently found program.
In such cases we should use a reduced variant of OOPS which
replaces step 2 of Method 4.2 by:
2. Set …; set …. IF Try(…), then set … and exit.
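The control flow of this reduced variant can be illustrated by a minimal Python sketch. The names here are assumptions for illustration only: `try_prolongations` is a hypothetical stand-in for the paper's Try procedure, searching prolongations of the given frozen code on one task within a time budget.

```python
def reduced_oops(task, try_prolongations, initial_budget=1):
    """Sketch of the reduced variant: search only among prolongations
    of the most recently frozen code, doubling the budget each round."""
    frozen = []                   # code frozen so far (hypothetical encoding)
    budget = initial_budget
    while True:
        # Try() stand-in: search prolongations of `frozen` on `task`
        # within `budget`; return the successful code, or None.
        solution = try_prolongations(frozen, task, budget)
        if solution is not None:
            return solution       # freeze the found program and exit
        budget *= 2               # exponentially growing time limits
```

There is no search among fresh programs here: every round is spent entirely on prolongations of the most recent code, matching the replaced step 2.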
Similar OOPS variants will also assign prewired
fractions of the
total time to the second most recent program and its prolongations,
the third most recent program and its prolongations, etc.
Other OOPS variants will find a program that solves, say, just the
$m$ most recent tasks, where
$m$ is an integer constant, etc.
Yet other OOPS variants
will assign more (or less) than half of the
total time to the most recent code and prolongations thereof.
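One simple way to realize such prewired time fractions is a geometric schedule over the most recent programs. The exact fractions below (1/2, 1/4, ...) and the helper name are illustrative assumptions, not prescribed by the text:

```python
def prewired_fractions(num_recent, remainder_to_fresh=True):
    """Return prewired time fractions for the k most recent frozen
    programs (k = 1 is the most recent), e.g. 1/2, 1/4, 1/8, ..."""
    fracs = [2.0 ** -(k + 1) for k in range(num_recent)]
    if remainder_to_fresh:
        # whatever time is left over goes to fresh programs
        fracs.append(1.0 - sum(fracs))
    return fracs
```

A variant that favors the most recent code more (or less) strongly would simply replace the geometric schedule by a different prewired one.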
We may also consider probabilistic OOPS variants
in Speed-Prior style
[54,58].
One not necessarily useful idea: Suppose the number of tasks to be solved by a single
program is known in advance. Now we might think of an OOPS variant
that works on all tasks in parallel,
again spending half the search time on programs starting at
$a_{last}$,
half on programs starting at
$a_{frozen}+1$;
whenever one of the tasks is solved
by a prolongation of the program starting at $a_{last}$
(usually we cannot know in advance which task),
we remove it from the current task ring and
freeze the code generated so far,
thus increasing $a_{frozen}$
(in contrast to Try, which
does not freeze programs before the entire current task set is solved).
If it turns out, however, that not all tasks can be solved
by a program starting at
$a_{last}$, we have to
start from scratch by searching only among programs
starting at
$a_{frozen}+1$. Unfortunately, in general we cannot
guarantee
that this approach of early freezing will converge.
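The task-ring idea above can be sketched as a toy search loop. All names here are hypothetical stand-ins chosen for illustration: `extend` prolongs the current code by one step, and `solves` tests whether the code solves a given task; the sketch deliberately returns None when the step limit is exhausted, reflecting the missing convergence guarantee.

```python
def early_freezing_search(tasks, extend, solves, max_steps=10_000):
    """Toy sketch: work on all tasks in parallel; whenever a
    prolongation of the current code solves some task, remove it
    from the task ring and freeze the code generated so far."""
    ring = list(tasks)        # current task ring
    code = []                 # code generated (and frozen) so far
    if not ring:
        return code
    for _ in range(max_steps):
        code = extend(code)   # prolong the current code
        for task in list(ring):
            if solves(code, task):      # usually not known in advance
                ring.remove(task)       # which task gets solved next
                # freeze: the code generated so far becomes permanent
        if not ring:
            return code       # every task solved and frozen
    return None  # not all tasks solvable this way: restart from scratch
```

Note how freezing happens per solved task rather than only after the entire task set is solved; this is exactly the point where the variant departs from Try, and the source of the missing convergence guarantee.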