Distributed Algorithm Portfolios

Duration: 2007-2008
Funding: Hasler foundation (http://www.haslerstiftung.ch/)
Existing parallel and distributed computing systems (e.g., Condor, Globus, JOpera) are aimed at improving the speed and reliability of computation, but require the user to specify which computations should be carried out. In a more realistic situation, a set of alternative algorithms of unknown performance is available for a given class of problems, and one would like to automate the process of selecting which algorithm to use, independently for each problem instance, with the aim of minimizing runtime.
This automated algorithm selection is an old dream of the AI community, which has been brought closer to reality in the last decade. Even though a number of methods are already available, their practical use is still quite limited, partly because of the lack of a standard interface: most researchers and practitioners still prefer a "trial and error" approach rather than implementing an algorithm selection method themselves, or interfacing with an existing one. Another issue that deters users is the huge cost of learning models of algorithm performance that can be used for selection, as this usually requires a prohibitively expensive offline training sequence, during which each problem is solved with each of the available algorithms.
In our recent work we have developed an online algorithm selection technique, in which a model of algorithm performance is learned incrementally while being used. The resulting exploration-exploitation trade-off is treated as a bandit problem. Non-parametric estimation techniques from survival analysis further reduce training cost: each problem is solved only once, and the time spent by unsuccessful algorithms is accounted for as censored (incomplete) information. Our approach fits in the more general framework of algorithm portfolios: the candidate solvers are run in parallel on a single machine, and computation time is shared among them according to their expected performances.
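To illustrate the general idea (this is a minimal sketch, not the project's actual method: the class name `PortfolioBandit`, the `explore` parameter, and the simple success-rate estimates are all assumptions introduced here), a bandit-style allocator can share a machine's time among the candidate solvers in proportion to their estimated performance, while reserving a small uniform share for exploration:

```python
class PortfolioBandit:
    """Toy bandit-style time allocator for an algorithm portfolio.

    Each candidate algorithm is one "arm". The allocator divides each
    time slice among arms in proportion to a smoothed estimate of their
    success rate, keeping a fixed fraction for uniform exploration.
    """

    def __init__(self, n_algorithms, explore=0.1):
        self.n = n_algorithms
        self.explore = explore            # fraction of time spent exploring
        self.successes = [0] * n_algorithms
        self.trials = [0] * n_algorithms

    def time_shares(self):
        """Fraction of the next time slice allotted to each algorithm."""
        # Laplace-smoothed success-rate estimates (avoids division by zero
        # before any feedback has been observed).
        rates = [(s + 1) / (t + 2) for s, t in zip(self.successes, self.trials)]
        total = sum(rates)
        uniform = 1.0 / self.n
        return [self.explore * uniform + (1 - self.explore) * r / total
                for r in rates]

    def update(self, arm, solved):
        """Record whether algorithm `arm` solved the instance in its share."""
        self.trials[arm] += 1
        self.successes[arm] += int(solved)
```

In this sketch, an unsuccessful run still counts as a trial, which crudely mirrors the idea of treating time spent by unsuccessful algorithms as (censored) information; the actual technique uses non-parametric survival-analysis estimators rather than plain success counts.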
In this project, we will extend our selection technique to the more interesting and
practical case of multiple CPUs, to be allocated to multiple copies of the available
algorithms, and implement an adaptive framework for distributed
problem solving.
People

Matteo Gagliolo (Coordinator)
Jürgen Schmidhuber
Copyright © 2003 - 2012 IDSIA. All Rights Reserved.