We investigate a dynamic, adaptive resource allocation scheme aimed at accelerating the convergence of multi-start population-based search heuristics (PSHs) running on multiple parallel processors. Since each initialization of a PSH performs differently over time, we develop an exponential learning scheme that allocates computational resources (processors) to each variant in an online manner, based on the performance level each initialization has attained. For the well-known example of (μ+λ)-evolution strategies, we show that the time required to reach a target quality level on a given optimization problem is significantly reduced and that the utilization of the parallel system is likewise improved. Our learning approach is easily implementable with currently available batch management systems and delivers notable performance gains without modifying the employed PSH, making it well suited to improving the performance of PSHs in large-scale parallel computing environments.
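To illustrate the kind of exponential learning scheme described above, the following is a minimal sketch of an exponential-weights processor allocator. It is an assumption-laden illustration, not the paper's exact formulation: the function name `allocate_processors`, the learning-rate parameter `eta`, and the use of largest-remainder rounding are all hypothetical choices made here for concreteness.

```python
import math

def allocate_processors(scores, total_procs, eta=1.0):
    """Allocate an integer number of processors to each PSH instance.

    Instance i receives a share of the budget proportional to
    exp(eta * scores[i]), where scores[i] is its observed performance
    level (higher = better). This mirrors the exponential-weighting
    idea; the concrete score definition is left to the user.
    """
    weights = [math.exp(eta * s) for s in scores]
    z = sum(weights)
    raw = [total_procs * w / z for w in weights]
    # Round down, then hand out the leftover processors to the
    # instances with the largest fractional remainders, so the
    # allocation always sums exactly to total_procs.
    alloc = [int(r) for r in raw]
    leftover = total_procs - sum(alloc)
    by_remainder = sorted(range(len(raw)),
                          key=lambda i: raw[i] - alloc[i],
                          reverse=True)
    for i in by_remainder[:leftover]:
        alloc[i] += 1
    return alloc
```

Called once per allocation epoch (e.g., by a batch-system hook), this reweights the processor budget toward the better-performing restarts while never starving the total budget; increasing `eta` makes the allocation more aggressive.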