GMAC '09 Proceedings of the 6th international conference industry session on Grids meets autonomic computing
Multi-policy optimization in self-organizing systems
SOAR'09 Proceedings of the First international conference on Self-organizing architectures
Autonomic multi-policy optimization in pervasive systems: Overview and evaluation
ACM Transactions on Autonomous and Adaptive Systems (TAAS) - Special section on formal methods in pervasive computing, pervasive adaptation, and self-adaptive systems: Models and algorithms
Large-scale production grids are a major use case for autonomic computing. Following Kephart's classical definition, an autonomic computing system should optimize its own behavior in accordance with high-level guidance from humans. The central tenet of this paper is that the combination of utility functions and reinforcement learning (RL) can provide a general and efficient method for dynamically allocating grid resources in order to optimize the satisfaction of both end-users and participating institutions. The flexibility of an RL-based system makes it possible to model the state of the grid, the jobs to be scheduled, and the high-level objectives of the various actors on the grid. RL-based scheduling can seamlessly adapt its decisions to changes in the distributions of inter-arrival time, QoS requirements, and resource availability. Moreover, it requires minimal prior knowledge about the target environment, including user requests and infrastructure. Our experimental results, on both a synthetic workload and a real trace, show that RL is not only a realistic alternative to empirically designed schedulers, but is able to outperform them.
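To make the utility-plus-RL idea concrete, the following is a minimal, illustrative sketch of a tabular Q-learning scheduler. The state encoding, action set (how many nodes to grant the head-of-queue job), utility weights, and workload model are all assumptions for the sake of the example, not the paper's actual design; the reward simply combines end-user satisfaction (low response time) with the institution's interest in keeping capacity free.

```python
import random

random.seed(0)

# Illustrative toy only: actions, weights, and workload are assumptions.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = [1, 2, 4]    # nodes granted to the job at the head of the queue
TOTAL_NODES = 8

def utility(response_time, nodes_granted):
    """Aggregate utility: end-user satisfaction (fast response)
    combined with the provider's interest in spare capacity.
    The 0.7/0.3 weights are arbitrary illustrative choices."""
    user = 1.0 / (1.0 + response_time)
    provider = (TOTAL_NODES - nodes_granted) / TOTAL_NODES
    return 0.7 * user + 0.3 * provider

Q = {}  # (state, action) -> learned value

def choose_action(state):
    # epsilon-greedy exploration over the small action set
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def learn(state, action, reward, next_state):
    # standard one-step Q-learning update
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def run_episode(steps=50):
    state = 0  # a single bucketed queue state, kept trivial for brevity
    for _ in range(steps):
        action = choose_action(state)
        work = random.choice([2, 4, 8])      # node-seconds demanded by the job
        response_time = work / action        # more nodes -> faster completion
        reward = utility(response_time, action)
        learn(state, action, reward, state)  # stationary state in this toy

# train on the synthetic workload
for _ in range(40):
    run_episode()
```

Because the scheduler only ever observes rewards, swapping in a different utility function (e.g. one penalizing QoS violations) changes the learned policy without touching the learning code, which is the flexibility the abstract claims for RL-based scheduling.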