Optimal Dynamic Control of Resources in a Distributed System. IEEE Transactions on Software Engineering.
Guarded Repair of Dependable Systems. Theoretical Computer Science, special issue on dependable parallel computing.
Open, Closed, and Mixed Networks of Queues with Different Classes of Customers. Journal of the ACM (JACM).
Performance Guarantees for Web Server End-Systems: A Control-Theoretical Approach. IEEE Transactions on Parallel and Distributed Systems.
Markov Decision Processes: Discrete Stochastic Dynamic Programming.
Introduction to Reinforcement Learning.
iMobile EE: An Enterprise Mobile Service Platform. Wireless Networks.
Constructing Adaptive Software in Distributed Systems. ICDCS '01: Proceedings of the 21st International Conference on Distributed Computing Systems.
Improved Prediction for Web Server Delay Control. ECRTS '04: Proceedings of the 16th Euromicro Conference on Real-Time Systems.
Constructing adaptive software that is capable of changing its behavior at runtime is a challenging software engineering problem. However, determining when and how such a system should adapt, i.e., the system's adaptation policy, can be even more challenging. To optimize the behavior of a system over its lifetime, the policy must often take into account not only the current system state but also the anticipated future behavior of the system. This paper presents a systematic approach based on Markov Decision Processes (MDPs), which are used to model the system and to generate optimal adaptation policies for it. In our approach, we update the model on-line based on system measurements and regenerate adaptation policies at runtime when necessary. We present the general approach and then outline its application to a distributed message dissemination system based on AT&T's iMobile platform.
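To make the MDP-based approach concrete, the following is a minimal, hypothetical sketch of generating an adaptation policy with value iteration. The states, actions, transition probabilities, and rewards are illustrative assumptions, not taken from the paper; in the described approach they would be estimated from on-line system measurements.

```python
# Hypothetical MDP for an adaptive system: two load states, two actions.
# All numbers below are assumed for illustration only.
STATES = ["low_load", "high_load"]
ACTIONS = ["stay", "adapt"]

# P[s][a]: list of (next_state, probability); R[s][a]: immediate reward.
P = {
    "low_load": {
        "stay":  [("low_load", 0.90), ("high_load", 0.10)],
        "adapt": [("low_load", 0.95), ("high_load", 0.05)],
    },
    "high_load": {
        "stay":  [("low_load", 0.20), ("high_load", 0.80)],
        "adapt": [("low_load", 0.70), ("high_load", 0.30)],
    },
}
R = {
    "low_load":  {"stay": 1.0, "adapt": 0.8},   # adapting carries a small cost
    "high_load": {"stay": -1.0, "adapt": -0.2},
}

def value_iteration(gamma=0.9, eps=1e-6):
    """Solve the MDP; return the value function and greedy adaptation policy."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            # Bellman backup: best expected discounted return over actions.
            best = max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                       for a in ACTIONS)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    # The adaptation policy maps each state to the action that is greedy
    # with respect to the converged value function.
    policy = {
        s: max(ACTIONS,
               key=lambda a: R[s][a]
               + gamma * sum(p * V[s2] for s2, p in P[s][a]))
        for s in STATES
    }
    return V, policy

V, policy = value_iteration()
# With these illustrative numbers, the policy adapts only under high load:
# policy == {"low_load": "stay", "high_load": "adapt"}
```

On-line use would re-estimate `P` and `R` from measurements and rerun `value_iteration` whenever the model changes significantly, which is feasible here because the state and action spaces are small.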