Optimal software rejuvenation for tolerating soft failures. Performance Evaluation.
Complexity of finite-horizon Markov decision process problems. Journal of the ACM (JACM).
Markov Decision Processes: Discrete Stochastic Dynamic Programming.
Introduction to Reinforcement Learning.
Neuro-Dynamic Programming.
A Bayesian Framework for Reinforcement Learning. ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning.
Software Rejuvenation: Analysis, Module and Applications. FTCS '95 Proceedings of the Twenty-Fifth International Symposium on Fault-Tolerant Computing.
Optimal Software Rejuvenation Policy with Discounting. PRDC '01 Proceedings of the 2001 Pacific Rim International Symposium on Dependable Computing.
Adaptive Service Composition in Flexible Processes. IEEE Transactions on Software Engineering.
A framework for QoS-aware binding and re-binding of composite web services. Journal of Systems and Software.
Online Optimization in Application Admission Control for Service Oriented Systems. APSCC '08 Proceedings of the 2008 IEEE Asia-Pacific Services Computing Conference.
Markov-HTN Planning Approach to Enhance Flexibility of Automatic Web Service Composition. ICWS '09 Proceedings of the 2009 IEEE International Conference on Web Services.
Optimal Replacement Policy of Services Based on Markov Decision Process. SCC '09 Proceedings of the 2009 IEEE International Conference on Services Computing.
Model based Bayesian exploration. UAI '99 Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence.
IEEE Transactions on Signal Processing.
In the service computing paradigm, a service broker can build new applications by composing network-accessible services offered by loosely coupled, independent providers. In this paper, we address the problem of equipping a service broker, which offers prospective users a composite service with a range of different Quality of Service (QoS) classes, with a forward-looking admission control policy based on Markov Decision Processes (MDPs). This mechanism allows the broker to decide whether to accept or reject a new potential user so as to maximize its gain while guaranteeing the non-functional QoS requirements of its already admitted users. We model the broker as a continuous-time MDP and consider various techniques suitable for solving both infinite-horizon and finite-horizon MDPs. To assess the effectiveness of MDP-based admission control for the service broker, we present simulation results comparing the optimal decisions obtained from the analytical solution of the MDP with those of other admission control policies. To deal with large problem instances, we also propose a heuristic policy for the MDP solution.
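The accept/reject decision described in the abstract can be illustrated with a toy model. The following is a minimal sketch only, not the paper's actual formulation: it assumes a single QoS class, a capacity-limited broker, and a uniformized discrete-time approximation of the continuous-time MDP, solved by standard value iteration. All parameters (capacity, arrival rate `lam`, departure rate `mu`, reward, holding cost, discount `gamma`) are hypothetical.

```python
def value_iteration(capacity=5, lam=0.6, mu=0.4, reward=1.0,
                    hold_cost=0.1, gamma=0.95, tol=1e-8):
    """Toy admission-control MDP (illustrative, not the paper's model).

    State n = number of admitted users (0..capacity). At each uniformized
    step, an arrival occurs with prob. lam and the broker chooses to accept
    (earning `reward`, moving to n+1 if there is room) or reject; a
    departure occurs with prob. mu when n > 0. Each step incurs a holding
    cost proportional to n. Returns the discounted value function V.
    """
    V = [0.0] * (capacity + 1)
    while True:
        new_V = []
        for n in range(capacity + 1):
            p_arr = lam
            p_dep = mu if n > 0 else 0.0
            p_stay = 1.0 - p_arr - p_dep
            # Expected continuation value over the next event.
            cont = p_stay * V[n]
            if n > 0:
                cont += p_dep * V[n - 1]
            # On an arrival, take the better of accept vs. reject.
            accept = reward + V[n + 1] if n < capacity else float('-inf')
            cont += p_arr * max(accept, V[n])
            new_V.append(-hold_cost * n + gamma * cont)
        if max(abs(a - b) for a, b in zip(new_V, V)) < tol:
            return new_V
        V = new_V
```

The resulting policy is read off the value function: accept in state `n` whenever `reward + V[n+1] >= V[n]`. With a positive holding cost, the broker may optimally reject even when capacity remains, which is the forward-looking behavior the paper contrasts with greedy admission.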