We describe a formal framework for diagnosis and repair problems that shares elements of the well-known partially observable Markov decision process (POMDP) and cost-sensitive classification models. Our cost-sensitive fault remediation model is amenable to implementation as a reinforcement-learning system, and we describe an instance-based state representation that is compatible with learning and planning in this framework. We demonstrate a system that uses these ideas to learn to efficiently restore network connectivity after a failure.
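To make the idea of an instance-based state representation concrete, here is a minimal sketch of one common variant: the agent identifies its "state" with the longest suffix of its action-observation history that it has encountered before, and keeps Q-values per identified suffix. This is an illustrative assumption about the representation, not the paper's exact algorithm; all class and method names (`InstanceBasedAgent`, `observe`, `update`) and the diagnosis-flavored actions in the usage example are hypothetical.

```python
from collections import defaultdict


class InstanceBasedAgent:
    """Sketch of instance-based RL over action-observation histories.

    The agent's state is the longest previously seen suffix of the
    current history; Q-values are stored per (suffix, action) pair.
    Purely illustrative -- not the paper's actual method.
    """

    def __init__(self, actions, alpha=0.5, gamma=0.9):
        self.actions = actions
        self.alpha = alpha          # learning rate
        self.gamma = gamma          # discount factor
        self.q = defaultdict(float)  # (suffix, action) -> value
        self.known = set()           # history suffixes seen so far
        self.history = ()

    def state(self):
        """Longest previously seen suffix of the current history."""
        h = self.history
        for i in range(len(h) + 1):
            if h[i:] in self.known:
                return h[i:]
        return ()  # empty suffix: no match yet

    def observe(self, action, observation):
        """Record a transition and remember the new history instance."""
        self.history = self.history + ((action, observation),)
        self.known.add(self.history)

    def best_action(self):
        """Greedy action for the currently identified state."""
        s = self.state()
        return max(self.actions, key=lambda a: self.q[(s, a)])

    def update(self, prev_state, action, reward):
        """One-step Q-learning backup from prev_state to the new state."""
        s = self.state()
        target = reward + self.gamma * max(self.q[(s, a)] for a in self.actions)
        key = (prev_state, action)
        self.q[key] += self.alpha * (target - self.q[key])


# Hypothetical network-repair usage: probe, observe a timeout, learn.
agent = InstanceBasedAgent(actions=["ping", "reboot_router"])
s0 = agent.state()
agent.observe("ping", "timeout")
agent.update(s0, "ping", reward=-1.0)  # probing has a small cost
```

The appeal of this style of representation in a repair setting is that the agent never enumerates a hidden-state space up front: distinct fault situations become distinguishable exactly when their observable histories diverge.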