We describe methods for solving partially observable Markov decision processes (POMDPs) with continuous or large discrete observation spaces. Realistic problems often have rich observation spaces, which pose significant challenges for standard POMDP algorithms that require explicit enumeration of the observations. This problem is usually approached by imposing an a priori discretisation on the observation space, which can be sub-optimal for the decision-making task. However, since only those observations that would change the policy need to be distinguished, the decision problem itself induces a lossless partitioning of the observation space. This paper demonstrates how to find this partition while computing a policy, and how the resulting discretisation of the observation space reveals the relevant features of the application domain. The algorithms are demonstrated on a toy example and on a realistic assisted-living task.
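The key observation above — that only observations which would change the policy need to be distinguished — can be illustrated with a minimal sketch. Assuming a small discrete POMDP and a policy represented by a set of alpha-vectors (the toy model, function names, and numbers below are illustrative assumptions, not the paper's algorithm), observations are grouped by which alpha-vector maximises the value of the updated belief; observations landing in the same group can be merged without changing the induced decision:

```python
import numpy as np

def belief_update(b, T, O, a, z):
    """Bayes filter: b'(s') is proportional to O[a][z, s'] * sum_s b(s) T[a][s, s']."""
    bp = O[a][z] * (b @ T[a])
    total = bp.sum()
    return bp / total if total > 0 else bp

def observation_partition(b, a, T, O, alphas):
    """Group observation indices by the alpha-vector that maximises the
    value of the updated belief; each group is one cell of the lossless
    partition, since merged observations leave the policy unchanged."""
    cells = {}
    for z in range(O[a].shape[0]):
        bp = belief_update(b, T, O, a, z)
        best = int(np.argmax([alpha @ bp for alpha in alphas]))
        cells.setdefault(best, []).append(z)
    return cells

# Hypothetical 2-state, 1-action, 4-observation toy problem.
T = [np.array([[0.9, 0.1],
               [0.2, 0.8]])]          # transition model T[a][s, s']
O = [np.array([[0.70, 0.10],          # observation likelihoods O[a][z, s']
               [0.20, 0.10],
               [0.05, 0.40],
               [0.05, 0.40]])]        # z=3 has the same likelihoods as z=2
alphas = [np.array([1.0, 0.0]),       # two illustrative alpha-vectors
          np.array([0.0, 1.0])]
b = np.array([0.5, 0.5])              # uniform initial belief

print(observation_partition(b, 0, T, O, alphas))
# → {0: [0, 1], 1: [2, 3]}: z=0 and z=1 select the first alpha-vector,
# z=2 and z=3 the second, so the four observations collapse to two cells.
```

In this sketch, observations 2 and 3 have identical likelihoods and so always fall in the same cell; the merging of 0 and 1 is policy-induced rather than likelihood-induced, which is the distinction the partition exploits.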