Classification is a subtask common to many problems faced by autonomous agents. Traditional treatments of classification in the machine learning literature assume that a feature vector is given as input, which ignores the essential role of an autonomous agent as a proactive information gatherer. In this paper, we present a framework for making optimal sensing and information-gathering decisions with respect to classification goals by formulating the problem as a partially observable Markov decision process (POMDP) and solving for the optimal policy. We demonstrate the utility of this approach on a simulated meteorite-collection task faced by an autonomous rover.
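To make the idea concrete, the following is a minimal sketch (not the paper's actual model) of the kind of sensing-for-classification problem described above: a rover maintains a Bayesian belief over a hidden class, pays a cost for each noisy sensor reading, and stops to classify once the belief is confident enough. All names, accuracy values, and the threshold policy are illustrative assumptions; the paper solves for the optimal POMDP policy rather than using a fixed threshold.

```python
import random

# Illustrative toy problem (all parameters are assumptions, not from the paper):
# a rover decides whether a sample is a meteorite. Each sensing action costs
# SENSE_COST and returns the true class with probability SENSE_ACCURACY.
SENSE_ACCURACY = 0.8   # P(observation matches the true class)
SENSE_COST = 1.0       # cost per sensing action
CONF_THRESHOLD = 0.95  # classify once the belief exceeds this


def belief_update(belief, obs):
    """Bayes update of P(meteorite) given a noisy binary observation."""
    like_met = SENSE_ACCURACY if obs == "meteorite" else 1 - SENSE_ACCURACY
    like_rock = SENSE_ACCURACY if obs == "rock" else 1 - SENSE_ACCURACY
    num = like_met * belief
    return num / (num + like_rock * (1 - belief))


def sense(true_class, rng):
    """Noisy sensor: reports the true class with probability SENSE_ACCURACY."""
    if rng.random() < SENSE_ACCURACY:
        return true_class
    return "rock" if true_class == "meteorite" else "meteorite"


def run_episode(true_class, rng, prior=0.5, max_senses=20):
    """Sense until confident (or out of budget), then classify.

    Returns the classification decision and the total sensing cost paid.
    """
    belief, cost = prior, 0.0
    for _ in range(max_senses):
        if belief >= CONF_THRESHOLD or 1 - belief >= CONF_THRESHOLD:
            break
        belief = belief_update(belief, sense(true_class, rng))
        cost += SENSE_COST
    decision = "meteorite" if belief >= 0.5 else "rock"
    return decision, cost


if __name__ == "__main__":
    rng = random.Random(0)
    decision, cost = run_episode("meteorite", rng)
    print(decision, cost)
```

The fixed confidence threshold here is a stand-in for the optimal policy: a POMDP solver would instead trade off sensing costs against classification rewards over the belief space, which is precisely what distinguishes the framework from myopic rules like this one.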