Plan recognition is the problem of inferring the goals and plans of an agent from partial observations of her behavior. Recently, it has been shown that the problem can be formulated and solved using planners, reducing plan recognition to plan generation. In this work, we extend this model-based approach to plan recognition to the POMDP setting, where actions are stochastic and states are partially observable. The task is to infer a probability distribution over the possible goals of an agent whose behavior results from a POMDP model. The POMDP model is shared between agent and observer, except for the agent's true goal, which is hidden from the observer. The observations are action sequences O that may contain gaps, as some, or even most, of the actions performed by the agent may not be observed. We show that the posterior goal distribution P(G|O) can be computed from the value functions V_G(b) over beliefs b generated by the POMDP planner for each possible goal G. Some extensions of the basic framework are discussed, and a number of experiments are reported.
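The final inference step described above is Bayesian: each candidate goal G is scored by the value V_G(b) the POMDP planner assigns to the belief b reached after the observations, and the scores are combined with a prior via Bayes' rule. The sketch below illustrates this, assuming a Boltzmann (noisy-rationality) weighting of goal values as the likelihood model; the weighting scheme, the function name `goal_posterior`, and the parameter `beta` are illustrative assumptions, not the paper's exact formulation.

```python
import math

def goal_posterior(value_per_goal, priors=None, beta=1.0):
    """Sketch of P(G|O) ∝ P(O|G) P(G), with P(O|G) modeled (an assumption)
    as exp(beta * V_G(b)) where V_G(b) is the planner's value for goal G
    at the belief b reached after the observation sequence O."""
    goals = list(value_per_goal)
    if priors is None:
        # Uniform prior over goals when none is given.
        priors = {g: 1.0 / len(goals) for g in goals}
    # Subtract the max value before exponentiating for numerical stability.
    m = max(value_per_goal.values())
    weights = {g: math.exp(beta * (value_per_goal[g] - m)) * priors[g]
               for g in goals}
    z = sum(weights.values())
    return {g: w / z for g, w in weights.items()}
```

Usage: with values `{"A": 10.0, "B": 7.0, "C": 3.0}` for three hypothetical goals, the goal whose value function best explains the observed belief receives the highest posterior mass, and `beta` controls how sharply the distribution concentrates on it.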