This paper presents a process model of plan inference for use in natural language consultation systems. The model includes a strategy that can both defer unwarranted decisions about how a new action relates to the user's overall plan and sanction rational default inferences. The paper describes an implementation of this strategy using the Dempster-Shafer theory of evidential reasoning. Our process model overcomes a limitation of previous plan recognition systems and produces a richer model of the user's plans and goals, yet one that can be explained and justified to the user when discrepancies arise between it and what the user is actually trying to accomplish.
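To make the evidential-reasoning component concrete, the sketch below implements the standard Dempster rule of combination from Dempster-Shafer theory, which the abstract names as the basis of the implementation. This is a minimal illustration of the general theory, not the paper's actual system: mass functions are dicts mapping frozenset focal elements to masses, and the hypothesis names in the usage example (e.g. `book_flight`) are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions via Dempster's rule.

    Each mass function is a dict mapping a frozenset of hypotheses
    (a focal element) to the mass assigned to it; masses sum to 1.
    """
    combined = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            combined[a] = combined.get(a, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    # Normalize away the conflicting mass.
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

def belief(m, hypothesis):
    """Bel(H): total mass of focal elements contained in H."""
    return sum(v for a, v in m.items() if a <= hypothesis)

def plausibility(m, hypothesis):
    """Pl(H): total mass of focal elements that intersect H."""
    return sum(v for a, v in m.items() if a & hypothesis)

# Illustrative usage: two pieces of evidence about the user's goal.
m1 = {frozenset({'book_flight'}): 0.6,
      frozenset({'book_flight', 'book_train'}): 0.4}
m2 = {frozenset({'book_train'}): 0.5,
      frozenset({'book_flight', 'book_train'}): 0.5}
m = dempster_combine(m1, m2)
```

The interval between `belief` and `plausibility` for a hypothesis captures the remaining ambiguity, which is what lets such a system defer a decision (wide interval) or sanction a default inference (narrow interval with high belief).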