Generating diverse plans to handle unknown and partially known user preferences
Artificial Intelligence
While much work on learning in planning has focused on learning domain physics (i.e., action models) and search-control knowledge, little attention has been paid to learning user preferences over desirable plans. Hierarchical task networks (HTNs) are known to provide an effective way to encode user prescriptions about what constitutes a good plan. However, manual construction of these methods is complex and error-prone. In this paper, we propose a novel approach to learning probabilistic hierarchical task networks that capture user preferences by examining user-produced plans, given no prior information about the methods (in contrast, most prior work on learning within the HTN framework focused on learning "method preconditions", i.e., domain physics, assuming that the structure of the methods is given as input). We show that this problem has close parallels to probabilistic grammar induction, and describe how grammar-induction methods can be adapted to learn task networks. We empirically demonstrate the effectiveness of our approach by showing that the task networks we learn generate plans whose distribution is close to the distribution of the user-preferred plans.
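To make the grammar-induction analogy concrete, here is a minimal sketch in Python. It treats a probabilistic HTN as a PCFG-style rule set (tasks as non-terminals, primitive actions as terminals), samples plans by stochastic decomposition, and recovers method probabilities from observed plans by maximum-likelihood counting. All task names, rules, and the counting-based estimator are illustrative assumptions, not the paper's actual algorithm, which handles the general case where method structure must also be induced.

```python
import random
from collections import defaultdict

# Hypothetical toy domain: a probabilistic HTN written as weighted
# decomposition rules, exactly like PCFG productions.
RULES = {
    "Travel":    [(0.7, ["GoByBus"]), (0.3, ["GoByTrain"])],
    "GoByBus":   [(1.0, ["buy_bus_ticket", "board_bus"])],
    "GoByTrain": [(1.0, ["buy_train_ticket", "board_train"])],
}
PRIMITIVE = {"buy_bus_ticket", "board_bus", "buy_train_ticket", "board_train"}

def sample_plan(task="Travel", rng=random):
    """Sample a primitive-action sequence by recursive stochastic decomposition."""
    if task in PRIMITIVE:
        return [task]
    r, cum = rng.random(), 0.0
    for prob, body in RULES[task]:
        cum += prob
        if r <= cum:
            return [a for subtask in body for a in sample_plan(subtask, rng)]
    return []  # unreachable if probabilities sum to 1

def estimate_method_probs(plans):
    """MLE estimate of method probabilities for the 'Travel' task from observed
    plans. The toy grammar is unambiguous, so each plan identifies its method;
    in general one would need an EM-style (inside-outside) procedure."""
    counts = defaultdict(int)
    for plan in plans:
        counts["GoByBus" if "board_bus" in plan else "GoByTrain"] += 1
    total = sum(counts.values())
    return {method: c / total for method, c in counts.items()}
```

Given a log of user-preferred plans, `estimate_method_probs` recovers the preference distribution over decompositions; sampling from the re-weighted HTN then generates plans distributed like the user's, which is the evaluation criterion described in the abstract.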