Comprehending action preconditions and effects is an essential step in modeling the dynamics of the world. In this paper, we express the semantics of precondition relations extracted from text in terms of planning operations. The challenge of modeling this connection is to ground language at the level of relations. This type of grounding enables us to create high-level plans based on language abstractions. Our model jointly learns to predict precondition relations from text and to perform high-level planning guided by those relations. We implement this idea in the reinforcement learning framework, using feedback automatically obtained from plan execution attempts. When applied to a complex virtual world and text describing that world, our relation extraction technique performs on par with a supervised baseline, yielding an F-measure of 66% compared to the baseline's 65%. Additionally, we show that a high-level planner utilizing these extracted relations significantly outperforms a strong, text-unaware baseline, successfully completing 80% of planning tasks compared to 69% for the baseline.
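The core idea of the abstract — a model that predicts precondition relations and is rewarded only by whether the resulting plans execute successfully — can be illustrated with a toy policy-gradient sketch. Everything below is an illustrative assumption, not the paper's actual model: a single weight per candidate relation stands in for a feature-based policy, a two-subgoal "world" stands in for the virtual world, and a REINFORCE-style update uses binary plan-execution success as the reward.

```python
import math
import random

random.seed(0)

# Toy world (assumed for illustration): subgoal "craft" genuinely
# requires "gather" to be completed first.
TRUE_PRECONDITIONS = {("gather", "craft")}

# Candidate relations the "text" suggests, in both directions.
CANDIDATES = [("gather", "craft"), ("craft", "gather")]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One weight per candidate pair stands in for a feature-based model.
weights = {c: 0.0 for c in CANDIDATES}

def execute_plan(predicted):
    """Stand-in for the simulator: reward 1 only if the plan respects
    the true preconditions and adds no spurious ordering constraints."""
    return 1.0 if predicted == TRUE_PRECONDITIONS else 0.0

ALPHA = 0.5      # learning rate
BASELINE = 0.5   # fixed reward baseline to reduce gradient variance

for _ in range(1000):
    # Stochastically predict a set of precondition relations.
    predicted, decisions = set(), []
    for c in CANDIDATES:
        p = sigmoid(weights[c])
        y = 1 if random.random() < p else 0
        if y:
            predicted.add(c)
        decisions.append((c, y, p))
    # Feedback comes only from attempting to execute the plan.
    reward = execute_plan(predicted)
    # REINFORCE update: gradient of the Bernoulli log-likelihood is (y - p).
    for c, y, p in decisions:
        weights[c] += ALPHA * (reward - BASELINE) * (y - p)

# After training, the policy should assign high probability to the
# true relation and low probability to the spurious reversed one.
print(sigmoid(weights[("gather", "craft")]),
      sigmoid(weights[("craft", "gather")]))
```

The key property mirrored here is that no labeled relations are needed: the only supervision is execution success, yet the expected gradient pushes probability mass toward orderings that make plans succeed.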