Inducing Features of Random Fields
IEEE Transactions on Pattern Analysis and Machine Intelligence
Understanding Natural Language
Introduction to Reinforcement Learning
Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data
ICML '01 Proceedings of the Eighteenth International Conference on Machine Learning
Understanding natural language instructions: the case of purpose clauses
ACL '92 Proceedings of the 30th annual meeting on Association for Computational Linguistics
Automatic optimization of dialogue management
COLING '00 Proceedings of the 18th conference on Computational linguistics - Volume 1
Spoken dialogue management using probabilistic reasoning
ACL '00 Proceedings of the 38th Annual Meeting on Association for Computational Linguistics
Automatic learning of dialogue strategy using dialogue simulation and reinforcement learning
HLT '02 Proceedings of the second international conference on Human Language Technology Research
Learning to sportscast: a test of grounded language acquisition
Proceedings of the 25th international conference on Machine learning
On the integration of grounding language and learning objects
AAAI'04 Proceedings of the 19th national conference on Artificial intelligence
Learning to connect language and perception
AAAI'08 Proceedings of the 23rd national conference on Artificial intelligence - Volume 3
Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic
Journal of Artificial Intelligence Research
Intentional context in situated natural language learning
CONLL '05 Proceedings of the Ninth Conference on Computational Natural Language Learning
Reading to learn: constructing features from semantic abstracts
EMNLP '09 Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 - Volume 2
Following directions using statistical machine translation
Proceedings of the 5th ACM/IEEE international conference on Human-robot interaction
Learning to follow navigational directions
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Reading between the lines: learning to map high-level instructions to commands
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Training a multilingual sportscaster: using perceptual context to learn language
Journal of Artificial Intelligence Research
Driving semantic parsing from the world's response
CoNLL '10 Proceedings of the Fourteenth Conference on Computational Natural Language Learning
A game-theoretic approach to generating spatial descriptions
EMNLP '10 Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing
Natural language generation as planning under uncertainty for spoken dialogue systems
Empirical methods in natural language generation
A reinforcement learning framework for answering complex questions
Proceedings of the 16th international conference on Intelligent user interfaces
Improving our reviewing processes
Computational Linguistics
Learning to win by reading manuals in a Monte-Carlo framework
HLT '11 Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1
Learning dependency-based compositional semantics
HLT '11 Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1
Confidence driven unsupervised semantic parsing
HLT '11 Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1
Bootstrapping semantic parsers from conversations
EMNLP '11 Proceedings of the Conference on Empirical Methods in Natural Language Processing
Learning from natural instructions
IJCAI'11 Proceedings of the Twenty-Second international joint conference on Artificial Intelligence - Volume Three
Learning to win by reading manuals in a Monte-Carlo framework
Journal of Artificial Intelligence Research
Learning high-level planning from text
ACL '12 Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1
Spice it up?: mining refinements to online instructions from user generated content
ACL '12 Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1
Corpus-based interpretation of instructions in virtual environments
ACL '12 Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2
Learning to interpret natural language instructions
SIAC '12 Proceedings of the Second Workshop on Semantic Interpretation in an Actionable Context
Toward learning perceptually grounded word meanings from unaligned parallel data
SIAC '12 Proceedings of the Second Workshop on Semantic Interpretation in an Actionable Context
Unsupervised PCFG induction for grounded language learning with highly ambiguous supervision
EMNLP-CoNLL '12 Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
Learning dependency-based compositional semantics
Computational Linguistics
The ontology lifecycle in RoboCup: population from text and execution
Robot Soccer World Cup XV
Learning from natural instructions
Machine Learning
In this paper, we present a reinforcement learning approach for mapping natural language instructions to sequences of executable actions. We assume access to a reward function that defines the quality of the executed actions. During training, the learner repeatedly constructs action sequences for a set of documents, executes those actions, and observes the resulting reward. We use a policy gradient algorithm to estimate the parameters of a log-linear model for action selection. We apply our method to interpret instructions in two domains: Windows troubleshooting guides and game tutorials. Our results demonstrate that this method can rival supervised learning techniques while requiring few or no annotated training examples.
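The core learning loop described in the abstract, a log-linear (softmax) policy over actions whose parameters are estimated by policy gradient from observed rewards, can be sketched in miniature as follows. This is an illustrative REINFORCE-style sketch, not the paper's implementation: the toy state space, the indicator feature map `features`, and the stand-in `reward` function are all assumptions made for the example.

```python
# A minimal policy-gradient sketch: a log-linear model p(a|s) ∝ exp(theta·phi(s,a))
# is trained from scalar rewards alone, with no annotated action sequences.
# The environment, feature map, and reward below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 3, 3
N_FEATURES = N_STATES * N_ACTIONS

def features(state, action):
    # Hypothetical feature map phi(state, action): a one-hot indicator
    # over (state, action) pairs, standing in for richer linguistic features.
    phi = np.zeros(N_FEATURES)
    phi[state * N_ACTIONS + action] = 1.0
    return phi

def action_probs(theta, state):
    # Log-linear action-selection model: softmax over theta·phi(s, a).
    scores = np.array([theta @ features(state, a) for a in range(N_ACTIONS)])
    scores -= scores.max()          # subtract max for numerical stability
    e = np.exp(scores)
    return e / e.sum()

def reward(state, action):
    # Stand-in reward function: 1 if the "correct" action was executed.
    return 1.0 if action == state else 0.0

def train(episodes=2000, lr=0.1):
    theta = np.zeros(N_FEATURES)
    for _ in range(episodes):
        state = int(rng.integers(N_STATES))
        p = action_probs(theta, state)
        a = int(rng.choice(N_ACTIONS, p=p))     # sample action from the policy
        r = reward(state, a)                    # execute and observe reward
        # REINFORCE update: grad log p(a|s) = phi(s,a) - E_p[phi(s,·)]
        expected = sum(p[b] * features(state, b) for b in range(N_ACTIONS))
        theta += lr * r * (features(state, a) - expected)
    return theta

theta = train()
```

After training, `action_probs(theta, s)` concentrates on the rewarded action for each state, illustrating how reward alone, without labeled action sequences, suffices to fit the log-linear policy.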