Most reinforcement-learning methods operate on propositional representations of the world state. Such representations are often intractably large and generalize poorly. Deictic representations are believed to be a viable alternative: they promise generalization while allowing the use of existing reinforcement-learning methods. Yet few experiments on learning with deictic representations have been reported in the literature. In this paper we explore the effectiveness of two forms of deictic representation and a naïve propositional representation in a simple blocks-world domain. We find, empirically, that the deictic representations actually worsen learning performance. We conclude with a discussion of possible causes of these results and of strategies for more effective learning in domains with objects.
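To make the contrast concrete, the sketch below (illustrative only, not the paper's implementation; all names and the tiny world are assumptions) encodes the same three-block world two ways: a naïve propositional encoding that enumerates an on(x, y) proposition for every ordered pair of places, and a deictic encoding that describes the world only relative to a marked "focus" block.

```python
# Illustrative sketch, not the paper's code: propositional vs. deictic
# state encodings for a tiny blocks world. `under[b]` names what block b
# rests on ("table" or another block); `above[b]` names the block on b.

def propositional_state(blocks, under):
    # One on(x, y) truth value per ordered (block, place) pair,
    # so the encoding grows quadratically with the number of blocks.
    places = list(blocks) + ["table"]
    return tuple(
        (f"on({x},{y})", under.get(x) == y)
        for x in blocks for y in places if x != y
    )

def deictic_state(focus, under, above):
    # Describes the world only relative to the marked (focused) block,
    # so the encoding size is constant no matter how many blocks exist.
    return {
        "focus-on-table": under.get(focus) == "table",
        "focus-is-clear": above.get(focus) is None,
        "block-under-focus": under.get(focus),
    }

# A stack: c on b, b on a, a on the table.
blocks = ["a", "b", "c"]
under = {"a": "table", "b": "a", "c": "b"}
above = {"a": "b", "b": "c", "c": None}

prop = propositional_state(blocks, under)   # 9 propositions for 3 blocks
deic = deictic_state("c", under, above)     # fixed-size, focus-relative
```

The deictic encoding's fixed size is exactly what promises generalization across worlds with different numbers of objects; the paper's finding is that, empirically, this did not translate into better learning.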