We investigate an interactive teaching scenario in which a human teaches a robot symbols that abstract the geometric properties of objects. There are two main motivations for this scenario. First, state-of-the-art methods for relational reinforcement learning demonstrate that strongly generalizing abstract models can be learned and employed with great success for goal-directed object manipulation. However, these methods rely on given grounded action and state symbols and raise the classical question: where do the symbols come from? Second, existing research on learning from human-robot interaction has focused mostly on the motion level (e.g., imitation learning). However, if the goal of teaching is to enable the robot to autonomously solve sequential manipulation tasks in a goal-directed manner, the human should be able to teach the robot the relevant abstractions for describing the task, so that the robot can eventually leverage powerful relational RL methods. In this paper we formalize human-robot teaching of grounded symbols as an active learning problem, in which the robot actively generates pick-and-place geometric situations that maximize its information gain about the symbol to be learned. We demonstrate that the learned symbols can be used by a robot in a relational RL framework to learn probabilistic relational rules and to solve object manipulation tasks in a goal-directed manner.
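The core active-learning idea in the abstract, generating the query situation about which the current symbol model is most uncertain, can be illustrated with a minimal sketch. All names here are hypothetical and the setting is deliberately simplified: a geometric symbol is reduced to a 1-D threshold on a feature (e.g., the gap between two objects), and predictive entropy serves as a stand-in for information gain; the paper's actual classifiers over geometric features are not reproduced.

```python
import math


def entropy(p):
    """Binary entropy in bits; 0 for a certain prediction."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))


class SymbolLearner:
    """Toy model of a symbol such as close(A, B): the symbol holds
    when a scalar feature (here, an object gap) is below an unknown
    threshold. Teacher labels shrink the interval [low, high] that
    is still consistent with all observations."""

    def __init__(self, low=0.0, high=1.0):
        self.low, self.high = low, high  # uncertain interval for the threshold

    def predict_proba(self, gap):
        # Below the interval the symbol certainly holds; above it,
        # it certainly does not; inside, the label is ambiguous.
        if gap < self.low:
            return 1.0
        if gap > self.high:
            return 0.0
        return 0.5

    def update(self, gap, label):
        # A positive label means the threshold lies above this gap;
        # a negative label means it lies below.
        if label:
            self.low = max(self.low, gap)
        else:
            self.high = min(self.high, gap)


def next_query(learner, candidate_gaps):
    """Active learning step: among candidate pick-and-place situations
    (summarized by their gap feature), choose the one whose predicted
    label is most uncertain, i.e., has maximal predictive entropy."""
    return max(candidate_gaps, key=lambda g: entropy(learner.predict_proba(g)))
```

After two teacher labels, e.g. `update(0.2, True)` and `update(0.8, False)`, the learner is certain about gaps of 0.1 and 0.9, so `next_query` on the candidates `[0.1, 0.5, 0.9]` selects the ambiguous situation at 0.5 and presents it to the teacher, which is the uncertainty-sampling behavior the abstract describes.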