In this paper, we investigate an approach that lets robots extract the preferences of human observers and combine them to generate new moves, improving the robot's dancing. Human preferences can be extracted even when the reward arrives a few steps after a dance movement; with this feedback, the robot performs more of what was preferred and less of what was not. Human observers watch robot-generated dance movements and provide feedback in real time; the robot then learns the observers' preferences and creates new dance movements based on varying percentages of those preferences; finally, the observers rate the robot's new dancing. Experimental results show that, using Interactive Reinforcement Learning, the robot learns the expressed preferences of human observers, and that dance routines based on the preferences of multiple observers are rated more highly.
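The core idea of the abstract — crediting a reward that arrives a few steps after a movement to the recently performed movements — can be sketched with an eligibility trace. The following is a minimal illustrative sketch, not the authors' implementation; the class name, parameters (`trace_decay`, `lr`), and move labels are all hypothetical:

```python
import random

class DancePreferenceLearner:
    """Illustrative sketch: delayed human feedback is spread over
    recently performed moves via a decaying eligibility trace."""

    def __init__(self, moves, trace_decay=0.6, lr=0.2):
        self.values = {m: 0.0 for m in moves}  # learned preference per move
        self.trace = {m: 0.0 for m in moves}   # eligibility of recent moves
        self.trace_decay = trace_decay
        self.lr = lr

    def perform(self, move):
        # decay all traces, then mark the move just performed
        for m in self.trace:
            self.trace[m] *= self.trace_decay
        self.trace[move] += 1.0

    def feedback(self, reward):
        # credit the (possibly delayed) reward to recent moves,
        # weighted by how recently each was performed
        for m, eligibility in self.trace.items():
            self.values[m] += self.lr * reward * eligibility

    def next_move(self, explore=0.1):
        # mostly exploit learned preferences, occasionally explore
        if random.random() < explore:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

learner = DancePreferenceLearner(["spin", "wave", "step"])
learner.perform("spin")
learner.perform("wave")
learner.feedback(+1.0)  # reward given after both moves credits each,
                        # with the most recent move ("wave") credited most
```

Under this sketch, preferences of multiple observers could be combined by averaging each learner's `values` before generating a routine, which loosely mirrors the multi-observer combination the abstract describes.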