We examine affective vocalizations provided by human teachers to robotic learners. In unscripted one-on-one interactions, participants gave vocal input to a robotic dinosaur as it selected toy buildings to knock down. We find that (1) people vary their vocal input according to the learner's performance history, (2) people do not wait for a robotic learner to complete an action before providing input, and (3) people naively and spontaneously use intensely affective prosody. These observations contradict assumptions common to machine learning algorithms, reinforcement learning in particular: that the reward function in a social learning interaction is stationary and path-independent. Our findings suggest that traditional machine learning models may need modification to better fit observed human tendencies. We also propose an interaction taxonomy that describes three phases of a human teacher's vocalizations: direction, spoken before an action is taken; guidance, spoken as the learner communicates an intended action; and feedback, spoken in response to a completed action.
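The contrast between the standard reinforcement-learning assumption and the observed teacher behavior can be sketched in a few lines. The function names, the action labels, and the specific scaling rule below are illustrative assumptions, not the paper's measured model; the point is only that the human-style reward depends on the learner's history, so it is neither stationary nor path-independent.

```python
def stationary_reward(action: str) -> float:
    """Standard RL assumption: reward depends only on the current
    action (and state), never on the learner's history."""
    return 1.0 if action == "knock_correct_building" else 0.0


def human_teacher_reward(action: str, history: list[str]) -> float:
    """Hypothetical history-dependent reward, mirroring finding (1):
    teachers vary their vocal input with the learner's performance
    record, making the signal path-dependent and non-stationary."""
    base = 1.0 if action == "knock_correct_building" else -1.0
    # Assumed for illustration: teachers intensify their response
    # after a run of recent failures.
    recent_failures = sum(
        1 for a in history[-3:] if a != "knock_correct_building"
    )
    return base * (1.0 + 0.5 * recent_failures)
```

Under this sketch the same correct action earns a reward of 1.0 from a learner with a clean record but a larger reward after three consecutive misses, which is exactly the kind of signal a stationary reward model cannot represent.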