Cost-Based Anticipatory Action Selection for Human–Robot Fluency
IEEE Transactions on Robotics
With the aim of improving fluency and efficiency in human-robot teams, we have developed a cognitive architecture based on the neuro-psychological principles of anticipation and perceptual simulation through top-down biasing. We implemented an instantiation of this architecture on a non-anthropomorphic robotic lamp performing a collaborative task with a human partner. In a human-subject study, in which the robot worked on a joint task with untrained subjects, our approach proved significantly more efficient and fluent than a comparable system without anticipatory perceptual simulation. We also show that the robot and the human increasingly contribute to the task at similar rates. In self-report measures, we find significant differences between the two conditions in the perceived team fluency, the team's improvement over time, and the robot's contribution to efficiency and fluency. We also find differences in verbal attitudes toward the robot: most notably, subjects working with the anticipatory robot attribute more positive and more human qualities to it, but display increased self-blame and self-deprecation.
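The core mechanism, anticipatory perceptual simulation through top-down biasing, can be illustrated with a toy sketch (not the authors' implementation; the task actions, transition model, and Bayesian evidence-accumulation scheme below are all assumptions for illustration). A transition model predicts the human's next action, and the robot's perceptual prior is biased toward that prediction, so the robot commits to a recognition decision after fewer observations than a baseline with a flat prior:

```python
# Hypothetical turn-taking task: the human cycles through three actions.
TRANSITIONS = {"fetch": "attach", "attach": "inspect", "inspect": "fetch"}

def steps_to_commit(true_action, prior, likelihood=0.7, threshold=0.95):
    """Accumulate Bayesian evidence for true_action until its
    posterior exceeds the commit threshold; return the step count."""
    actions = list(prior)
    p = dict(prior)
    steps = 0
    while p[true_action] < threshold:
        steps += 1
        # Each noisy observation favors the true action with prob. `likelihood`;
        # the remaining mass is split evenly over the distractors.
        for a in actions:
            p[a] *= likelihood if a == true_action else (1 - likelihood) / (len(actions) - 1)
        total = sum(p.values())
        for a in actions:
            p[a] /= total
    return steps

actions = ["fetch", "attach", "inspect"]
flat = {a: 1 / 3 for a in actions}            # reactive baseline: no anticipation

prev = "fetch"
predicted = TRANSITIONS[prev]                  # anticipate "attach"
biased = {a: (0.8 if a == predicted else 0.1) for a in actions}  # top-down bias

reactive = steps_to_commit("attach", flat)
anticipatory = steps_to_commit("attach", biased)
print(reactive, anticipatory)  # prints "3 2": the biased agent commits earlier
```

The efficiency gain in the study is analogous: biasing perception toward the simulated (anticipated) next event lets the robot act before bottom-up evidence alone would justify it, at the cost of slower recovery when the prediction is wrong.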