We introduce a new conversational Human-Robot Interaction (HRI) dataset in which a robot behaving realistically induces interactive behavior with and between humans. In our scenario, the humanoid robot NAO explains paintings in a room and then quizzes the participants, who are naive users. Because perceiving nonverbal cues, in addition to the spoken words, plays a major role in social interaction and in socially interactive robots, we have annotated the dataset extensively. It has been recorded and annotated to benchmark many perceptual tasks relevant to enabling a robot to converse with multiple humans: speaker localization and speech segmentation in the auditory domain; person tracking, pose estimation, nodding detection, and visual focus of attention estimation in the visual domain; and audio-visual tasks such as addressee detection. The NAO system states are also available. Compared with recordings made with a static camera, this corpus includes the head movements of a humanoid robot (due to gaze changes and nodding), which pose challenges for visual processing. The significant background noise present in a real HRI setting likewise makes the auditory tasks challenging.
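For readers planning to work with annotations of this kind, the sketch below illustrates one plausible way to organise the tiers mentioned above (speech segments with addressee labels, head pose, visual focus of attention, nodding, and logged robot states) per recording session. It is a minimal sketch under the assumption of a per-session structure; all class and field names are hypothetical and do not reflect the corpus's actual file format.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical organisation of the annotation tiers described in the abstract.
# Names and fields are illustrative only, not the corpus's real schema.

@dataclass
class Utterance:
    speaker_id: str                     # participant or robot producing the speech segment
    start_s: float                      # segment start, seconds from session start
    end_s: float                        # segment end
    addressee: Optional[str] = None     # addressee label, if annotated

@dataclass
class VisualFrameAnnotation:
    timestamp_s: float
    head_pose: Tuple[float, float, float]   # e.g. (yaw, pitch, roll) in degrees
    vfoa_target: Optional[str] = None        # visual focus of attention target
    nodding: bool = False                    # whether a nod is annotated at this frame

@dataclass
class Session:
    session_id: str
    utterances: List[Utterance] = field(default_factory=list)
    visual_annotations: List[VisualFrameAnnotation] = field(default_factory=list)
    robot_states: List[dict] = field(default_factory=list)  # logged NAO system states
```

Grouping all tiers under a single session object keeps the synchronised audio, visual, and robot-state streams aligned on a common time axis, which is the property such benchmarks typically rely on.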