An experiment was conducted to investigate whether human observers use knowledge of the differences in focus of attention in multiparty interaction to identify the speaker among the meeting participants. A virtual environment was used to ensure good stimulus control, with head orientation displayed as the only cue for focus of attention. The orientations were derived from a corpus of tracked head movements. We present some properties of the relation between head orientation and speaker/listener status as found in the corpus. The experiment indicates that people do use knowledge of the patterns in focus of attention to distinguish the speaker from the listeners, but human speaker-identification accuracy was rather low. Head orientation (or focus of attention) alone therefore does not provide a sufficient cue for reliable identification of the speaker in a multiparty setting.
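To make the underlying reasoning concrete, the sketch below shows one simple way such a heuristic could be operationalised: estimate each participant's focus of attention from head orientation (nearest co-participant within an angular cone), then nominate as speaker the participant attended to by the most others, on the premise that listeners tend to look at the speaker. This is an illustrative sketch only, not the paper's experimental procedure or corpus-analysis method; the function names (estimated_focus, likely_speaker), the 30-degree cone threshold, and the toy seating geometry are all assumptions introduced for the example.

```python
import numpy as np

def estimated_focus(positions, head_dirs, max_angle_deg=30.0):
    """For each participant, pick the co-participant whose direction best
    matches the head orientation (a common proxy for focus of attention).
    Returns None for a participant if nobody falls within the cone."""
    targets = []
    cos_thresh = np.cos(np.radians(max_angle_deg))
    for i, (p, d) in enumerate(zip(positions, head_dirs)):
        d = d / np.linalg.norm(d)
        best, best_cos = None, cos_thresh
        for j, q in enumerate(positions):
            if i == j:
                continue
            to_q = (q - p) / np.linalg.norm(q - p)
            c = float(d @ to_q)
            if c > best_cos:           # closer angular match than current best
                best, best_cos = j, c
        targets.append(best)
    return targets

def likely_speaker(targets, n_participants):
    """Heuristic: the participant attended to by the most others is the
    most plausible speaker."""
    votes = np.zeros(n_participants, dtype=int)
    for t in targets:
        if t is not None:
            votes[t] += 1
    return int(np.argmax(votes)), votes

# Hypothetical four-person meeting: seats at the corners of a table (metres),
# with participants 1-3 oriented roughly toward participant 0.
positions = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
head_dirs = np.array([[1.0, 0.5], [-1.0, 0.0], [-1.0, -1.0], [0.0, -1.0]])

targets = estimated_focus(positions, head_dirs)
speaker, votes = likely_speaker(targets, len(positions))
print(targets, speaker, votes)   # participant 0 collects the most "gaze votes"
```

Note that such a majority-attention heuristic is exactly what the experimental result qualifies: since head orientation alone proved an insufficient cue for reliable speaker identification, a practical system would need to combine it with other modalities (e.g., audio-based voice activity detection).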