Speech, gaze and head motion in a face-to-face collaborative task

  • Authors:
  • Sascha Fagel; Gérard Bailly

  • Affiliations:
  • GIPSA-lab, Grenoble, France; GIPSA-lab, Grenoble, France

  • Venue:
  • Proceedings of the Third COST 2102 International Training School on Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces: Theoretical and Practical Issues
  • Year:
  • 2010


Abstract

In the present work we observe two subjects interacting in a collaborative task in a shared environment. One goal of the experiment is to measure how behavior changes with respect to gaze when one interactant wears dark glasses, so that his/her gaze is not visible to the other. The results show that if one subject wears dark glasses while telling the other subject the position of a certain object, the other subject needs significantly more time to locate and move this object. Hence the eye gaze of one subject looking at a certain object, when visible, speeds up the localization of that object (a cube) by the other subject. The second goal of the ongoing work is to collect data on the multimodal behavior of one of the subjects, by means of audio recording as well as eye gaze and head motion tracking, in order to build a model that can be used to control a robot in a comparable scenario in future experiments.