Tracking body parts of multiple people for multi-person multimodal interface

  • Authors:
  • Sébastien Carbini, Jean-Emmanuel Viallet, Olivier Bernier, Bénédicte Bascle

  • Affiliations:
  • France Télécom R&D, Lannion Cedex, France (all authors)

  • Venue:
  • ICCV'05 Proceedings of the 2005 International Conference on Computer Vision in Human-Computer Interaction
  • Year:
  • 2005

Abstract

Although large displays could allow several users to work together and to move freely in a room, their associated interfaces are limited to contact devices that must generally be shared. This paper describes a novel interface called SHIVA (Several-Humans Interface with Vision and Audio) that allows several users to interact remotely with a very large display using both speech and gesture. The head and both hands of two users are tracked in real time by a stereo-vision-based system. From the positions of these body parts, the direction pointed by each user is computed, and selection gestures made with the second hand are recognized. The pointing gesture is fused with the n-best results from speech recognition, taking into account the application context. The system is tested on a chess game with two users playing on a very large display.
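The two core steps the abstract describes, computing a pointed display location from tracked body parts and fusing it with the n-best speech hypotheses, can be sketched roughly as below. This is an illustrative reconstruction, not the authors' implementation: the head-to-hand ray model, the display plane at z = 0, the chess-square vocabulary, and the `context_boost` weighting are all assumptions made for the example.

```python
import numpy as np

def pointing_target(head, hand, display_z=0.0):
    # Assumed pointing model: extend the head-to-hand ray until it
    # crosses the display plane (assumed to lie at z = display_z).
    head, hand = np.asarray(head, float), np.asarray(hand, float)
    direction = hand - head
    if abs(direction[2]) < 1e-9:
        return None  # ray parallel to the display plane
    t = (display_z - head[2]) / direction[2]
    if t <= 0:
        return None  # user is pointing away from the display
    return (head + t * direction)[:2]  # (x, y) on the display

def fuse(nbest, pointed_square, context_boost=2.0):
    # Hypothetical late fusion: re-weight speech hypotheses that are
    # consistent with the square the user is pointing at, then
    # renormalize and return the best hypothesis.
    scored = [(score * (context_boost if pointed_square in text else 1.0), text)
              for score, text in nbest]
    total = sum(s for s, _ in scored)
    return max(((s / total, t) for s, t in scored), default=None)

# Example: the user points at a spot on the display while the speech
# recognizer returns an n-best list of candidate chess moves.
target = pointing_target(head=(0.0, 1.7, 2.0), hand=(0.2, 1.4, 1.5))
best = fuse([(0.5, "move to e4"), (0.4, "move to e5")], "e4")
```

Here the pointing result disambiguates between acoustically similar hypotheses ("e4" vs. "e5"), which is the kind of application-context fusion the abstract refers to.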