Head-computer interface: a multimodal approach to navigate through real and virtual worlds

  • Authors:
  • Francesco Carrino; Julien Tscherrig; Elena Mugellini; Omar Abou Khaled; Rolf Ingold

  • Affiliations:
  • College of Engineering and Architecture of Fribourg and University of Fribourg, Switzerland; College of Engineering and Architecture of Fribourg, Switzerland; College of Engineering and Architecture of Fribourg, Switzerland; College of Engineering and Architecture of Fribourg, Switzerland; University of Fribourg, Switzerland

  • Venue:
  • HCII'11 Proceedings of the 14th international conference on Human-computer interaction: interaction techniques and environments - Volume Part II
  • Year:
  • 2011

Abstract

This paper presents a novel approach to multimodal interaction that combines the user's mental activity (thoughts and emotions), facial expressions, and head movements. To avoid the problems associated with computer vision (sensitivity to lighting changes, dependence on camera position, etc.), the proposed approach does not rely on optical techniques. Furthermore, to keep human communication and control smooth and to avoid other environmental artifacts, only non-verbal information is used. Head movements (rotations) are detected by a bi-axial gyroscope; facial expressions and gaze are identified by electromyography and electrooculography; emotions and thoughts are monitored by electroencephalography. To validate the proposed approach, we developed an application in which the user navigates a virtual world using only head movements; Google Street View serves as the virtual world. The application was designed for later integration with an electric wheelchair, replacing the virtual world with the real one. A first evaluation of the system is provided.
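The abstract describes mapping bi-axial gyroscope readings (head rotations) to navigation commands in the virtual world. As a minimal illustrative sketch, not the authors' actual implementation, the mapping could look like the following; the function name, threshold values, and command labels are assumptions introduced here:

```python
# Hypothetical sketch: mapping bi-axial gyroscope rotation rates
# (head yaw/pitch, in deg/s) to discrete navigation commands, in the
# spirit of the head-driven Street View navigation described above.
# Threshold values are illustrative, not taken from the paper.

YAW_THRESHOLD = 30.0    # deg/s needed to trigger a left/right turn
PITCH_THRESHOLD = 25.0  # deg/s needed to trigger forward/backward motion

def head_to_command(yaw_rate: float, pitch_rate: float) -> str:
    """Map head rotation rates to a navigation command.

    A dead zone below the thresholds suppresses involuntary head
    jitter, so only deliberate movements trigger navigation.
    """
    # Prefer the dominant axis when both exceed their thresholds.
    if abs(yaw_rate) >= YAW_THRESHOLD and abs(yaw_rate) >= abs(pitch_rate):
        return "turn_right" if yaw_rate > 0 else "turn_left"
    if abs(pitch_rate) >= PITCH_THRESHOLD:
        # Nodding forward (negative pitch rate here) moves forward.
        return "move_forward" if pitch_rate < 0 else "move_backward"
    return "idle"
```

In such a design, the dead-zone thresholds are what make the interface usable: small, unintentional head motions produce no command, while deliberate rotations map to one action at a time.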