Experimental evaluation of vision and speech based multimodal interfaces

  • Authors: Emilio Schapira, Rajeev Sharma
  • Affiliations: The Pennsylvania State University, University Park, Pennsylvania (both authors)
  • Venue: Proceedings of the 2001 Workshop on Perceptive User Interfaces
  • Year: 2001

Abstract

Progress in computer vision and speech recognition technologies has recently enabled multimodal interfaces that use speech and gestures. These technologies offer promising alternatives to existing interfaces because they emulate the natural way in which humans communicate. However, no systematic work has been reported that formally evaluates these new speech/gesture interfaces. This paper is concerned with the formal experimental evaluation of new human-computer interactions enabled by speech and hand gestures.

The paper describes an experiment conducted with 23 subjects that evaluates selection strategies for interaction with large screen displays. The multimodal interface designed for this experiment does not require the user to be in physical contact with any device; video cameras and long-range microphones serve as input to the system. Three selection strategies are evaluated, and results for different target sizes and positions are reported in terms of accuracy, selection time, and user preference. Design implications for vision/speech-based interfaces are inferred from these results. The study also raises new questions and topics for future research.
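
The abstract does not specify how the dependent measures were computed. As a rough illustration only, the sketch below shows one way per-strategy accuracy and mean selection time could be aggregated from logged trials, grouped by target size. All names here (the Trial record, strategy labels such as "point-and-speak", and field names) are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass
from statistics import mean
from collections import defaultdict

# Hypothetical trial record; field names are illustrative, not from the paper.
@dataclass
class Trial:
    strategy: str          # e.g. "point-and-speak" (assumed label)
    target_size: float     # target diameter, e.g. in pixels
    selected_correct: bool # whether the intended target was selected
    selection_time_s: float

def summarize(trials):
    """Aggregate accuracy and mean selection time per (strategy, target size)."""
    groups = defaultdict(list)
    for t in trials:
        groups[(t.strategy, t.target_size)].append(t)
    return {
        key: {
            "accuracy": sum(t.selected_correct for t in ts) / len(ts),
            "mean_time_s": mean(t.selection_time_s for t in ts),
            "n": len(ts),
        }
        for key, ts in groups.items()
    }

if __name__ == "__main__":
    trials = [
        Trial("point-and-speak", 64, True, 1.8),
        Trial("point-and-speak", 64, False, 2.4),
        Trial("point-and-dwell", 64, True, 2.9),
    ]
    for (strategy, size), stats in summarize(trials).items():
        print(f"{strategy}, {size}px: accuracy={stats['accuracy']:.2f}, "
              f"mean time={stats['mean_time_s']:.2f}s (n={stats['n']})")
```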