A study of manual gesture-based selection for the PEMMI multimodal transport management interface

  • Authors:
  • Fang Chen; Eric Choi; Julien Epps; Serge Lichman; Natalie Ruiz; Yu Shi; Ronnie Taib; Mike Wu

  • Affiliations:
  • All authors: National ICT Australia, Sydney, Australia; Natalie Ruiz and Ronnie Taib are also with The University of New South Wales, Sydney, Australia

  • Venue:
  • ICMI '05: Proceedings of the 7th International Conference on Multimodal Interfaces
  • Year:
  • 2005

Abstract

Operators of traffic control rooms are often required to respond quickly to critical incidents using a complex array of keyboards, mice, very large screen monitors and other peripheral equipment. To support the aim of finding more natural interfaces for this challenging application, this paper presents PEMMI (Perceptually Effective Multimodal Interface), a transport management control prototype that takes video-based manual gesture and speech recognition as inputs. A specific theme within this research is determining the optimum strategy for gesture input, in terms of both single-point selection and suitable multimodal feedback for selection. Users were found to prefer larger selection areas for targets in gesture interfaces, and tended to select within 44% of the target's selection radius. The minimum effective target size for 'device-free' gesture interfaces was found to be 80 pixels (on a 1280x1024 screen). This paper also shows that feedback on gesture input via large screens is enhanced by the use of both audio and visual cues to guide the user's multimodal input. Audio feedback in particular was found to improve user response time by an average of 20% over existing gesture selection strategies for multimodal tasks.
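
Since the abstract reports concrete selection parameters, the minimal sketch below illustrates how a single-point circular hit-test with combined audio and visual feedback might look, assuming a circular target whose diameter equals the reported 80-pixel minimum. All names (CircularTarget, give_feedback, etc.) are hypothetical illustrations, not the PEMMI implementation.

```python
# Illustrative sketch only: a hypothetical single-point hit-test for
# gesture-based selection, using the figures reported in the abstract
# (80-pixel minimum target size on a 1280x1024 screen; selections tended
# to fall within ~44% of the selection radius). Structure is assumed.
import math
from dataclasses import dataclass

MIN_TARGET_SIZE_PX = 80          # minimum effective target size (1280x1024 screen)
TYPICAL_HIT_FRACTION = 0.44      # users tended to select within 44% of the radius

@dataclass
class CircularTarget:
    cx: float
    cy: float
    size_px: float = MIN_TARGET_SIZE_PX  # diameter of the selectable area

    @property
    def radius(self) -> float:
        return self.size_px / 2.0

    def hit(self, x: float, y: float) -> bool:
        """True if a single-point gesture selection at (x, y) falls inside the target."""
        return math.hypot(x - self.cx, y - self.cy) <= self.radius

def give_feedback(target: CircularTarget, x: float, y: float) -> str:
    """Combine visual and audio cues on selection, as the study found both help."""
    if target.hit(x, y):
        return "highlight target + play confirmation tone"
    return "show cursor trail only"

# Example: a target centred on a 1280x1024 screen, probed 30 px from centre
if __name__ == "__main__":
    t = CircularTarget(cx=640, cy=512)
    print(give_feedback(t, 670, 512))   # within the 40 px radius -> selected
```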