Finding information and finding locations in a multimodal interface: a case study of an intelligent kiosk

  • Authors:
  • Loel Kim; Thomas L. McCauley; Melanie Polkosky; Sidney D'Mello; Sarah Craig; Bistra Nikiforova

  • Affiliations:
  • The University of Memphis, Memphis, TN (all authors)

  • Venue:
  • IASTED-HCI '07 Proceedings of the Second IASTED International Conference on Human Computer Interaction
  • Year:
  • 2007

Abstract

Increasingly, technology developers are turning to interactive, intelligent kiosks to provide routine communicative functions such as greeting and informing people as they enter public, corporate, retail, or healthcare spaces. A number of studies have found intelligent kiosks to be usable, with participants reporting them to be appealing, useful, and even entertaining. However, the field still lacks insight into the ways in which people use multimodal interfaces to seek information and accomplish tasks. The Memphis Intelligent Kiosk Initiative (MIKI) was designed for multimodal use, and although in usability testing it exemplified good interface design in a number of areas, the complexity of multiple modalities (animated graphics, speech technology, and an avatar greeter) complicated usability testing, leaving developers in need of improved evaluation instruments. In particular, factors such as the gender and technical background of the user appeared to change the way various kiosk tasks were perceived, and deficiencies were observed in the speech interaction as well as in the location information presented on a 3D animated map.