Error recovery in a blended style eye gaze and speech interface

  • Authors:
  • Yeow Kee Tan; Nasser Sherkat; Tony Allen

  • Affiliations:
  • Nottingham Trent University, Nottingham, UK; Nottingham Trent University, Nottingham, UK; Nottingham Trent University, Nottingham, UK

  • Venue:
  • Proceedings of the 5th international conference on Multimodal interfaces
  • Year:
  • 2003

Abstract

In earlier work [1][2], an eye gaze and speech enabled interface was found to be the most preferred data entry method when compared with alternatives such as mouse and keyboard, handwriting, and speech only. It was also found that several non-native speakers of United Kingdom (UK) English did not prefer the eye gaze and speech system because of its low success rate, caused by the inaccuracy of the speech recognition component. To make the eye gaze and speech data entry system more usable for these users, error recovery methods are therefore required. In this paper we present three multimodal interfaces that combine speech recognition and eye gaze tracking within a virtual keypad style interface to support error recovery: re-speak with keypad, spelling with keypad, and combined re-speak and spelling with keypad. Experiments show that this virtual keypad interface yields an accuracy gain for non-native speakers of 10.92% on the first attempt and 6.20% on re-speak in ambiguous fields (initials, surnames, city and alphabets) [3]. The aim of this work is to investigate whether the usability of the eye gaze and speech system can be improved by one of these three blended multimodal error recovery methods.
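
The paper does not include an implementation, but the blended idea of using a gaze-selected keypad group to constrain the speech recognizer during error recovery can be illustrated with a minimal sketch. The sketch below is a hypothetical Python illustration only: the keypad grouping, and names such as filter_by_keypad and recover, are assumptions for illustration and are not taken from the paper.

```python
# Hypothetical sketch: on re-speak, keep only recognizer hypotheses whose
# first letter belongs to the keypad group the user is fixating with eye gaze.
# The phone-style letter grouping and all function names are illustrative
# assumptions, not the authors' published implementation.

from typing import List, Optional

KEYPAD_GROUPS = {  # assumed phone-style grouping on the virtual keypad
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}


def filter_by_keypad(nbest: List[str], gazed_key: str) -> List[str]:
    """Keep hypotheses whose first letter matches the gazed keypad group."""
    letters = set(KEYPAD_GROUPS.get(gazed_key, ""))
    return [h for h in nbest if h and h[0].upper() in letters]


def recover(nbest: List[str], gazed_key: str) -> Optional[str]:
    """Return the best hypothesis consistent with the gaze-selected key,
    falling back to the recognizer's top result if none survive the filter."""
    constrained = filter_by_keypad(nbest, gazed_key)
    if constrained:
        return constrained[0]
    return nbest[0] if nbest else None


if __name__ == "__main__":
    # Re-speak example: the recognizer confuses "Bean" and "Dean" for a
    # surname field; the user fixates key "3" (D-E-F), resolving the ambiguity.
    hypotheses = ["Bean", "Dean", "Keane"]
    print(recover(hypotheses, "3"))  # -> "Dean"
```

In this reading, the spelling-with-keypad variant would apply the same constraint letter by letter instead of once per word; that interpretation is likewise an assumption based on the method names given in the abstract.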