Language technology in a predictive, restricted on-screen keyboard with dynamic layout for severely disabled people

  • Authors:
  • Anders Sewerin Johansen, John Paulin Hansen, Dan Witzner Hansen (The IT University of Copenhagen, Copenhagen, Denmark); Kenji Itoh, Satoru Mashino (Tokyo Institute of Technology, Tokyo, Japan)

  • Venue:
  • TextEntry '03: Proceedings of the 2003 EACL Workshop on Language Modeling for Text Entry Methods
  • Year:
  • 2003

Abstract

This paper describes the GazeTalk augmentative and alternative communication (AAC) system and presents results from two user studies of initial typing rates among novice users. GazeTalk can be operated with eye tracking, a mouse, or other pointing devices. Its user interface is based on 12 large on-screen buttons, and it supports a wide range of configurations, including several variants of probabilistic and ambiguous/clustered keyboards. The language model used in GazeTalk was trained on a corpus of text extracted from Usenet discussion groups. The user studies indicated that the prediction-based input system was less efficient than a static layout. User comments, however, suggest that this was mainly caused by design-related factors rather than by the basic design principles. In the next design iteration, we aim to eliminate these problems and to improve the language model by training it on a significantly larger corpus.
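To illustrate the basic mechanism behind such a dynamic layout, the sketch below shows one simple way a character-level n-gram model trained on a text corpus could rank the most probable next letters and assign them to the free keys. This is a minimal illustration under stated assumptions, not the authors' implementation: the trigram order, the toy corpus, and the number of letter keys are all hypothetical.

```python
# Minimal sketch (assumed: trigram order, 6 letter keys, toy corpus) of a
# character-level n-gram model driving a dynamic keyboard layout: the letters
# most likely to follow the typed prefix are placed on the available keys.
from collections import Counter, defaultdict

N = 3            # model order: predict the next character from up to 2 previous ones
LETTER_KEYS = 6  # hypothetical number of keys left for letters in a 12-button grid

def train(text, n=N):
    """Count next-character frequencies for every context of length 0..n-1."""
    counts = defaultdict(Counter)
    padded = " " * (n - 1) + text.lower()
    for i in range(n - 1, len(padded)):
        for order in range(n):                      # contexts "", "x", "xy", ...
            counts[padded[i - order:i]][padded[i]] += 1
    return counts

def predict_keys(counts, typed, n=N, k=LETTER_KEYS):
    """Rank the k most probable next characters, backing off to shorter
    contexts (ultimately the unigram distribution) when a context is unseen."""
    context = (" " * (n - 1) + typed.lower())[-(n - 1):]
    for start in range(len(context) + 1):
        ctx = context[start:]
        if ctx in counts:
            return [ch for ch, _ in counts[ctx].most_common(k)]
    return []

if __name__ == "__main__":
    corpus = "the quick brown fox jumps over the lazy dog. the dog barks."
    model = train(corpus)
    # After typing "th", the layout would offer 'e' first on this toy corpus.
    print(predict_keys(model, "th"))
```

In a full system the same ranking step would run after every keystroke, relabeling the letter keys so that the most probable continuations are always one selection away; the remaining keys can then be reserved for commands, space, and fallback access to less probable letters.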