SimpleFlow: enhancing gestural interaction with gesture prediction, abbreviation and autocompletion

  • Authors:
  • Mike Bennett; Kevin McCarthy; Sile O'Modhrain; Barry Smyth

  • Affiliations:
  • SCIEN, Department of Psychology, Stanford University and School of Computer Science, University College Dublin, Ireland; School of Computer Science, University College Dublin, Ireland; Sonic Arts Research Centre, Queen's University Belfast, UK; School of Computer Science, University College Dublin, Ireland

  • Venue:
  • INTERACT '11: Proceedings of the 13th IFIP TC 13 International Conference on Human-Computer Interaction - Volume Part I
  • Year:
  • 2011

Abstract

Gestural interfaces are now a familiar mode of user interaction, and gestural input is an important part of the way users interact with such interfaces. However, entering gestures accurately and efficiently can be challenging. In this paper we present two styles of visual gesture autocompletion for 2D predictive gesture entry. Both styles enable users to abbreviate gestures. We experimentally evaluate and compare both styles of visual autocompletion against each other and against non-predictive gesture entry. The better-performing visual autocompletion is referred to as SimpleFlow. Our findings establish that users of SimpleFlow take significant advantage of gesture autocompletion by entering partial gestures rather than whole gestures. Compared to non-predictive gesture entry, users enter partial gestures that are 41% shorter than the complete gestures, while simultaneously improving the accuracy (+13%, from 68% to 81%) and speed (+10%) of their gesture input. The results provide insights into why SimpleFlow leads to significantly enhanced performance, and show how predictive gestures with simple visual autocompletion affect the abbreviation, accuracy, speed and cognitive load of 2D gesture entry.
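
The abstract does not describe how the underlying gesture predictor works. As a rough illustration only, the sketch below (Python) shows one plausible approach, assumed rather than taken from the paper: matching a partial 2D stroke against arc-length prefixes of stored gesture templates, in the spirit of template-based recognizers such as the $1 recognizer. The function names, template vocabulary, and matching scheme are all illustrative assumptions.

```python
import math

def path_length(points):
    """Total arc length of a polyline given as (x, y) tuples."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def resample(points, n=32):
    """Resample a polyline to n evenly spaced points (as in the $1 recognizer)."""
    interval = path_length(points) / (n - 1)
    if interval == 0:
        return [points[0]] * n
    pts = list(points)
    out = [pts[0]]
    accumulated = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if accumulated + d >= interval and d > 0:
            t = (interval - accumulated) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)      # continue measuring from the inserted point
            accumulated = 0.0
        else:
            accumulated += d
        i += 1
    while len(out) < n:           # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def prefix(points, frac, n=64):
    """Leading `frac` of a polyline's arc length, as a point list."""
    pts = resample(points, n)
    return pts[:max(2, round(frac * n))]

def prefix_distance(partial, template):
    """Mean point-to-point distance between the partial stroke and the
    arc-length-matched prefix of a template."""
    frac = min(path_length(partial) / max(path_length(template), 1e-9), 1.0)
    a, b = resample(partial), resample(prefix(template, frac))
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def predict(partial, templates):
    """Return the template name whose prefix best matches the partial stroke."""
    return min(templates, key=lambda name: prefix_distance(partial, templates[name]))

# Hypothetical gesture vocabulary: a unit circle and a horizontal line.
templates = {
    "circle": [(math.cos(t / 10), math.sin(t / 10)) for t in range(64)],
    "line":   [(t / 10, 0.0) for t in range(11)],
}
half_line = [(t / 10, 0.0) for t in range(6)]  # user has drawn half the line
print(predict(half_line, templates))           # -> "line"
```

A matcher of this kind could supply the candidate completions that a visual autocompletion style renders, which is the partial-gesture abbreviation behaviour the abstract measures.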