Understanding Multi-touch Manipulation for Surface Computing

  • Authors:
  • Chris North (Virginia Tech, Blacksburg, USA); Tim Dwyer (Microsoft Research, Redmond, USA); Bongshin Lee (Microsoft Research, Redmond, USA); Danyel Fisher (Microsoft Research, Redmond, USA); Petra Isenberg (University of Calgary, Alberta, Canada); George Robertson (Microsoft Research, Redmond, USA); Kori Inkpen (Microsoft Research, Redmond, USA)

  • Venue:
  • INTERACT '09 Proceedings of the 12th IFIP TC 13 International Conference on Human-Computer Interaction: Part II
  • Year:
  • 2009

Abstract

Two-handed, multi-touch surface computing offers scope for interactions that are closer analogues to physical manipulation than classical windowed interfaces. Designing natural and intuitive gestures is difficult because we do not know how users will approach a new multi-touch interface or which gestures they will attempt to use. In this paper we study whether familiarity with other environments influences how users approach interaction with a multi-touch surface computer, as well as how efficiently they complete a simple task. Motivated by the need for object manipulation in information visualization applications, we asked users to carry out an object-sorting task on a physical table, on a tabletop display, and on a desktop computer with a mouse. To compare users' gestures, we produce a vocabulary of manipulation techniques that users apply in the physical world and compare it to the set of gestures that users attempted on the surface without training. We find that users who start with the physical condition finish the task faster when they move on to the surface than users who start with the mouse.