Toward compound navigation tasks on mobiles via spatial manipulation

  • Authors:
  • Michel Pahud; Ken Hinckley; Shamsi Iqbal; Abigail Sellen; Bill Buxton

  • Affiliations:
  • Microsoft Research, Redmond, Washington, United States (Pahud, Hinckley, Iqbal, Buxton); Microsoft Research, Cambridge, United Kingdom (Sellen)

  • Venue:
  • Proceedings of the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '13)
  • Year:
  • 2013

Abstract

We contrast the Chameleon Lens, which uses 3D movement of a mobile device held in the nonpreferred hand to support panning and zooming, with the Pinch-Flick-Drag metaphor of directly manipulating the view with multi-touch gestures. Lens-like approaches have significant potential because they can support navigation-selection, navigation-annotation, and other compound tasks by off-loading navigation to the nonpreferred hand while the preferred hand annotates, marks a location, or draws a path on the screen. Our experimental results show that the Chameleon Lens is significantly slower than Pinch-Flick-Drag for the navigation subtask in isolation. However, our studies also reveal that for navigation between a few known targets the lens is significantly faster, that the gap between the Chameleon Lens and Pinch-Flick-Drag narrows rapidly as users gain experience, and that in the context of a compound navigation-annotation task the lens performs as well as Pinch-Flick-Drag despite its deficit on the navigation subtask itself.
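The abstract names the technique without spelling out its mechanics. For readers curious how a tracked 3D device pose might drive a 2D pan/zoom view, the sketch below is a minimal, platform-neutral illustration in Python. The class name `ChameleonLensMapper`, the clutch-based engage/release model, and all gain constants are assumptions made for illustration; they are not details taken from the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Viewport:
    center_x: float = 0.0   # world coordinates of the view center
    center_y: float = 0.0
    zoom: float = 1.0       # screen pixels per world unit

class ChameleonLensMapper:
    """Hypothetical mapping from 3D device displacement to pan/zoom.

    Moving the device in its x/y plane pans the view; moving it along z
    (toward or away from the body) zooms. Gain values are placeholders.
    """
    def __init__(self, pan_gain=2.0, zoom_gain=1.5):
        self.pan_gain = pan_gain    # world units panned per metre of motion
        self.zoom_gain = zoom_gain  # zoom growth rate per metre along z
        self._ref_pose = None       # device pose at clutch engagement
        self._ref_view = None       # viewport at clutch engagement

    def engage(self, pose, view):
        """Clutch in: record the pose and viewport to offset from."""
        self._ref_pose = pose
        self._ref_view = Viewport(view.center_x, view.center_y, view.zoom)

    def release(self):
        """Clutch out: further device motion no longer affects the view."""
        self._ref_pose = None

    def update(self, pose):
        """Return the viewport implied by the current pose, or None."""
        if self._ref_pose is None:
            return None
        dx = pose[0] - self._ref_pose[0]
        dy = pose[1] - self._ref_pose[1]
        dz = pose[2] - self._ref_pose[2]
        # Exponential zoom so equal hand motions feel like equal zoom steps.
        zoom = self._ref_view.zoom * math.exp(self.zoom_gain * dz)
        # Divide pan by zoom so panning speed tracks the content scale.
        return Viewport(
            self._ref_view.center_x - self.pan_gain * dx / zoom,
            self._ref_view.center_y - self.pan_gain * dy / zoom,
            zoom,
        )

# Usage: clutch in, then move the device 10 cm right and 20 cm forward.
lens = ChameleonLensMapper()
lens.engage(pose=(0.0, 0.0, 0.0), view=Viewport())
print(lens.update(pose=(0.10, 0.0, 0.20)))
```

One design choice worth noting: dividing the pan offset by the current zoom keeps hand motion proportional to on-screen motion at any magnification, which matters when navigation is off-loaded to the nonpreferred hand while the preferred hand annotates.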