Sensor design and interaction techniques for gestural input to smart glasses and mobile devices

  • Authors: Andrea Colaço
  • Affiliations: Massachusetts Institute of Technology, Cambridge, MA, USA
  • Venue: Proceedings of the adjunct publication of the 26th annual ACM symposium on User interface software and technology
  • Year: 2013

Abstract

Touchscreen interfaces for small display devices have several limitations: the act of touching the screen occludes the display, interface elements like keyboards consume precious display real estate, and even simple tasks like document navigation, which a user performs effortlessly with a mouse and keyboard, require repeated actions like pinch-and-zoom with touch input. More recently, smart glasses with limited or no touch input have begun to emerge commercially; the primary input to these systems so far has been voice. In this paper, we explore the space around the device as a means of touchless gestural input to devices with small or no displays. Capturing gestural input in the surrounding volume requires sensing the human hand. To achieve this, we built Mime [3], a compact, low-power 3D sensor for short-range gestural control of small display devices. The sensor is based on a novel signal processing pipeline and is built from standard off-the-shelf components. Using Mime, we demonstrated a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight. In my thesis, I will continue to extend the sensor's capabilities to support new interaction styles.
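The abstract does not detail how sensed hand positions drive the interface, so the sketch below is one plausible mapping, assuming the sensor reports a 3D hand position per frame. `ViewState`, `on_hand_sample`, and the gain constants are hypothetical illustrations, not Mime's published API: lateral hand deltas pan a document and depth deltas zoom it, addressing the pinch-and-zoom burden the abstract describes.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ViewState:
    pan_x: float = 0.0   # document offset in pixels
    pan_y: float = 0.0
    zoom: float = 1.0
    prev: Optional[Tuple[float, float, float]] = None  # last hand sample

PAN_GAIN = 2000.0   # pixels of pan per meter of lateral hand motion (assumed)
ZOOM_GAIN = 5.0     # zoom scaling per meter of depth change (assumed)

def on_hand_sample(view: ViewState,
                   hand_xyz: Tuple[float, float, float]) -> ViewState:
    """Fold one 3D hand-position sample into the document view.

    hand_xyz is (x, y, z) in meters relative to the device, with z the
    distance from the sensor. Lateral deltas pan; depth deltas zoom.
    """
    if view.prev is not None:
        dx = hand_xyz[0] - view.prev[0]
        dy = hand_xyz[1] - view.prev[1]
        dz = hand_xyz[2] - view.prev[2]
        view.pan_x += PAN_GAIN * dx
        view.pan_y += PAN_GAIN * dy
        # Moving the hand toward the sensor (dz < 0) zooms in;
        # clamp to keep the view usable.
        view.zoom = max(0.1, min(view.zoom * (1.0 - ZOOM_GAIN * dz), 10.0))
    view.prev = hand_xyz
    return view

# Example: three successive hand samples drifting right and closer.
view = ViewState()
for sample in [(0.00, 0.00, 0.30), (0.01, 0.00, 0.28), (0.02, -0.01, 0.25)]:
    on_hand_sample(view, sample)
print(view.pan_x, view.pan_y, view.zoom)
```

Mapping frame-to-frame deltas rather than absolute positions avoids any need to calibrate a neutral hand pose, which seems consistent with on-the-move use, though the actual mapping in Mime may differ.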