Recognizing shapes and gestures using sound as feedback

  • Authors: Javier Sanchez
  • Affiliation: Stanford University, Stanford, CA, USA
  • Venue: CHI '10 Extended Abstracts on Human Factors in Computing Systems
  • Year: 2010

Abstract

The main goal of this research work is to show the possibility of using sound feedback techniques to recognize shapes and gestures. The system is based on the idea of relating spatial representations to sound. The shapes are predefined and the user has no access to any visual information. The user interacts with the system through a standard pointing device, such as a mouse or a pen tablet, or the touch screen of a mobile device. While the user explores the space with the pointing device, the system generates sound whose pitch and intensity vary according to a defined mapping strategy. Because the sounds are tied to the spatial representation, the user builds an auditory perception of shapes and gestures and can follow them easily with the pointing device, using sound as the only reference.
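
As a rough illustration of the kind of distance-to-sound mapping the abstract describes, the sketch below sonifies a single predefined shape (a circle): the pointer's distance to the shape's outline drives pitch and intensity, with sound getting higher and louder as the pointer approaches the shape. The choice of shape, the frequency range, and the linear mapping are assumptions made for illustration only; they are not the paper's actual sonification strategy.

import math

# Assumed predefined shape: a circle in normalized screen coordinates.
CENTER_X, CENTER_Y = 0.5, 0.5
RADIUS = 0.3

# Assumed sonification parameters (illustrative, not from the paper).
F_MIN, F_MAX = 220.0, 880.0   # pitch range in Hz
MAX_DIST = 0.5                # distance at which the sound fades to silence

def sonify(x, y):
    """Map a pointer position to (frequency in Hz, intensity in [0, 1])."""
    # Distance from the pointer to the nearest point on the circle outline.
    dist_to_outline = abs(math.hypot(x - CENTER_X, y - CENTER_Y) - RADIUS)
    # Normalize and clip: 0 when on the shape, 1 at MAX_DIST or farther.
    t = min(dist_to_outline / MAX_DIST, 1.0)
    frequency = F_MAX - t * (F_MAX - F_MIN)   # closer -> higher pitch
    intensity = 1.0 - t                       # closer -> louder
    return frequency, intensity

# Example: sample a horizontal pointer path that crosses the circle and
# print the sound feedback the user would hear at each position.
for x in (0.0, 0.2, 0.5, 0.8, 1.0):
    freq, amp = sonify(x, 0.5)
    print(f"pointer=({x:.2f}, 0.50)  freq={freq:6.1f} Hz  intensity={amp:.2f}")

In a real interactive system the (frequency, intensity) pair would be fed to a continuous synthesizer and updated on every pointer-move event, so the user hears the sound change smoothly while tracing the hidden shape.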