Leveraging behavioral models of sounding objects for gesture-controlled sound design

  • Authors:
  • Kristian Gohlke; David Black; Jörn Loviscach

  • Affiliations:
  • University of Applied Sciences Bremen, Bremen, Germany; University of Applied Sciences Bremen, Bremen, Germany; University of Applied Sciences, Bielefeld, Germany

  • Venue:
  • Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction
  • Year:
  • 2011

Abstract

Sound designers and Foley artists have long struggled to create expressive soundscapes with standard editing software, devoting much time to calibrating multiple sound samples and adjusting parameters. We present an intuitive approach that exploits off-the-shelf motion-sensing input devices for quick, fluid interaction with sound: gestures trigger and modulate digital sound generators based on adaptable behavioral models of familiar physical sounding objects. Rather than requiring profound technical knowledge of sound design, the system leverages the user's motor memory and motion skills to mimic generic, familiar interactions with everyday sounding objects. This lets the user focus fully on the expressive act of sound creation while enjoying a fluent workflow and a satisfying user experience.
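
To illustrate the core idea of a behavioral model mediating between raw gesture data and a sound generator, the minimal, self-contained Python sketch below models a hypothetical "shaker": accelerometer frames excite the model, and the stored excitation stochastically emits particle-collision grain events whose density and amplitude follow the vigor of the gesture. The class name, thresholds, decay constant, and event fields are illustrative assumptions for this sketch, not the implementation described in the paper.

import math
import random

# Hedged sketch of a "shaker"-style behavioral model: raw accelerometer
# frames from a motion controller excite the model, which emits grain
# events for a sound generator to render. All constants are assumptions.

SHAKE_THRESHOLD = 1.4   # in g; gesture energy needed to excite the model (assumed)
ENERGY_DECAY = 0.92     # per-frame decay of stored excitation (assumed)

class ShakerModel:
    """Behavioral model of a shaken container of loose particles."""

    def __init__(self):
        self.energy = 0.0  # excitation accumulated from recent gestures

    def update(self, accel_xyz):
        """Feed one accelerometer frame; return grain events to render."""
        magnitude = math.sqrt(sum(a * a for a in accel_xyz))
        # Excite the model only when the gesture exceeds the threshold.
        if magnitude > SHAKE_THRESHOLD:
            self.energy += magnitude - SHAKE_THRESHOLD
        self.energy *= ENERGY_DECAY

        # Stochastically emit particle-collision grains; more vigorous
        # gestures yield denser and louder grains.
        events = []
        if random.random() < min(1.0, self.energy):
            events.append({
                "amplitude": min(1.0, 0.3 * self.energy),
                "pitch_shift": random.uniform(-2.0, 2.0),  # semitones
            })
        return events

if __name__ == "__main__":
    model = ShakerModel()
    # Simulated gesture: rest, a burst of vigorous shaking, then rest.
    frames = [(0.0, 0.0, 1.0)] * 5 + [(2.5, 0.3, 1.0)] * 10 + [(0.0, 0.0, 1.0)] * 20
    for t, frame in enumerate(frames):
        for event in model.update(frame):
            print(f"frame {t:02d}: trigger grain {event}")

The per-frame decay lets grains continue briefly after the gesture stops, mimicking particles settling in a real shaker; this kind of physically plausible afterglow is what distinguishes a behavioral model from a direct one-to-one mapping of sensor values onto synthesis parameters.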