GRASSP: gesturally-realized audio, speech and song performance

  • Authors:
  • Bob Pritchard; Sidney Fels

  • Affiliations:
  • University of British Columbia, Vancouver, B.C., Canada (both authors)

  • Venue:
  • NIME '06: Proceedings of the 2006 Conference on New Interfaces for Musical Expression
  • Year:
  • 2006


Abstract

We describe the implementation of an environment for Gesturally-Realized Audio, Speech and Song Performance (GRASSP), which includes a glove-based interface, a mapping/training interface, and a collection of Max/MSP/Jitter bpatchers that allow the user to improvise speech, song, sound synthesis, sound processing, sound localization, and video processing. The mapping/training interface provides a framework for performers to specify by example the mapping between gesture and sound or video controls. We demonstrate the effectiveness of the GRASSP environment for gestural control of musical expression by creating a gesture-to-voice system that is currently being used by performers.
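The abstract's key technical idea is specifying the gesture-to-sound mapping "by example": the performer records pairs of glove readings and sound-control settings, and the system interpolates between them at performance time. The following is a minimal, hypothetical sketch of that idea (not the authors' Max/MSP implementation), using inverse-distance-weighted interpolation over stored examples; all function and variable names here are illustrative assumptions.

```python
# Hypothetical sketch of mapping-by-example: stored (gesture, parameter)
# pairs are interpolated to produce sound-control values for new gestures.
# This is an illustration of the concept, not the GRASSP codebase.
import math

def train(examples):
    """examples: list of (gesture_vector, param_vector) pairs recorded
    by the performer during a training session."""
    return list(examples)

def map_gesture(model, gesture, eps=1e-9):
    """Return interpolated synthesis parameters for a new gesture vector,
    weighting each stored example by the inverse of its distance."""
    weights = []
    n_params = len(model[0][1])
    for g, p in model:
        d = math.dist(gesture, g)
        if d < eps:                 # exact match: reuse stored parameters
            return list(p)
        weights.append((1.0 / d, p))
    total = sum(w for w, _ in weights)
    return [sum(w * p[i] for w, p in weights) / total
            for i in range(n_params)]

# Illustrative training data: open hand -> low pitch, closed fist -> high pitch
model = train([([0.0, 0.0], [220.0]),
               ([1.0, 1.0], [880.0])])
print(map_gesture(model, [0.5, 0.5]))   # a midway gesture yields a midway pitch
```

In a real-time setting such a mapping would run per control frame on incoming glove sensor data; the by-example approach matters because it lets the performer define the mapping through demonstration rather than by hand-editing parameter curves.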