A multimodal probabilistic model for gesture-based control of sound synthesis

  • Authors:
  • Jules Françoise; Norbert Schnell; Frédéric Bevilacqua

  • Affiliations:
  • IRCAM CNRS UPMC, Paris, France (all authors)

  • Venue:
  • Proceedings of the 21st ACM international conference on Multimedia
  • Year:
  • 2013

Abstract

In this paper, we propose a multimodal approach to creating the mapping between gesture and sound in interactive music systems. Specifically, we use a multimodal HMM to jointly model gesture and sound parameters. Our approach is compatible with a learning method that allows users to define gesture-sound relationships interactively. We describe an implementation of this method for the control of physical modeling sound synthesis. The model shows promise for capturing expressive gesture variations while guaranteeing a consistent relationship between gesture and sound.
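
The sketch below is not the authors' implementation; it only illustrates the general idea of a multimodal HMM, assuming gesture and sound parameter frames are concatenated into joint observation vectors, an HMM is trained on the joint data (here with hmmlearn), and at performance time sound parameters are estimated from the gesture part alone via per-state Gaussian regression. Feature dimensionalities, the number of states, and the frame-wise weighting (instead of full forward inference) are assumptions made for illustration.

    # Minimal sketch of joint gesture-sound modeling with an HMM (illustrative only).
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    N_GESTURE, N_SOUND, N_STATES = 6, 4, 8   # assumed dimensionalities

    # --- Training: one demonstration of synchronized gesture and sound frames ---
    T = 500
    gesture = np.random.randn(T, N_GESTURE)   # placeholder gesture features
    sound = np.random.randn(T, N_SOUND)       # placeholder synthesis parameters
    joint = np.hstack([gesture, sound])       # multimodal observation vectors

    hmm = GaussianHMM(n_components=N_STATES, covariance_type="full", n_iter=50)
    hmm.fit(joint)

    # --- Performance: estimate sound parameters from a new gesture frame ---
    def predict_sound(x_gesture):
        """Frame-wise regression: weight each state's conditional mean
        E[sound | gesture, state] by that state's gesture likelihood."""
        g = slice(0, N_GESTURE)
        s = slice(N_GESTURE, N_GESTURE + N_SOUND)
        weights, cond_means = [], []
        for k in range(N_STATES):
            mu, cov = hmm.means_[k], hmm.covars_[k]
            cov_gg = cov[g, g]
            inv_gg = np.linalg.inv(cov_gg)
            diff = x_gesture - mu[g]
            # Likelihood of the gesture part under state k (marginal Gaussian)
            lik = np.exp(-0.5 * diff @ inv_gg @ diff) / np.sqrt(
                (2 * np.pi) ** N_GESTURE * np.linalg.det(cov_gg))
            # Conditional mean of the sound part given the observed gesture
            cond = mu[s] + cov[s, g] @ inv_gg @ diff
            weights.append(lik)
            cond_means.append(cond)
        w = np.asarray(weights)
        w = w / w.sum() if w.sum() > 0 else np.full(N_STATES, 1.0 / N_STATES)
        return np.asarray(cond_means).T @ w   # mixture of conditional means

    new_frame = np.random.randn(N_GESTURE)
    print(predict_sound(new_frame))           # estimated sound parameter vector

In practice the training sequences would be recorded gesture demonstrations aligned with sound parameter trajectories, and the frame-wise weighting above would be replaced by proper temporal inference over the HMM states.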