This paper is about mapping strategies between gesture data and synthesis model parameters by means of perceptual spaces. We define three layers in the mapping chain: from gesture data to a gesture perceptual space, from a sound perceptual space to synthesis model parameters, and between the two perceptual spaces. This approach makes the implementation highly modular. We develop both perceptual spaces and describe their features. To keep the mapping between the gesture perceptual subspace and the sound perceptual subspace simple, we focus our attention on the two other mappings. We describe the mapping types: explicit/implicit and static/dynamic. We also present the technical and aesthetic limits introduced by mapping. Practical examples are given of the use of perceptual spaces in experiments carried out at LMA in a musical context. Finally, we discuss several implications of the mapping strategies: the influence of the chosen mapping limits on performers' virtuosity, and the impact of mapping on the learning process with virtual instruments and on improvisation possibilities.
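The three-layer chain can be sketched in code. This is a minimal illustration only, assuming hypothetical feature names (speed, energy, brightness, loudness) and synthesis parameters (cutoff_hz, amp); the paper's actual perceptual spaces and mappings are not specified here.

```python
# Sketch of a modular three-layer mapping chain, as outlined above.
# All feature names, ranges, and formulas below are illustrative
# assumptions, not the authors' actual perceptual spaces.

def gesture_to_perceptual(raw):
    """Layer 1: raw gesture data -> gesture perceptual space."""
    # e.g. derive perceived speed and energy from sensor deltas/pressure
    return {
        "speed": abs(raw["dx"]) + abs(raw["dy"]),
        "energy": raw["pressure"] ** 2,
    }

def perceptual_to_perceptual(gesture_feat):
    """Layer 2: gesture perceptual space -> sound perceptual space.

    Kept deliberately simple (near one-to-one), so that the perceptual
    complexity lives in the two outer layers.
    """
    return {
        "brightness": gesture_feat["speed"],
        "loudness": gesture_feat["energy"],
    }

def sound_to_synthesis(sound_feat):
    """Layer 3: sound perceptual space -> synthesis model parameters."""
    return {
        "cutoff_hz": 200.0 + 5000.0 * min(sound_feat["brightness"], 1.0),
        "amp": min(sound_feat["loudness"], 1.0),
    }

def mapping_chain(raw):
    # Modular composition: each layer can be swapped independently,
    # e.g. replacing layer 2 with a dynamic (time-varying) mapping.
    return sound_to_synthesis(
        perceptual_to_perceptual(gesture_to_perceptual(raw))
    )

params = mapping_chain({"dx": 0.2, "dy": 0.1, "pressure": 0.8})
```

Because each layer is an independent function over a named feature space, a static explicit mapping (as here) can later be replaced by an implicit or dynamic one without touching the other two layers.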