Real-time performance controllers for synthesized singing
NIME '05: Proceedings of the 2005 Conference on New Interfaces for Musical Expression
HandSketch bi-manual controller: investigation on expressive control issues of an augmented tablet
NIME '07: Proceedings of the 7th International Conference on New Interfaces for Musical Expression
Tangible and body-related interaction techniques for a singing voice synthesis installation
Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction
In this paper, a new voice source model for real-time gesture-controlled voice synthesis is described. The synthesizer is based on a causal-anticausal model of the voice source, a new approach that gives accurate control over voice source dimensions such as tenseness and vocal effort. Aperiodic components are also modelled, yielding an elaborate model suitable not only for lyrical singing but also for musical styles that play with voice quality. The model is tested with several gestural control interfaces: data glove, keyboard, graphic tablet, and pedal board. Depending on the parameter-to-interface mapping, several instruments with different musical abilities are designed, taking advantage of the highly expressive possibilities of the synthesis model.
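To make the causal-anticausal idea concrete, the sketch below generates one period of a glottal flow derivative in the spirit of such models: an anticausal second-order resonator (the "glottal formant", obtained by filtering a time-reversed impulse) shapes the open phase leading up to the glottal closure instant, and a causal first-order low-pass models spectral tilt, which relates to tenseness and vocal effort. This is a minimal illustration, not the paper's implementation; the function name `calm_pulse` and the parameter values (`fg_ratio`, `bw`, `tilt_hz`) are illustrative assumptions.

```python
import numpy as np

def calm_pulse(f0=110.0, fs=16000, fg_ratio=1.5, bw=200.0, tilt_hz=3000.0):
    """One period of a glottal flow derivative, causal-anticausal style (sketch).

    Anticausal part: second-order resonator run backwards in time over an
    impulse placed at the glottal closure instant, so its response builds
    up *before* the impulse (the open phase).
    Causal part: first-order low-pass applied forwards, modelling spectral
    tilt. All parameter values here are illustrative, not from the paper.
    """
    n = int(fs / f0)                       # samples per period
    fg = fg_ratio * f0                     # glottal formant frequency (Hz)
    r = np.exp(-np.pi * bw / fs)           # resonator pole radius from bandwidth
    theta = 2 * np.pi * fg / fs            # resonator pole angle
    a1, a2 = 2 * r * np.cos(theta), -r * r

    x = np.zeros(n)
    x[-1] = 1.0                            # impulse at the glottal closure instant

    # Anticausal filtering = causal filtering of the time-reversed signal,
    # then reversing the result back.
    xr = x[::-1]
    y = np.zeros(n)
    for i in range(n):
        y[i] = (xr[i]
                + a1 * (y[i - 1] if i > 0 else 0.0)
                + a2 * (y[i - 2] if i > 1 else 0.0))
    y = y[::-1]

    # Causal first-order low-pass: spectral tilt (more tilt = laxer voice).
    b = np.exp(-2 * np.pi * tilt_hz / fs)
    out = np.zeros(n)
    for i in range(n):
        out[i] = (1 - b) * y[i] + b * (out[i - 1] if i > 0 else 0.0)
    return out
```

In a real-time instrument the gestural parameters would be mapped onto quantities like `fg_ratio` and `tilt_hz` each period, which is what makes this decomposition attractive for expressive control: the anticausal resonator and the causal tilt filter each correspond to a perceptually meaningful voice dimension.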