Gesture-based control of physical modeling sound synthesis: a mapping-by-demonstration approach
Proceedings of the 21st ACM international conference on Multimedia
In this paper, we propose a multimodal approach to creating the mapping between gesture and sound in interactive music systems. Specifically, we use a multimodal HMM to jointly model the gesture and sound parameters. Our approach is compatible with an interactive learning method that allows users to define gesture--sound relationships by demonstration. We describe an implementation of this method for the control of physical modeling sound synthesis. Our model shows promise for capturing expressive gesture variations while guaranteeing a consistent relationship between gesture and sound.
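The core idea of the abstract — training a joint model over concatenated gesture and sound features from a demonstration, then regressing sound parameters from gesture input alone at performance time — can be sketched as follows. This is a hypothetical, simplified illustration, not the authors' implementation: a full multimodal HMM would add transition probabilities and temporal decoding, whereas here each "state" is just a Gaussian mean fit to a uniform segment of one demonstration, which keeps the sketch dependency-free. The function names `train_joint_states` and `map_gesture_to_sound` are invented for this example.

```python
import numpy as np

def train_joint_states(gesture, sound, n_states=4):
    """Split one gesture--sound demonstration into n_states consecutive
    segments and store each segment's joint (gesture mean, sound mean).
    A real multimodal HMM would also learn covariances and transitions."""
    T = len(gesture)
    bounds = np.linspace(0, T, n_states + 1).astype(int)
    states = []
    for k in range(n_states):
        g_mean = gesture[bounds[k]:bounds[k + 1]].mean(axis=0)
        s_mean = sound[bounds[k]:bounds[k + 1]].mean(axis=0)
        states.append((g_mean, s_mean))
    return states

def map_gesture_to_sound(states, g):
    """Regress sound parameters from one gesture frame: select the state
    whose gesture mean is nearest and emit its associated sound mean."""
    dists = [np.linalg.norm(g - g_mean) for g_mean, _ in states]
    return states[int(np.argmin(dists))][1]

# Toy demonstration: a 1-D gesture ramp paired with a decreasing
# sound parameter (e.g., damping of a physical model).
t = np.linspace(0.0, 1.0, 100)
gesture = t[:, None]           # gesture feature: position rising 0 -> 1
sound = (1.0 - t)[:, None]     # sound parameter: falling 1 -> 0

states = train_joint_states(gesture, sound, n_states=4)
low = map_gesture_to_sound(states, np.array([0.05]))   # early in the gesture
high = map_gesture_to_sound(states, np.array([0.95]))  # late in the gesture
```

Because the model is trained on joint features, the learned gesture--sound association is preserved at runtime: a gesture frame near the start of the demonstration retrieves a high sound-parameter value, and a frame near the end retrieves a low one.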