A multimodal probabilistic model for gesture-based control of sound synthesis
Proceedings of the 21st ACM international conference on Multimedia
In this paper we address the issue of mapping between gesture and sound in interactive music systems. Our approach, which we call mapping by demonstration, aims at learning the mapping from examples that users provide while interacting with the system. We propose a general framework for modeling gesture-sound sequences based on a probabilistic, multimodal, and hierarchical model. We detail two orthogonal modeling aspects and describe planned research directions for improving and evaluating the proposed models.
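The core idea of mapping by demonstration — learning a probabilistic joint model over paired gesture and sound features, then inferring sound parameters from new gestures — can be sketched in a minimal form. The sketch below is not the paper's hierarchical HMM; it uses a single joint Gaussian over concatenated gesture/sound frames and predicts sound via the conditional mean, purely for illustration. All function names and the toy data are assumptions.

```python
import numpy as np

def fit_joint_gaussian(gesture, sound):
    """Learn a joint Gaussian over paired demonstration frames.

    gesture: (N, Dg) gesture feature frames; sound: (N, Ds) sound parameters.
    """
    joint = np.hstack([gesture, sound])        # (N, Dg + Ds)
    mu = joint.mean(axis=0)
    sigma = np.cov(joint, rowvar=False)
    sigma += 1e-6 * np.eye(sigma.shape[0])     # regularize for invertibility
    return mu, sigma

def predict_sound(mu, sigma, gesture_frame, dg):
    """Map a new gesture frame to sound parameters via E[sound | gesture]."""
    mu_g, mu_s = mu[:dg], mu[dg:]
    s_gg = sigma[:dg, :dg]                     # gesture-gesture covariance
    s_sg = sigma[dg:, :dg]                     # sound-gesture cross-covariance
    return mu_s + s_sg @ np.linalg.solve(s_gg, gesture_frame - mu_g)

# Toy demonstration: sound parameters depend (almost) linearly on the gesture.
rng = np.random.default_rng(0)
g = rng.normal(size=(500, 2))
s = g @ np.array([[1.0, 0.5], [-0.3, 2.0]]) + 0.01 * rng.normal(size=(500, 2))

mu, sigma = fit_joint_gaussian(g, s)
pred = predict_sound(mu, sigma, g[0], dg=2)    # close to s[0]
```

In the paper's setting, this joint model is replaced by a multimodal HMM so that temporal structure and hierarchy over gesture segments are captured; the conditional-inference step (predicting sound features given observed gesture) remains the same in spirit.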