In this paper, we present a system that visualizes the expressive quality of a music performance using a virtual head. We provide a mapping through several parameter spaces: on the input side, we developed a mapping between values of acoustic cues and emotion as well as expressivity parameters; on the output side, we propose a mapping between these parameters and the behaviors of the virtual head. This mapping ensures coherence between the acoustic source and the animation of the virtual head. After presenting background information on human behavior expressivity, we introduce our model of expressivity. We explain how we derived the mapping between the acoustic and behavioral cues. Then we describe the implementation of a working system that controls the behavior of a human-like head, varying it according to the emotional and acoustic characteristics of the musical performance. Finally, we present the tests we conducted to validate the mapping between the emotive content of the music performance and the expressivity parameters.
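To make the two-stage pipeline concrete, the sketch below illustrates a mapping of the kind described: acoustic cues are first translated into expressivity parameters, which are then translated into head animation controls. The specific cue names, parameter names, and linear relations here are assumptions chosen for illustration; they are not the paper's actual mapping functions.

```python
from dataclasses import dataclass

@dataclass
class AcousticCues:
    tempo: float         # normalized 0..1, slow -> fast (hypothetical cue set)
    sound_level: float   # normalized 0..1, soft -> loud
    articulation: float  # normalized 0..1, legato -> staccato

@dataclass
class ExpressivityParams:
    temporal_extent: float  # speed of head movements
    power: float            # strength/acceleration of movements
    fluidity: float         # smoothness between consecutive movements

def acoustic_to_expressivity(c: AcousticCues) -> ExpressivityParams:
    """First stage: acoustic cues -> expressivity parameters.

    Linear relations are placeholders for whatever mapping the
    system actually learned or hand-tuned.
    """
    return ExpressivityParams(
        temporal_extent=c.tempo,        # faster tempo -> faster movements
        power=c.sound_level,            # louder playing -> stronger movements
        fluidity=1.0 - c.articulation,  # legato -> smoother transitions
    )

def expressivity_to_animation(e: ExpressivityParams) -> dict:
    """Second stage: expressivity parameters -> head animation controls."""
    return {
        "nod_speed": 0.5 + e.temporal_extent,    # scales movement duration
        "nod_amplitude": 0.2 + 0.8 * e.power,    # scales movement extent
        "interpolation_smoothness": e.fluidity,  # blends consecutive keyframes
    }

if __name__ == "__main__":
    # A fast, moderately loud, fairly legato passage.
    cues = AcousticCues(tempo=0.8, sound_level=0.6, articulation=0.3)
    print(expressivity_to_animation(acoustic_to_expressivity(cues)))
```

Composing the two stages, rather than mapping acoustic cues directly to animation, keeps the intermediate expressivity parameters interpretable and lets either side of the mapping be retuned independently, which is the coherence property the abstract emphasizes.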