We introduce a universal architecture for a statistically based HUman GEsturing (HUGE) system that produces and uses statistical models of facial gestures driven by any kind of inducement. As inducement we consider any signal that occurs in parallel with the production of gestures in human behaviour and that may be statistically correlated with their occurrence, e.g. the spoken text, the speech audio signal, or biosignals. In the training phase, this correlation is used to build a statistical model of gestures from a corpus consisting of gesture sequences and the corresponding inducement data sequences. In the runtime phase, raw, previously unseen inducement data triggers (induces) the agent's gestures in real time based on the previously constructed statistical model. We present the general architecture and the implementation issues of our system, and further clarify both through two case studies. We believe this universal architecture is useful for experimenting with various potential inducement signals and their features, and for exploring the correlation of such signals or features with gesturing behaviour.
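To make the two phases concrete, the following minimal Python sketch shows one possible realization of such a pipeline: a conditional frequency model is estimated from aligned inducement/gesture sequences in the training phase, then sampled at runtime to trigger gestures from previously unseen inducement features. All names (`GestureModel`, `train`, `induce`) and the discretization of the inducement signal into symbolic features are illustrative assumptions, not the system's actual implementation.

```python
import random
from collections import defaultdict


class GestureModel:
    """Conditional frequency table approximating P(gesture | inducement feature).

    A stand-in for the statistical model described in the abstract; the
    real system is agnostic to the concrete model type.
    """

    def __init__(self):
        # feature -> {gesture -> count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, inducement_seq, gesture_seq):
        """Training phase: accumulate co-occurrence counts from a corpus of
        aligned inducement and gesture sequences."""
        for feature, gesture in zip(inducement_seq, gesture_seq):
            self.counts[feature][gesture] += 1

    def induce(self, feature):
        """Runtime phase: sample a gesture for a previously unseen
        inducement feature according to the learned distribution."""
        dist = self.counts.get(feature)
        if not dist:
            return "none"  # feature never seen in training: trigger nothing
        gestures, weights = zip(*dist.items())
        return random.choices(gestures, weights=weights, k=1)[0]


if __name__ == "__main__":
    # Hypothetical symbolic features derived from a speech signal.
    model = GestureModel()
    model.train(
        ["pitch_rise", "pause", "pitch_rise", "stressed_word"],
        ["eyebrow_raise", "blink", "head_nod", "eyebrow_raise"],
    )
    print(model.induce("pitch_rise"))  # e.g. "eyebrow_raise" or "head_nod"
```

In this sketch the inducement features are already discrete symbols; in practice a feature-extraction step (e.g. prosodic analysis of the audio) would precede the model, and the sampled gesture labels would be passed on to the agent's animation player.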