This paper introduces Gesture Engine, an animation system that synthesizes human gesturing behavior from augmented conversation transcripts, drawing on a database of high-level gesture definitions. It introduces an abstract scripting language for specifying hand-arm gestures that incorporates knowledge from sign-language research, psycholinguistics, and traditional keyframe animation. A new planning algorithm instantiates gestures and adjusts them to the communicative context and to temporal constraints obtained from a speech synthesizer. The resulting motion drives an MPEG-4 compliant skeleton via Body Animation Parameters.
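To make the planning step concrete, the following is a minimal sketch of how a planner might align a gesture's stroke phase with a stressed-syllable timestamp reported by a speech synthesizer, compressing the preparation phase when the previous gesture ends too late. All names (`GestureTemplate`, `plan_gesture`) and the phase model are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass

@dataclass
class GestureTemplate:
    """Hypothetical high-level gesture definition with default phase durations (seconds)."""
    name: str
    prep: float     # preparation: hand moves into position
    stroke: float   # stroke: the meaning-bearing movement
    retract: float  # retraction: return to rest

def plan_gesture(template: GestureTemplate, stress_time: float, prev_end: float) -> dict:
    """Instantiate a gesture so its stroke onset coincides with the stressed
    syllable at `stress_time`; if the preceding gesture ends after the ideal
    preparation onset, the preparation phase is compressed to fit."""
    prep_start = stress_time - template.prep
    if prep_start < prev_end:
        prep_start = prev_end  # compress preparation rather than overlap gestures
    stroke_end = stress_time + template.stroke
    return {
        "prep_start": prep_start,
        "stroke_start": stress_time,
        "stroke_end": stroke_end,
        "end": stroke_end + template.retract,
    }

# Example: a beat gesture whose full preparation would overlap the previous gesture.
beat = GestureTemplate("beat", prep=0.3, stroke=0.2, retract=0.4)
plan = plan_gesture(beat, stress_time=1.0, prev_end=0.9)
```

In this sketch the stroke timing is treated as the hard constraint, consistent with the abstract's description of adjusting gestures to temporal constraints from the synthesizer; a real planner would also handle skipped gestures and co-articulation between successive strokes.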