The paper addresses the issue of making Virtual Humans (VHs) unique and typical of some social or ethnic group by endowing them with style. First, a conceptual framework for defining style is discussed, identifying how style is manifested in speech and nonverbal communication. Then the GESTYLE language is introduced, which makes it possible to define the style of a VH in terms of Style Dictionaries that assign non-deterministic choices for expressing certain meanings through nonverbal signals and speech. Multiple sources of style can be declared, and conflicts between them as well as dynamic changes are handled. GESTYLE is a text markup language that generates speech and the accompanying facial expressions and hand gestures automatically, based on the declared style of the VH and meaning tags placed in the text. GESTYLE can be coupled with different low-level TTS and animation engines.
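The mechanism described above (Style Dictionaries mapping meaning tags to non-deterministic signal choices, with multiple style sources and conflict resolution) can be sketched in Python. This is a minimal illustration under assumed names; it is not the actual GESTYLE syntax or implementation, and the conflict policy shown (later sources override earlier ones) is just one plausible choice.

```python
import random

class StyleDictionary:
    """Hypothetical sketch of a GESTYLE-like Style Dictionary:
    each meaning tag maps to weighted alternative nonverbal signals."""

    def __init__(self, entries):
        # entries: meaning tag -> list of (signal, weight) alternatives
        self.entries = entries

    def merge(self, other):
        # Combine two style sources; on conflicting meaning tags the
        # later (e.g. more personal) source overrides the earlier one.
        merged = dict(self.entries)
        merged.update(other.entries)
        return StyleDictionary(merged)

    def realize(self, meaning, rng=random):
        # Non-deterministic, weighted choice among the alternatives
        # registered for this meaning tag.
        signals, weights = zip(*self.entries[meaning])
        return rng.choices(signals, weights=weights, k=1)[0]

# Two style sources: a cultural default and a personal overlay.
cultural = StyleDictionary({"greet": [("wave", 0.7), ("bow", 0.3)]})
personal = StyleDictionary({"greet": [("nod", 1.0)],
                            "emphasize": [("beat_gesture", 1.0)]})
style = cultural.merge(personal)

print(style.realize("greet"))      # personal source overrides: "nod"
print(style.realize("emphasize"))  # only one alternative: "beat_gesture"
```

In the real system the realized signals would be handed to the TTS and animation engines; here they are just printed. The override-on-merge policy stands in for GESTYLE's conflict handling, whose actual rules are not detailed in the abstract.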