Natural multimodal interaction with realistic virtual characters provides rich opportunities for entertainment and education. In this paper we present the current VIRTUALHUMAN demonstrator system, which provides a knowledge-based framework for creating interactive applications in a multi-user, multi-agent setting. The behavior of the virtual humans and objects in the 3D environment is controlled by interacting affective conversational dialogue engines. An elaborate model of affective behavior adds natural emotional reactions and enhances the presence of the virtual humans. Actions are defined in an XML-based markup language that supports the incremental specification of synchronized multimodal output. The system was successfully demonstrated at CeBIT 2006.
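To illustrate the idea of an XML-based action specification with synchronized multimodal output, the sketch below shows a hypothetical action document and a small parser that collects its synchronization constraints. The element names, attributes, and sync notation here are invented for illustration; they are not the actual VIRTUALHUMAN markup language.

```python
# Illustrative sketch only: a made-up XML action in the spirit of the
# paper's markup. Tag names, attributes, and the "channel:event" sync
# notation are assumptions, not the real VIRTUALHUMAN schema.
import xml.etree.ElementTree as ET

ACTION_XML = """
<action id="greet-visitor">
  <speech>Welcome to the exhibit!</speech>
  <gesture type="wave" sync="speech:start"/>
  <facial expression="smile" sync="speech:start"/>
  <gaze target="user" sync="speech:end"/>
</action>
"""

def sync_points(xml_text):
    """Map each behavior channel to the speech event it is aligned with."""
    root = ET.fromstring(xml_text)
    return {child.tag: child.get("sync")
            for child in root if child.get("sync") is not None}

print(sync_points(ACTION_XML))
```

A scheduler could consume such constraints incrementally, triggering the wave gesture and smile when speech output begins and shifting gaze to the user when it ends.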