Face-to-face interaction between people is generally effortless and effective. We exchange glances, take turns speaking, and make facial and manual gestures to achieve the goals of the dialogue. This paper describes an action composition and selection architecture for synthetic characters capable of full-duplex, real-time face-to-face interaction with a human. This architecture is part of a computational model of psychosocial dialogue skills, called Ymir, that bridges between multimodal perception and multimodal action generation. To test the architecture, a prototype humanoid named Gandalf has been implemented; Gandalf commands a graphical model of the solar system and can engage in task-directed dialogue with people using speech and manual and facial gestures. Gandalf has been tested in interaction with users and has been shown capable of fluid turn-taking and multimodal dialogue. The primary focus of this paper is on the action selection mechanisms and the low-level composition of motor commands. An overview is also given of the Ymir model and Gandalf's graphical representation.
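To make the abstract's pipeline concrete, here is a minimal, hypothetical sketch of the general pattern it describes: perception updates a shared context, candidate actions score their relevance against that context, the winner is selected, and it decomposes into low-level motor commands. All names and scoring rules below are illustrative assumptions, not Ymir's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

# Illustrative sketch only: the Action fields, scoring functions, and
# motor-command names are invented for this example, not taken from Ymir.

@dataclass
class Action:
    name: str
    relevance: Callable[[dict], float]   # scores this action against the perceptual context
    motor_commands: List[str]            # low-level commands the action composes into

def select_action(actions: List[Action], context: dict) -> Action:
    """Pick the most relevant candidate action for the current context."""
    return max(actions, key=lambda a: a.relevance(context))

# Candidate dialogue behaviors with simple context-dependent relevance rules.
actions = [
    Action("take_turn", lambda c: 1.0 if c.get("user_paused") else 0.0,
           ["gaze_at_user", "open_mouth", "start_speech"]),
    Action("back_channel", lambda c: 0.6 if c.get("user_speaking") else 0.0,
           ["nod", "brief_gaze"]),
    Action("idle_gaze", lambda c: 0.1, ["glance_away"]),
]

# Perception has just detected that the user stopped speaking.
context = {"user_paused": True, "user_speaking": False}
chosen = select_action(actions, context)
print(chosen.name)             # -> take_turn
print(chosen.motor_commands)   # -> ['gaze_at_user', 'open_mouth', 'start_speech']
```

In a full-duplex system such as the one the paper describes, a loop like this would run continuously while perception keeps streaming in, so the character can react (e.g. nod, shift gaze) even while the user is still speaking.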