Our goal is to create an ‘intelligent’ 3D agent able to send complex, ‘natural’ messages to users and, in the future, to converse with them. We examine the relationship between the agent's communicative intentions and the way these intentions are expressed through verbal and nonverbal messages. In this paper, we concentrate on the study and generation of coordinated linguistic and gaze communicative acts. In this view, we analyse gaze signals according to their functional meaning rather than their physical actions. We propose a formalism in which a communicative act is represented by two elements: a meaning (a set of goals and beliefs that the agent intends to transmit to the interlocutor) and a signal, the nonverbal expression of that meaning. We also outline a methodology for generating messages that coordinate verbal and nonverbal signals.
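The meaning/signal pairing described above can be sketched as a simple data structure. This is only an illustrative Python sketch of the formalism, not the authors' implementation; all class and field names (`Meaning`, `Signal`, `CommunicativeAct`, the example goals and modalities) are assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class Meaning:
    """The set of goals and beliefs the agent intends to transmit."""
    goals: list[str]
    beliefs: list[str]

@dataclass
class Signal:
    """A concrete expression of a meaning in some modality (e.g. speech, gaze)."""
    modality: str   # hypothetical modality label, e.g. "speech" or "gaze"
    action: str     # hypothetical surface form, e.g. an utterance or a gaze act

@dataclass
class CommunicativeAct:
    """Pairs a meaning with the coordinated signals that express it."""
    meaning: Meaning
    signals: list[Signal]

# Hypothetical example: the agent requests attention verbally
# while directing gaze toward the interlocutor.
act = CommunicativeAct(
    meaning=Meaning(goals=["request_attention"],
                    beliefs=["topic_is_important"]),
    signals=[Signal("speech", "Please listen to this."),
             Signal("gaze", "look_at_interlocutor")],
)
print(len(act.signals))  # 2
```

A generation pipeline along these lines would select a `Meaning` from the agent's goals and beliefs, then choose and synchronize one or more `Signal`s, which matches the paper's view of gaze as carrying functional meaning rather than being a mere physical action.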