Visual and linguistic information in gesture classification
Proceedings of the 6th international conference on Multimodal interfaces
This paper proposes a method for assigning gestures to text based on lexical and syntactic information. An empirical study first identified lexical and syntactic features strongly correlated with gesture occurrence, and suggested that global syntactic structure is more useful than local syntactic cues for judging whether a gesture occurs. Based on these empirical results, we implemented a system that converts input text into an animated agent that gestures and speaks in synchrony.
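The kind of mapping described above can be sketched as a small rule-based classifier. This is a minimal illustration, not the paper's actual model: the cue word lists, the `Phrase` representation, the syntactic-role labels, and the gesture categories (deictic, iconic, beat) are all assumptions introduced here for the example.

```python
# Hypothetical sketch: assigning a gesture label to each phrase of a
# sentence from lexical cues and a coarse syntactic role. The word lists
# and rules below are illustrative assumptions, not the paper's method.

from dataclasses import dataclass
from typing import List

@dataclass
class Phrase:
    words: List[str]
    syntactic_role: str  # e.g. "NP-subject", "VP-head", "PP-modifier"

# Illustrative lexical cues often associated with particular gesture types.
DEICTIC_WORDS = {"this", "that", "here", "there"}
SPATIAL_WORDS = {"up", "down", "left", "right", "across", "around"}

def assign_gesture(phrase: Phrase) -> str:
    """Return a gesture label for a phrase, or 'none'."""
    words = {w.lower() for w in phrase.words}
    if words & DEICTIC_WORDS:
        return "deictic"            # lexical cue: pointing words
    if words & SPATIAL_WORDS:
        return "iconic"             # lexical cue: spatial vocabulary
    if phrase.syntactic_role == "NP-subject":
        return "beat"               # syntactic cue: subject phrases
    return "none"

sentence = [
    Phrase(["The", "robot"], "NP-subject"),
    Phrase(["moves"], "VP-head"),
    Phrase(["across", "the", "room"], "PP-modifier"),
]
labels = [assign_gesture(p) for p in sentence]
print(labels)  # ['beat', 'none', 'iconic']
```

A real system in this style would replace the hand-written cue sets with features learned from an annotated corpus and would consult the full parse tree rather than a single role label per phrase; the sketch only shows the shape of the text-to-gesture mapping.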