User Modeling and User-Adapted Interaction
The non-verbal behaviour of an embodied conversational agent is normally based on recorded human behaviour. The mapping from human behaviour to agent behaviour has been implemented in two main ways: in some systems, human behaviour is analysed and rules for the agent are then derived from that analysis; in others, the recorded behaviour is used directly as a resource for decision-making, via data-driven techniques. In this paper, we implement both methods for selecting the conversational facial displays of an animated talking head and compare them in two user evaluations. In the first study, participants were asked for subjective preferences: they tended to prefer the output of the data-driven strategy, but the trend was not statistically significant. In the second study, the data-driven facial displays affected users' ability to perceive user-model tailoring in synthesised speech, while the rule-based displays had no effect.
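The contrast between the two selection strategies can be sketched as follows. This is a hypothetical illustration, not the paper's actual implementation: the context features, display labels, and corpus counts are all invented for the example. A rule-based selector always picks the single display a rule associates with a context (here, the most frequent one), while a data-driven selector samples from the full distribution observed in the recorded human behaviour.

```python
import random

# Hypothetical counts of facial displays observed with each discourse context
# in an annotated corpus of recorded human behaviour (illustrative values only).
CORPUS_COUNTS = {
    "positive-evaluation": {"eyebrow-raise": 7, "nod": 5, "neutral": 3},
    "negative-evaluation": {"frown": 6, "head-tilt": 4, "neutral": 5},
}

def rule_based_display(context):
    """Rule-based: deterministically choose the display a fixed rule maps
    to this context (here, the most frequent display in the corpus)."""
    counts = CORPUS_COUNTS[context]
    return max(counts, key=counts.get)

def data_driven_display(context, rng=random):
    """Data-driven: sample a display in proportion to its corpus frequency,
    reproducing the variability of the recorded human behaviour."""
    counts = CORPUS_COUNTS[context]
    displays = list(counts)
    weights = [counts[d] for d in displays]
    return rng.choices(displays, weights=weights, k=1)[0]

print(rule_based_display("positive-evaluation"))   # always "eyebrow-raise"
print(data_driven_display("negative-evaluation"))  # varies across runs
```

The design difference this highlights is that the rule-based output is identical on every run, whereas the data-driven output varies, which is one plausible reason the two strategies could affect listeners differently in evaluation.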