Visual Prosody: Facial Movements Accompanying Speech
FGR '02 Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition
From brows to trust
Multimodal generation in the COMIC dialogue system
ACLdemo '05 Proceedings of the ACL 2005 on Interactive poster and demonstration sessions
Comparing rule-based and data-driven selection of facial displays
EmbodiedNLP '07 Proceedings of the Workshop on Embodied Language Processing
Building a semantically transparent corpus for the generation of referring expressions
INLG '06 Proceedings of the Fourth International Natural Language Generation Conference
Generating Embodied Descriptions Tailored to User Preferences
IVA '07 Proceedings of the 7th international conference on Intelligent Virtual Agents
INLG '08 Proceedings of the Fifth International Natural Language Generation Conference
We present an annotated corpus of conversational facial displays designed for use in generation. The corpus is based on a recording of a single speaker reading scripted output in the domain of the target generation system. Each sentence in the corpus is represented by its syntactic derivation tree, annotated with the full syntactic and pragmatic context as well as the eye and eyebrow displays and rigid head motion used by the speaker. The speaker's behaviour shows several contextual patterns, many of which agree with previous findings on conversational facial displays. The corpus data has been used in several studies exploring different strategies for selecting facial displays for a synthetic talking head.
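To make the corpus structure concrete, the following is a minimal sketch of how one annotated sentence might be represented in code. All names (`CorpusEntry`, `FacialDisplay`, the field names, and the example values) are illustrative assumptions, not the authors' actual annotation scheme; the sketch only mirrors the description above: a derivation tree paired with contextual features and time-aligned displays.

```python
from dataclasses import dataclass, field

@dataclass
class FacialDisplay:
    # A non-verbal behaviour aligned with a span of words in the sentence.
    # Hypothetical fields for illustration only.
    kind: str          # e.g. "eyebrow_raise", "squint", "head_nod"
    start_word: int    # index of the first word covered by the display
    end_word: int      # index of the last word covered by the display

@dataclass
class CorpusEntry:
    # One corpus sentence: derivation tree plus context and displays.
    sentence: str
    derivation_tree: str                 # serialised syntactic derivation
    pragmatic_context: dict              # e.g. {"information_status": "new"}
    displays: list = field(default_factory=list)

# Hypothetical example entry (the values are invented for illustration).
entry = CorpusEntry(
    sentence="The tiles also come in blue.",
    derivation_tree="(S (NP ...) (VP ...))",
    pragmatic_context={"information_status": "new"},
)
entry.displays.append(FacialDisplay("eyebrow_raise", 4, 5))
```

A selection strategy, whether rule-based or data-driven, would then map the syntactic and pragmatic annotations of an entry to a set of displays for the talking head to produce.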