Tailoring the linguistic content of automatically generated descriptions to the preferences of a target user has been shown to produce higher-quality output that can even influence user behaviour. It is also known that the non-verbal behaviour of an embodied agent can significantly affect users' responses to the content that agent presents. However, to date no one has examined the contribution of non-verbal behaviour to the effectiveness of user tailoring in automatically generated embodied output. We describe a series of experiments designed to address this question. We begin by introducing a multimodal dialogue system that generates descriptions and comparisons tailored to user preferences, and demonstrate that the user-preference tailoring is detectable by an overhearer when the output is presented as synthesised speech. We then present a multimodal corpus of annotated facial expressions used by a speaker to accompany the generated tailored descriptions, and verify that the speaker's most characteristic positive and negative expressions remain identifiable when resynthesised on an artificial talking head. Finally, we combine the corpus-derived facial displays with the tailored descriptions to test whether adding the non-verbal channel improves users' ability to detect the intended tailoring, comparing two strategies for selecting the displays: one based on a simple corpus-derived rule, and one making direct use of the full corpus data. Subjects who saw displays selected by the rule-based strategy performed no differently from subjects who saw only the linguistic content, while subjects who saw the data-driven displays were significantly worse at detecting the correctly tailored output.
We propose a possible explanation for this result, and offer recommendations for developers of future systems that use an embodied agent to present user-tailored content.