An empirical study of the influence of user tailoring on evaluative argument effectiveness
IJCAI'01: Proceedings of the 17th International Joint Conference on Artificial Intelligence - Volume 2
We present an evaluation framework in which the effectiveness of evaluative arguments can be measured with real users. The framework is based on the task-efficacy evaluation method: an evaluative argument is presented in the context of a decision task, and measures related to its effectiveness are collected. Within this framework, we are currently running a formal experiment to test whether argument effectiveness can be increased by tailoring the argument to the user and by varying the degree of argument conciseness.
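
To make the task-efficacy idea concrete, the following is a minimal Python sketch of one way such an experiment could be instrumented. Everything here is an illustrative assumption, not the authors' actual design: the trial fields, the rating scale, and the effectiveness measure (the shift in the user's rating of the advocated option after reading the argument) are all hypothetical.

    # Hypothetical sketch of a task-efficacy trial. Field names, the
    # rating scale, and the effectiveness measure are illustrative
    # assumptions, not the framework's actual instrumentation.
    from dataclasses import dataclass
    from statistics import mean


    @dataclass
    class Trial:
        """One user's session: the user's rating of the advocated option
        before and after reading the evaluative argument (e.g., 1-10)."""
        rating_before: float
        rating_after: float
        tailored: bool   # was the argument tailored to this user?
        concise: bool    # concise vs. verbose rendering of the argument


    def effectiveness(trial: Trial) -> float:
        """A simple proxy for effectiveness: how much the argument
        shifted the user's evaluation of the option it advocates."""
        return trial.rating_after - trial.rating_before


    def mean_effect(trials: list[Trial], *, tailored: bool) -> float:
        """Average effectiveness within one experimental condition."""
        return mean(effectiveness(t) for t in trials
                    if t.tailored == tailored)


    if __name__ == "__main__":
        trials = [
            Trial(5.0, 8.0, tailored=True, concise=True),
            Trial(6.0, 6.5, tailored=False, concise=True),
            Trial(4.0, 7.5, tailored=True, concise=False),
            Trial(5.5, 5.0, tailored=False, concise=False),
        ]
        print("tailored:  ", mean_effect(trials, tailored=True))
        print("untailored:", mean_effect(trials, tailored=False))

Comparing the per-condition means (tailored vs. untailored, concise vs. verbose) is one plausible way to operationalize the hypothesis that tailoring and conciseness affect argument effectiveness; the actual experiment may of course use different measures and analyses.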