Participating in explanatory dialogues: interpreting and responding to questions in context
Although there is an increasing shift towards evaluating Natural Language Generation (NLG) systems, many NLG-specific open issues still hinder effective comparative and quantitative evaluation in this field. The paper begins by describing a task-based, i.e., black-box, evaluation of a hypertext NLG system. We then examine the problem of glass-box, i.e., module-specific, evaluation in language generation, with a focus on evaluating machine learning methods for text planning.