Graph-based generation of referring expressions. Computational Linguistics.
Intrinsic vs. extrinsic evaluation measures for referring expression generation. HLT-Short '08: Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers.
Trainable speaker-based referring expression generation. CoNLL '08: Proceedings of the Twelfth Conference on Computational Natural Language Learning.
Evaluating algorithms for the generation of referring expressions using a balanced corpus. ENLG '07: Proceedings of the Eleventh European Workshop on Natural Language Generation.
The TUNA-REG Challenge 2009: overview and evaluation results. ENLG '09: Proceedings of the 12th European Workshop on Natural Language Generation.
Building a semantically transparent corpus for the generation of referring expressions. INLG '06: Proceedings of the Fourth International Natural Language Generation Conference.
The TUNA challenge 2008: overview and evaluation results. INLG '08: Proceedings of the Fifth International Natural Language Generation Conference.
NIL-UCM: most-frequent-value-first attribute selection and best-scoring-choice realization. INLG '08: Proceedings of the Fifth International Natural Language Generation Conference.
GRAPH: the costs of redundancy in referring expressions. INLG '08: Proceedings of the Fifth International Natural Language Generation Conference.
Cross-linguistic attribute selection for REG: comparing Dutch and English. INLG '10: Proceedings of the 6th International Natural Language Generation Conference.
Computational generation of referring expressions: A survey. Computational Linguistics.
Learning preferences for referring expression generation: effects of domain, language and algorithm. INLG '12: Proceedings of the Seventh International Natural Language Generation Conference.
In this paper we investigate how much data is required to train an algorithm for attribute selection, a subtask of Referring Expression Generation (REG). To enable comparison between training sets of different sizes, we developed a systematic training method. The results show that, depending on the complexity of the domain, training on as few as 10 to 20 items may already yield good performance.
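The kind of learning-curve experiment the abstract describes can be sketched as follows. This is a hypothetical illustration, not the authors' actual method: the toy corpus, the frequency-based attribute selector, and the use of the Dice coefficient as the overlap measure are all assumptions made for the example. The idea is to train on increasingly large slices of the data and measure how well the selected attribute sets match human-produced ones.

```python
# Hypothetical sketch: train an attribute selector on growing subsets of a
# corpus and score its output against human attribute sets. The corpus and
# the frequency-based selection strategy are illustrative assumptions only.
from collections import Counter


def train_frequency_model(training_items):
    """Count how often each attribute occurs in the training references."""
    counts = Counter()
    for attrs in training_items:
        counts.update(attrs)
    return counts


def select_attributes(model, candidates, k=2):
    """Pick the k candidate attributes seen most often in training."""
    ranked = sorted(candidates, key=lambda a: -model[a])
    return set(ranked[:k])


def dice(a, b):
    """Dice coefficient between two attribute sets (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))


# Toy corpus: each item is the attribute set a human speaker used.
corpus = [
    {"colour", "type"}, {"colour", "type"}, {"size", "type"},
    {"colour", "type"}, {"type", "orientation"}, {"colour", "type"},
]
test = [{"colour", "type"}, {"size", "type"}]
candidates = ["colour", "size", "type", "orientation"]

for n in (2, 4, 6):  # increasing training-set sizes
    model = train_frequency_model(corpus[:n])
    scores = [dice(select_attributes(model, candidates), ref) for ref in test]
    print(n, round(sum(scores) / len(scores), 2))
```

Plotting the score against the training-set size gives the learning curve; the paper's finding would correspond to the curve flattening out after only 10 to 20 training items in simpler domains.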