After many successes in parsing, statistical approaches are now making headway in Natural Language Generation (NLG). These systems are aimed mainly at surface realization and promise the same advantages that make statistics valuable for parsing: robustness, wide coverage, and domain independence. A recent experiment empirically assessed the linguistic coverage of such a statistical surface realization component by regenerating transformed sentences from the Penn Treebank corpus. This article presents the results of a similar experiment that evaluates the coverage of a purely symbolic surface realizer. We describe the problems a symbolic approach faces on the same task, report the results of its evaluation, and contrast them with those of the statistical method, helping to quantify the level of coverage currently attained by NLG surface realizers.
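To make the evaluation setup concrete, the sketch below shows one simple way such a coverage comparison could be scored. It is purely illustrative, not the paper's actual evaluation code: it assumes each treebank sentence is paired with the realizer's output (or `None` when the realizer fails to produce anything), and reports coverage (the fraction of inputs realized) together with a BLEU-style clipped n-gram precision over the realized sentences. The function names (`ngram_precision`, `coverage_report`) and the toy data are hypothetical.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_precision(candidate, reference, n=2):
    """Clipped n-gram precision of a candidate sentence against one
    reference, in the spirit of BLEU-style n-gram co-occurrence scoring."""
    cand = Counter(ngrams(candidate.split(), n))
    total = sum(cand.values())
    if total == 0:
        return 0.0
    ref = Counter(ngrams(reference.split(), n))
    # Each candidate n-gram counts at most as often as it occurs in the reference.
    matched = sum(min(c, ref.get(g, 0)) for g, c in cand.items())
    return matched / total

def coverage_report(pairs):
    """pairs: list of (reference_sentence, realizer_output_or_None).
    Returns (coverage, mean n-gram precision over the realized subset)."""
    realized = [(ref, out) for ref, out in pairs if out is not None]
    coverage = len(realized) / len(pairs) if pairs else 0.0
    if not realized:
        return coverage, 0.0
    mean_prec = sum(ngram_precision(out, ref) for ref, out in realized) / len(realized)
    return coverage, mean_prec

# Toy example: one sentence realized exactly, one the realizer failed on.
pairs = [
    ("the cat sat on the mat", "the cat sat on the mat"),
    ("time flies like an arrow", None),
]
cov, prec = coverage_report(pairs)
print(cov, prec)  # 0.5 1.0
```

Coverage and output quality are deliberately reported separately here, since a realizer can trade one for the other (e.g. a symbolic system may refuse inputs a statistical system would realize imperfectly).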