Procedure for quantitatively comparing the syntactic coverage of English grammars
HLT '91 Proceedings of the workshop on Speech and Natural Language
Benchmark tests for the DARPA Spoken Language Program
HLT '93 Proceedings of the workshop on Human Language Technology
Multi-site data collection and evaluation in spoken language understanding
HLT '93 Proceedings of the workshop on Human Language Technology
Survey of the Message Understanding Conferences
HLT '93 Proceedings of the workshop on Human Language Technology
Towards better NLP system evaluation
HLT '94 Proceedings of the workshop on Human Language Technology
Automatic evaluation of computer generated text: a progress report on the TextEval project
HLT '94 Proceedings of the workshop on Human Language Technology
The Penn Treebank: annotating predicate argument structure
HLT '94 Proceedings of the workshop on Human Language Technology
Whither written language evaluation?
HLT '94 Proceedings of the workshop on Human Language Technology
Semantic evaluation for spoken-language systems
HLT '94 Proceedings of the workshop on Human Language Technology
Evaluation in the ARPA machine translation program: 1993 methodology
HLT '94 Proceedings of the workshop on Human Language Technology
Overview of the second text retrieval conference (TREC-2)
HLT '94 Proceedings of the workshop on Human Language Technology
This session focused on experimental and planned approaches to human language technology evaluation. It included an overview and five papers: two on experimental evaluation approaches [1, 2] and three on ongoing work in new annotation and evaluation approaches for human language technology [3, 4, 5]. The session closed with fifteen minutes of general discussion.