There has been a long-standing methodology for evaluating work in speech recognition (SR), but until recently no community-wide methodology existed for researchers in either natural language (NL) or speech understanding (SU) to evaluate the systems they developed. Recently, the groups involved in the DARPA Spoken Language Systems (SLS) program have made considerable progress toward agreeing on a methodology for the comparative evaluation of SLS systems, and that methodology is now being used in practice for the first time. This paper gives an overview of the process that was followed in creating a meaningful evaluation mechanism, describes the current mechanism, and presents some directions for future development.