COLING '92 Proceedings of the 14th conference on Computational linguistics - Volume 1
In this paper, an objective, quantitative quality measure is proposed for evaluating the performance of machine translation systems. The proposed method compares the raw translation output of an MT system with the final version revised for customers, and computes the editing effort required to convert the raw translation into the final version. In contrast to other proposals, the evaluation process can be carried out quickly and automatically, so it provides rapid feedback on any system change: a system designer can quickly identify the strengths and weaknesses introduced by a particular modification. The application of such a measure to improving system performance on-line, in a parameterized, feedback-controlled system, will also be demonstrated. Furthermore, because the revised version is used directly as the reference, the measure reflects the real quality gap between system performance and customer expectation. A system designer can thus concentrate on practically important issues rather than on theoretically interesting ones.
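The core of such a measure is the minimum number of edit operations separating the raw output from the revised version, in the spirit of the string-to-string correction problem. A minimal sketch (the word-level granularity and the example sentences are illustrative assumptions, not the paper's exact formulation):

```python
def edit_distance(raw, revised):
    """Minimum number of word insertions, deletions, and substitutions
    needed to turn `raw` into `revised` (Wagner-Fischer dynamic
    programming). Used here as a proxy for post-editing effort."""
    m, n = len(raw), len(revised)
    # dp[i][j] = cost of converting raw[:i] into revised[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all remaining raw words
    for j in range(n + 1):
        dp[0][j] = j          # insert all remaining revised words
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if raw[i - 1] == revised[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # match/substitution
    return dp[m][n]

# Hypothetical example: raw MT output vs. the customer-revised version.
raw = "the system translate sentence quickly".split()
revised = "the system translates the sentence quickly".split()
effort = edit_distance(raw, revised)  # 2: one substitution, one insertion
```

Normalizing this count by the length of the revised reference would yield a score comparable across sentences, which is what makes the measure usable as a feedback signal for on-line tuning of system parameters.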