Language translation is slow and expensive, so various forms of machine assistance have been devised. Automatic machine translation systems process text quickly and cheaply, but with quality far below that of skilled human translators. To bridge this quality gap, the translation industry has investigated post-editing, or the manual correction of machine output. We present the first rigorous, controlled analysis of post-editing and find that post-editing leads to reduced time and, surprisingly, improved quality for three diverse language pairs (English to Arabic, French, and German). Our statistical models and visualizations of experimental data indicate that some simple predictors (like source text part of speech counts) predict translation time, and that post-editing results in very different interaction patterns. From these results we distill implications for the design of new language translation interfaces.
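To illustrate the kind of statistical modeling the abstract alludes to, the sketch below fits a simple linear regression of post-editing time on a source-text part-of-speech count. The feature (noun count per sentence) and the timing values are invented for the example, and the closed-form OLS fit is a minimal stand-in for the richer models used in the study.

```python
# Hypothetical sketch: predicting post-editing time from a
# source-text part-of-speech count with simple linear regression.
# All data values here are made up for illustration.

def fit_ols(xs, ys):
    """Closed-form simple linear regression: y ~ a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Toy data: nouns per source sentence vs. seconds spent post-editing.
noun_counts = [2, 4, 5, 7, 9]
edit_seconds = [30, 48, 55, 70, 92]

intercept, slope = fit_ols(noun_counts, edit_seconds)
# Predicted post-editing time for a sentence with 6 nouns.
predicted = intercept + slope * 6
```

A positive slope here would mirror the paper's finding that simple source-side features carry signal about translation effort; in practice one would fit a multivariate model over many such counts.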