The accuracy of machine translation affects how well people understand each other when communicating. Translation repair can improve the accuracy of translated sentences, but it is typically invoked only when a user suspects that their message has been mistranslated. Because people's judgments of translation accuracy are not always correct, overall accuracy suffers. To address this problem, we propose a method that indicates to users the translation accuracy of their messages. The method measures the accuracy of each translated sentence with an automatic evaluation method and presents the result through one of three indicators: a percentage, a five-point scale, or a three-point scale. We examined how well these indicators reduce inaccurate judgments and reached the following conclusions: (1) the indicators did not significantly affect users' inaccurate judgments; (2) the five-point scale received the highest user evaluation, followed by the percentage. However, the scores produced by automatic evaluation in this experiment were not always accurate, and we suspect that incorrect scores led to some of the inaccurate judgments. If the accuracy of the automatic evaluation method improves, we believe the indicators can reduce inaccurate judgments. Moreover, the percentage indicator can compensate for the shortcomings of the five-point scale; in other words, users may judge translation accuracy more easily by using a combination of these indicators.
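The conversion from an automatic-evaluation score to the three indicator formats could be sketched as follows. This is a minimal illustration, assuming the score lies in [0, 1] (as with BLEU-style metrics) and that the five-point and three-point scales use equal-width bins; the abstract does not specify the actual bin boundaries.

```python
def to_indicators(score: float) -> dict:
    """Map an automatic-evaluation score in [0, 1] to the three
    indicator formats: percentage, five-point scale, three-point scale.
    Equal-width bins are an assumption, not the paper's definition."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    percentage = round(score * 100)
    # Five-point scale: 1 (lowest) .. 5 (highest).
    five_point = min(int(score * 5) + 1, 5)
    # Three-point scale: 1 (low), 2 (medium), 3 (high).
    three_point = min(int(score * 3) + 1, 3)
    return {"percentage": percentage,
            "five_point": five_point,
            "three_point": three_point}
```

For example, a score of 0.5 would be shown as 50%, 3 on the five-point scale, and 2 on the three-point scale under these assumed bins.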