Measuring Intelligibility of Japanese Learner English

  • Authors:
  • Emi Izumi, Kiyotaka Uchimoto, Hitoshi Isahara

  • Affiliation (all authors):
  • Computational Linguistics Group, National Institute of Information and Communications Technology, Kyoto, Japan

  • Venue:
  • FinTAL'06 Proceedings of the 5th international conference on Advances in Natural Language Processing
  • Year:
  • 2006


Abstract

Although pursuing accuracy is important in language learning and teaching, knowing which types of errors interfere with communication and which do not would be more beneficial for efficiently enhancing communicative competence. Language learners could be greatly helped by a system that detects errors in learner language and automatically measures their effect on intelligibility. In this paper, we report our attempt, based on machine learning, to measure the intelligibility of learner language. In the learning process, the system refers to the BLEU and NIST scores between the learners' original sentences and their back-translations (or corrected sentences), the log-probability of the parse, sentence length, and error types (manually or automatically assigned) as key features. We found that the system can distinguish intelligible sentences from the others (unnatural and unintelligible) rather successfully, but it still has difficulty distinguishing among the three levels of intelligibility.
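The abstract describes features such as a BLEU-style similarity between a learner's original sentence and its corrected version, plus sentence length. A minimal sketch of that feature-extraction idea might look like the following; the simplified smoothed sentence-level BLEU, the function names, and the toy sentence pair are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (not the paper's system): similarity between a learner
# sentence and its correction, plus sentence length, as classifier features.
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU with add-one smoothing and brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        log_prec += math.log((overlap + 1) / (total + 1))  # add-one smoothing
    bp = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))  # brevity penalty
    return bp * math.exp(log_prec / max_n)

def features(original, corrected):
    # Two of the feature types mentioned in the abstract: a similarity score
    # between the original and its correction, and sentence length.
    return {"bleu": sentence_bleu(original, corrected),
            "length": len(original.split())}

f = features("I am agree with you", "I agree with you")
print(f["length"], round(f["bleu"], 3))
```

In the paper these scores are combined with the parse log-probability and error-type labels as inputs to a machine-learned classifier over the three intelligibility levels.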