Automatic essay grading using text categorization techniques
Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval
Text Categorization with Support Vector Machines: Learning with Many Relevant Features
ECML '98 Proceedings of the 10th European Conference on Machine Learning
BLEU: a method for automatic evaluation of machine translation
ACL '02 Proceedings of the 40th Annual Meeting on Association for Computational Linguistics
Chunking with support vector machines
NAACL '01 Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies
Automatic error detection in the Japanese learners' English spoken data
ACL '03 Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 2
Although pursuing accuracy is important in language learning and teaching, knowing which types of errors interfere with communication and which do not would be more beneficial for efficiently enhancing communicative competence. Language learners could be greatly helped by a system that detects errors in learner language and automatically measures their effect on intelligibility. In this paper, we report our attempt, based on machine learning, to measure the intelligibility of learner language. In the learning process, the system uses as key features the BLEU and NIST scores between the learners’ original sentences and their back translations (or corrected sentences), the log-probability of the parse, sentence length, and error types (assigned manually or automatically). We found that the system can distinguish intelligible sentences from the others (unnatural and unintelligible) fairly successfully, but still has considerable difficulty distinguishing all three levels of intelligibility.
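To make the feature set concrete, the overlap score between a learner sentence and its corrected version can be sketched in pure Python. This is a rough illustration, not the authors' implementation: `sentence_bleu` here is a simplified sentence-level BLEU (clipped n-gram precisions with +1 smoothing and a brevity penalty), and the function and feature names are hypothetical.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions (with +1 smoothing) times a brevity penalty."""
    cand = candidate.split()
    ref = reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(ngrams(cand, n))
        ref_ngrams = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        # +1 smoothing so one empty n-gram order does not zero the score.
        log_precisions.append(math.log((overlap + 1) / (total + 1)))
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

def intelligibility_features(original, corrected):
    """Feature vector in the spirit of the paper: a BLEU-style overlap
    score against the corrected sentence, plus sentence-length features.
    The parse log-probability and error-type features would come from a
    parser and an error tagger, which are omitted here."""
    orig_len = len(original.split())
    return {
        "bleu": sentence_bleu(original, corrected),
        "length": orig_len,
        "length_diff": abs(orig_len - len(corrected.split())),
    }
```

The intuition is that a learner sentence needing only small corrections keeps a high n-gram overlap with its corrected form, while an unintelligible one requires heavy rewriting and scores low; a classifier is then trained on these scores rather than thresholding them directly.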