Building an automated English sentence evaluation system for students learning English as a second language

  • Authors:
  • Kong Joo Lee; Yong-Seok Choi; Jee Eun Kim

  • Affiliations:
  • Department of Information & Communication Engineering, ChungNam National University, 220 Gung-Dong, Yuseong-Gu, Daejeon 305-764, Republic of Korea
  • Korea Research Institute of Standards and Science, 1 Doryong-Dong, Yuseong-Gu, Daejeon 305-340, Republic of Korea
  • Department of English Linguistics, Hankuk University of Foreign Studies, 270 Imun-Dong, Dongdaemun-Gu, Seoul 130-791, Republic of Korea

  • Venue:
  • Computer Speech and Language
  • Year:
  • 2011

Abstract

This paper presents an automated scoring system that grades students' English writing tests. The system provides a score and diagnostic feedback to students without human intervention. The target users are Korean junior high school students learning English as a second language. The system takes a single English sentence as its input. Handling a single sentence as input makes it easier to compare the input with the answers given by human teachers and to provide detailed feedback to the students. The system was developed and tested with real test data collected from English tests given to third-grade junior high school students. Scoring involves two steps. The first step analyzes the input sentence to detect possible errors, such as spelling errors and syntactic errors. The second step compares the input sentence with the given answers and identifies the differences as errors. To evaluate the performance of the system, the output produced by the system is compared with the results provided by human raters. The score agreement between a human rater and the system is quite close to that between two human raters.
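
The sketch below illustrates, in very reduced form, the two-step flow the abstract describes: first flag possible errors in the student sentence, then compare it with teacher-provided answers and report the differences as feedback. All names (detect_errors, score_against_answers), the toy dictionary, and the similarity-based score are hypothetical stand-ins; the paper's actual spelling and syntactic analyzers and its scoring scheme are not reproduced here.

```python
# A minimal sketch of the two-step scoring pipeline described in the abstract.
# Function names, the toy vocabulary, and the 0-100 similarity score are
# illustrative assumptions, not the system's actual components.
import difflib

VOCAB = {"she", "goes", "to", "school", "every", "day", "go"}  # toy dictionary

def detect_errors(sentence: str) -> list[str]:
    """Step 1: flag possible spelling errors by dictionary lookup.
    (The real system also performs syntactic analysis.)"""
    errors = []
    for token in sentence.lower().split():
        if token.strip(".,!?") not in VOCAB:
            errors.append(f"possible spelling error: '{token}'")
    return errors

def score_against_answers(sentence: str, answers: list[str]) -> tuple[float, list[str]]:
    """Step 2: compare the input with teacher-provided answers and
    report the token-level differences from the closest answer as errors."""
    best_ratio, best_diff = 0.0, []
    stu_tokens = sentence.lower().split()
    for answer in answers:
        ans_tokens = answer.lower().split()
        sm = difflib.SequenceMatcher(None, stu_tokens, ans_tokens)
        if sm.ratio() > best_ratio:
            best_ratio = sm.ratio()
            best_diff = [
                f"{op}: '{' '.join(stu_tokens[i1:i2])}' vs '{' '.join(ans_tokens[j1:j2])}'"
                for op, i1, i2, j1, j2 in sm.get_opcodes()
                if op != "equal"
            ]
    return round(best_ratio * 100, 1), best_diff

if __name__ == "__main__":
    student = "She go to school evry day."
    answers = ["She goes to school every day."]
    feedback = detect_errors(student)
    score, diffs = score_against_answers(student, answers)
    print(f"score: {score}")
    for line in feedback + diffs:
        print(line)
```

Run on the sample input, the sketch prints a similarity-based score together with the flagged token ("evry") and the word-level differences from the closest answer, mirroring the kind of score-plus-diagnostic-feedback output the abstract describes.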