We have developed an automated Japanese essay scoring system called Jess. The system requires expert writing rather than expert raters to build its evaluation model: for each prompt, an essay is evaluated by detecting statistical outliers in predetermined essay features relative to a large body of professional writing. Three features are examined: (1) rhetoric, i.e. syntactic variety, or the use of various structures in the arrangement of phrases, clauses, and sentences; (2) organization, i.e. characteristics associated with the orderly presentation of ideas, such as rhetorical features and linguistic cues; and (3) content, i.e. vocabulary related to the topic, such as relevant information and precise or specialized vocabulary. The final score is calculated by deducting points from a perfect score, with the deduction model learned from editorials and columns published in the Mainichi Daily News. The system also provides a diagnosis of the essay.
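The deduction-based scoring described above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual Jess implementation: the function names, the z-score outlier test, the threshold, and the deduction weight are all assumptions made for the sake of a runnable example.

```python
# Hypothetical sketch of outlier-based deduction scoring: each essay
# feature (e.g. rhetoric, organization, content) is compared against its
# distribution in a corpus of professional writing; features falling
# outside the typical range incur deductions from a perfect score.
# All names, thresholds, and weights here are illustrative assumptions.
from statistics import mean, stdev


def score_essay(essay_features, corpus_features, perfect=10.0,
                z_threshold=2.0, weight=1.0):
    """Deduct points for each feature that is a statistical outlier
    relative to the professional-writing corpus."""
    score = perfect
    for name, value in essay_features.items():
        sample = corpus_features[name]
        mu, sigma = mean(sample), stdev(sample)
        z = abs(value - mu) / sigma if sigma > 0 else 0.0
        if z > z_threshold:                      # feature is an outlier
            score -= weight * (z - z_threshold)  # deduct in proportion
    return max(score, 0.0)                       # score cannot go negative


# Toy corpus statistics for one prompt (made-up numbers).
corpus = {
    "rhetoric": [0.80, 0.70, 0.90, 0.85, 0.75],
    "organization": [0.60, 0.65, 0.70, 0.68, 0.62],
    "content": [0.50, 0.55, 0.45, 0.52, 0.48],
}
essay = {"rhetoric": 0.82, "organization": 0.55, "content": 0.50}
print(score_essay(essay, corpus))
```

In this sketch, a feature within two standard deviations of the professional-corpus mean costs nothing, and deductions grow linearly with the degree of deviation beyond that threshold.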