We have developed an automated Japanese essay scoring system named jess. The system evaluates an essay on three sets of features: (1) rhetoric: ease of reading, diversity of vocabulary, percentage of big words (long, difficult words), and percentage of passive sentences; (2) organization: characteristics associated with the orderly presentation of ideas, such as rhetorical features and linguistic cues; (3) content: vocabulary related to the topic, such as relevant information and precise or specialized vocabulary. The final score is calculated by deducting points from a perfect score; the deduction criteria are acquired by a learning process using editorials and columns from the Mainichi Daily News newspaper. A diagnosis of the essay is also given. Our system does not need any essays graded by human experts.
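The deduction-based scheme described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names, the thresholds, and the English-text heuristics are hypothetical and stand in for the learned norms jess derives from newspaper text; only the overall idea (measure rhetoric features, deduct from a perfect score) comes from the abstract.

```python
# Hypothetical sketch of deduction-based essay scoring in the spirit of jess.
# Thresholds and feature proxies are illustrative assumptions, not the
# published system (which analyzes Japanese text against learned norms).

def rhetoric_deduction(text: str) -> float:
    """Deduct points when rhetoric features fall outside assumed norms."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    # Ease-of-reading proxy: average sentence length in words.
    avg_len = len(words) / max(len(sentences), 1)
    # Diversity of vocabulary: type-token ratio.
    diversity = len({w.lower() for w in words}) / max(len(words), 1)
    # "Big word" ratio: crude length-based proxy for difficult words.
    big_words = sum(len(w) > 7 for w in words) / max(len(words), 1)

    deduction = 0.0
    if avg_len > 25:       # assumed norm: sentences are overly long
        deduction += 1.0
    if diversity < 0.4:    # assumed norm: vocabulary is repetitive
        deduction += 1.0
    if big_words > 0.3:    # assumed norm: too many difficult words
        deduction += 0.5
    return deduction

def score_essay(text: str, perfect: float = 10.0) -> float:
    """Start from a perfect score and subtract deductions (rhetoric only here)."""
    return max(perfect - rhetoric_deduction(text), 0.0)
```

A full system would add analogous deduction terms for organization and content, with the norms for each feature estimated from a reference corpus rather than hand-set as above.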