In Information Retrieval (IR), similarity scores between a query and a set of documents are computed, and the relevant documents are ranked by those scores. IR systems often treat queries as short documents containing only a few words when calculating document similarity. In Computer Aided Assessment (CAA) of narrative answers, when model answers are available, the similarity score between a student's answer and the corresponding model answer can serve as a quality indicator. With this analogy in mind, we applied basic IR techniques to automatic assessment and discuss our findings. In this paper, we describe the development of a web-based automatic assessment system that incorporates five different text analysis techniques for the automatic assessment of narrative answers within a vector space framework. Experimental results based on 30 narrative questions with 30 model answers and 300 student answers (from 10 students) show that the correlation between automatic and human assessment is higher when advanced text processing techniques such as Keyphrase Extraction and Synonym Resolution are applied.
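The core idea above, scoring a student's answer by its vector-space similarity to the model answer, can be sketched as follows. This is a minimal illustration using raw term-frequency vectors, naive whitespace tokenization, and cosine similarity; the paper's actual five text analysis techniques (e.g., keyphrase extraction, synonym resolution) and its preprocessing pipeline are not reproduced here, and all function names are illustrative.

```python
import math
from collections import Counter

def tokenize(text):
    # Naive lowercase/whitespace tokenization (an assumption; the
    # paper's actual preprocessing is not detailed in the abstract).
    return text.lower().split()

def cosine_similarity(a, b):
    # Cosine of the angle between two term-frequency vectors,
    # represented as Counters mapping term -> frequency.
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def score_answer(model_answer, student_answer):
    # Similarity of a student's answer to the model answer,
    # used as a quality indicator in [0, 1].
    return cosine_similarity(Counter(tokenize(model_answer)),
                             Counter(tokenize(student_answer)))

model = "photosynthesis converts light energy into chemical energy"
good = "light energy is converted into chemical energy by photosynthesis"
weak = "plants are green"
print(score_answer(model, good) > score_answer(model, weak))  # → True
```

Ranking answers by this score mirrors how an IR system ranks documents against a query, with the model answer playing the role of the query. More advanced variants would replace raw term frequencies with TF-IDF weights and expand terms via synonym resolution before vectorization.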