Experiments in automatic assessment using basic information retrieval techniques

  • Authors:
  • Md Maruf Hasan

  • Affiliations:
  • School of Technology, Shinawatra University, Thailand

  • Venue:
  • KICSS'10: Proceedings of the 5th International Conference on Knowledge, Information, and Creativity Support Systems
  • Year:
  • 2010

Abstract

In Information Retrieval (IR), similarity scores between a query and a set of documents are calculated, and relevant documents are ranked by those scores. IR systems often treat queries as short documents containing only a few words when calculating document similarity scores. In Computer Aided Assessment (CAA) of narrative answers, when model answers are available, the similarity score between a student's answer and the respective model answer may be a good quality indicator. With this analogy in mind, we applied basic IR techniques to automatic assessment and discuss our findings. In this paper, we describe the development of a web-based automatic assessment system that incorporates five different text analysis techniques for automatic assessment of narrative answers within a vector space framework. The experimental results, based on 30 narrative questions with 30 model answers and 300 students' answers (from 10 students), show that the correlation between automatic and human assessment is higher when advanced text processing techniques such as Keyphrase Extraction and Synonym Resolution are applied.
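As a concrete illustration of the vector space approach the abstract describes, the sketch below scores a student's answer against a model answer using TF-IDF weighting and cosine similarity. The weighting scheme, the scikit-learn library, and the example texts are assumptions for illustration; the paper's five text analysis techniques (including keyphrase extraction and synonym resolution) are not reproduced here.

```python
# Minimal sketch of vector-space scoring of a student's answer against a
# model answer. TF-IDF weighting and scikit-learn are assumptions for
# illustration; the paper's exact weighting and preprocessing may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def score_answer(model_answer: str, student_answer: str) -> float:
    """Cosine similarity between TF-IDF vectors of the model answer and
    a student's answer (0.0 = no term overlap, 1.0 = identical weighting)."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on both texts so they share a single vocabulary / vector space.
    vectors = vectorizer.fit_transform([model_answer, student_answer])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])


# Hypothetical usage with made-up answer texts:
model = "Information retrieval ranks documents by their similarity to a query."
student = "IR systems rank documents according to how similar they are to the query."
print(f"similarity score: {score_answer(model, student):.3f}")
```

Under this scheme, a higher score indicates closer lexical overlap with the model answer; the paper's advanced techniques would address the cases this misses, such as students expressing the same idea with synonyms.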