Bootstrapping multiple-choice tests with THE-MENTOR

  • Authors:
  • Ana Cristina Mendes;Sérgio Curto;Luísa Coheur

  • Affiliations:
  • Spoken Language Systems Laboratory, L2F/INESC-ID, Instituto Superior Técnico, Technical University of Lisbon, Lisboa, Portugal (all authors)

  • Venue:
  • CICLing'11 Proceedings of the 12th international conference on Computational linguistics and intelligent text processing - Volume Part I
  • Year:
  • 2011

Abstract

It is very likely that everyone has, at least once in their lifetime, answered a multiple-choice test. Multiple-choice tests are considered an effective technique for knowledge assessment: they require a short response time and can cover a broad set of topics. Their creation, however, can be a time-consuming and labour-intensive task. Computer-aided generation of multiple-choice tests can reduce these drawbacks: the human assessor is left with the final task of approving or rejecting the generated test items, depending on their quality. In this paper we present THE-MENTOR, a system that employs a fully automatic approach to generate multiple-choice tests. In a first, offline step, a set of lexico-syntactic patterns is bootstrapped from several question/answer seed pairs, leveraging the redundancy of the Web. Afterwards, in an online step, the patterns are used to select sentences in a text document from which answers can be extracted and the respective questions built. Finally, several filters are applied to discard low-quality items; distractors are named entities, extracted from the same text, that comply with the question category.
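The two-step pipeline described in the abstract can be sketched in miniature. The snippet below is an illustrative simplification, not the paper's implementation: it uses flat regular-expression patterns in place of THE-MENTOR's lexico-syntactic patterns, skips the Web-redundancy validation and quality filters, and all names (`bootstrap_pattern`, `generate_items`, the seed pair, the entity pool) are hypothetical.

```python
import re

def bootstrap_pattern(seed_sentence, topic, answer):
    """Offline step (simplified): generalise a sentence containing a
    seed question topic and its answer into a reusable pattern by
    replacing both with capture groups."""
    pattern = re.escape(seed_sentence)
    pattern = pattern.replace(re.escape(topic), r"(?P<topic>[A-Z][\w ]*)")
    pattern = pattern.replace(re.escape(answer), r"(?P<answer>[A-Z][\w ]*)")
    return pattern

def generate_items(pattern, question_template, text, entity_pool):
    """Online step (simplified): match the pattern in a new document,
    build the question, and draw distractors from named entities of the
    same category as the answer."""
    items = []
    for m in re.finditer(pattern, text):
        answer = m.group("answer")
        distractors = [e for e in entity_pool if e != answer][:3]
        items.append({
            "question": question_template.format(topic=m.group("topic")),
            "answer": answer,
            "distractors": distractors,
        })
    return items

# Seed pair "Who wrote Hamlet?" / "Shakespeare", with a supporting sentence.
pattern = bootstrap_pattern("Hamlet was written by Shakespeare.",
                            "Hamlet", "Shakespeare")

# Apply the bootstrapped pattern to an unseen document; the entity pool
# stands in for person-type named entities extracted from the same text.
document = "Dracula was written by Bram Stoker."
pool = ["Mary Shelley", "Oscar Wilde", "Jane Austen"]
items = generate_items(pattern, "Who wrote {topic}?", document, pool)
print(items[0]["question"])   # Who wrote Dracula?
print(items[0]["answer"])     # Bram Stoker
```

In the paper, the distractor pool is restricted to named entities whose type matches the question category (here, persons for a "Who" question), which this sketch mimics with a hand-picked list.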