On the impact of adaptive test question selection for learning efficiency

  • Authors:
  • Michal Barla, Mária Bieliková, Anna Bou Ezzeddinne, Tomáš Kramár, Marián Šimko, Oto Vozár

  • Affiliations:
  • Institute of Informatics and Software Engineering, Faculty of Informatics and Information Technologies, Slovak University of Technology, Ilkovičova 3, 842 16 Bratislava, Slovakia (all authors)

  • Venue:
  • Computers & Education
  • Year:
  • 2010

Abstract

In this paper we present a method for the adaptive selection of test questions according to the individual needs of students within a web-based educational system. The method combines three components. The first is based on the course structure and selects the most appropriate topic for learning. The second uses Item Response Theory to select the k best questions of adequate difficulty for a particular learner. The third draws on usage history and prioritizes questions according to specific strategies, e.g., filtering out questions that were asked recently. We describe how these components evaluate user answers to gather information about learner characteristics, enabling a more precise selection of further questions. We evaluate the impact of the proposed method through two types of experiments in the domain of learning programming, both of which showed that our method for adaptive test question selection increases the overall learning outcome, especially for below-average performing students.
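The difficulty-matching and filtering steps described in the abstract can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's actual implementation: it uses the Rasch (one-parameter) IRT model, targets a 0.5 success probability as "adequate difficulty", and applies a crude gradient-style ability update after each answer; all function and field names are invented for this example.

```python
import math
from dataclasses import dataclass

@dataclass
class Question:
    qid: str
    topic: str
    difficulty: float  # IRT difficulty parameter b (hypothetical field)

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch (1PL) probability that a learner with the given ability
    answers a question of the given difficulty correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def select_questions(questions, ability, recent_ids, k=3):
    """Pick the k questions whose predicted success probability is
    closest to 0.5 (most informative under the Rasch model), after
    filtering out recently asked questions, as the abstract's usage
    history strategy suggests."""
    candidates = [q for q in questions if q.qid not in recent_ids]
    candidates.sort(key=lambda q: abs(p_correct(ability, q.difficulty) - 0.5))
    return candidates[:k]

def update_ability(ability, difficulty, correct, lr=0.3):
    """Heuristic ability update after evaluating an answer: move the
    estimate by the prediction error (assumed update rule, not the
    paper's learner model)."""
    expected = p_correct(ability, difficulty)
    return ability + lr * ((1.0 if correct else 0.0) - expected)
```

For example, with a learner ability of 0.4 and question q3 recently asked, `select_questions` would skip q3 and rank the remaining questions by how close their difficulty sits to the learner's ability, returning the closest k.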