Tradeoff analysis between knowledge assessment approaches

  • Authors:
  • Michel C. Desmarais; Shunkai Fu; Xiaoming Pu

  • Affiliations:
  • Polytechnique de Montréal; Polytechnique de Montréal; Polytechnique de Montréal

  • Venue:
  • Proceedings of the 2005 conference on Artificial Intelligence in Education: Supporting Learning through Intelligent and Socially Informed Technology
  • Year:
  • 2005

Abstract

The problem of modeling and assessing an individual's ability level is central to learning environments, and numerous approaches exist to this end. Computer Adaptive Testing (CAT) techniques, such as IRT and Bayesian posterior updating, are amongst the early approaches; Bayesian networks and graph models are more recent. These frameworks differ in their expressiveness and in their ability to automate model building and calibration with empirical data. We discuss the implications of the expressiveness and data-driven properties of the different frameworks, and analyze how they affect the applicability and accuracy of the knowledge assessment process. We conjecture that although expressive models such as Bayesian networks provide better cognitive diagnostic ability, their applicability, reliability, and accuracy are strongly affected by the knowledge engineering effort they require. We conclude with a comparative analysis of data-driven approaches and provide empirical estimates of their respective performance on two data sets.
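To make the "Bayesian posterior updating" approach mentioned in the abstract concrete, the sketch below shows a minimal, generic update of a single binary skill (mastered vs. not mastered) from dichotomous item responses. It is an illustrative assumption, not the authors' specific model: the slip and guess probabilities, the uniform prior, and the example response sequence are all hypothetical values chosen for demonstration.

```python
# Minimal sketch of Bayesian posterior updating for knowledge assessment.
# Assumes a single binary latent skill and fixed slip/guess parameters;
# these numbers are illustrative, not taken from the paper.

def update_mastery(prior: float, correct: bool,
                   slip: float = 0.1, guess: float = 0.2) -> float:
    """Return P(mastered | response) given P(mastered) = prior."""
    # Likelihood of the observed response under each latent state.
    p_obs_given_mastered = (1 - slip) if correct else slip
    p_obs_given_unmastered = guess if correct else (1 - guess)
    # Bayes' rule.
    numerator = p_obs_given_mastered * prior
    denominator = numerator + p_obs_given_unmastered * (1 - prior)
    return numerator / denominator


if __name__ == "__main__":
    p = 0.5  # uninformative prior on mastery
    for response in [True, True, False, True]:  # hypothetical response sequence
        p = update_mastery(p, response)
        print(f"correct={response!s:5}  P(mastered)={p:.3f}")
```

A Bayesian network approach, by contrast, would link many such skill nodes and item nodes in a graph, which is where the knowledge engineering effort discussed in the abstract comes in.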