Coping with poor advice from peers in peer-based intelligent tutoring: the case of avoiding bad annotations of learning objects

  • Authors:
  • John Champaign;Jie Zhang;Robin Cohen

  • Affiliations:
  • -;School of Computer Engineering, Nanyang, Singapore;-

  • Venue:
  • UMAP'11: Proceedings of the 19th International Conference on User Modeling, Adaption, and Personalization
  • Year:
  • 2011

Abstract

In this paper, we examine a challenge that arises in the application of peer-based tutoring: coping with inappropriate advice from peers. We examine an environment where students are presented with the learning objects predicted to improve their learning (on the basis of the success of previous, like-minded students), but where peers can additionally inject annotations. To avoid presenting annotations that would detract from student learning (e.g., those found confusing by other students), we integrate trust modeling to track, over time, the reputation of each annotation (as voted on by previous students) and the reputability of its annotator. We demonstrate empirically, through simulation, that even when the environment is populated with a large number of poor annotations, our algorithm for directing the learning of the students is effective, confirming the value of our proposed approach to student modeling. In addition, the research introduces a valuable integration of trust modeling into educational applications.
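The abstract describes filtering annotations by two signals: the annotation's own vote-based reputation and the reputability of its annotator. The paper does not publish its algorithm here, so the following is only an illustrative sketch under assumed details: a smoothed vote ratio as the annotation reputation, an annotator score averaged over that author's annotations, and a hypothetical 0.7/0.3 blend with a presentation threshold. All names, weights, and thresholds are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """A peer-contributed annotation on a learning object (illustrative model)."""
    author: str
    up_votes: int = 0
    down_votes: int = 0

    def reputation(self, prior: float = 0.5, weight: float = 2.0) -> float:
        """Smoothed vote ratio in [0, 1]; the prior pulls unvoted
        annotations toward a neutral 0.5 rather than 0 or 1."""
        total = self.up_votes + self.down_votes
        return (self.up_votes + prior * weight) / (total + weight)

def annotator_reputation(annotations, author):
    """Assumed aggregate: the mean reputation of an author's annotations,
    defaulting to neutral 0.5 for unknown authors."""
    own = [a.reputation() for a in annotations if a.author == author]
    return sum(own) / len(own) if own else 0.5

def select_annotations(annotations, threshold=0.5):
    """Present only annotations whose blended score (annotation reputation
    combined with annotator reputation) clears the threshold."""
    kept = []
    for a in annotations:
        score = 0.7 * a.reputation() + 0.3 * annotator_reputation(annotations, a.author)
        if score >= threshold:
            kept.append(a)
    return kept
```

Under this sketch, an annotation down-voted by most prior students (or written by an author with a history of down-voted annotations) falls below the threshold and is withheld, which mirrors the paper's stated goal of avoiding annotations that detract from learning.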