Generation of problems, answers, grade, and feedback---case study of a fully automated tutor

  • Authors: Amruth N. Kumar
  • Affiliation: Ramapo College of New Jersey, Mahwah, NJ
  • Venue: Journal on Educational Resources in Computing (JERIC)
  • Year: 2005

Abstract

Researchers and educators have been developing tutors to help students learn by solving problems. These tutors vary in their ability to generate problems, generate answers, grade student answers, and provide feedback. At one end of the spectrum are tutors that depend on hand-coded problems, answers, and feedback. Such tutors can be expected to be pedagogically effective, since all of the problem-solving content is carefully hand-crafted by a teacher; however, their repertoire is limited. At the other end of the spectrum are tutors that automatically generate problems, answers, and feedback. They have an unlimited repertoire, but it is not clear that they are effective in helping students learn. Most extant tutors lie somewhere along this spectrum.

In this article, we examine the feasibility of developing a tutor that can automatically generate problems, generate answers, grade student answers, and provide feedback, and we investigate whether such a tutor can help students learn. For our study, we considered a tutor for our Programming Languages course covering static and dynamic scope (i.e., static scope of variables and procedures, dynamic scope of variables, and the static and dynamic referencing environments of procedures, in the context of a language that permits nested procedure definitions). The tutor generates simple and complex problems on each of these five topics, solves the problems, grades the students' answers, and provides feedback about incorrect and missed answers. Our evaluation over two semesters shows that the feedback provided by the tutor helps improve student learning.
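The distinction the tutor drills on can be made concrete with a small example. The sketch below (not from the paper; the procedure names and environment representation are illustrative assumptions) simulates how the same variable reference resolves differently under static (lexical) scope versus dynamic scope, using an explicit call stack for the dynamic case:

```python
# Illustrative sketch: one program, two scope disciplines.
#
# Program (pseudocode):
#   procedure main: x = 1; call p     (q is nested inside main)
#   procedure p:    x = 2; call q
#   procedure q:    print x
#
# Static scope: q's "x" resolves in the enclosing *definition* (main), so x = 1.
# Dynamic scope: q's "x" resolves in the most recent *caller* binding (p), so x = 2.

def lookup_dynamic(name, call_stack):
    """Dynamic scope: search activation records from most recent call outward."""
    for frame in reversed(call_stack):
        if name in frame:
            return frame[name]
    raise NameError(name)

def run():
    call_stack = []
    call_stack.append({"x": 1})   # main's activation record
    x_static = 1                  # static scope: q is defined inside main
    call_stack.append({"x": 2})   # p's activation record shadows x dynamically
    x_dynamic = lookup_dynamic("x", call_stack)
    return x_static, x_dynamic

print(run())  # (1, 2): static scope sees main's x, dynamic scope sees p's x
```

Problems of exactly this shape, with nested procedure definitions and shadowed names, are what a generator can parameterize (names, nesting depth, call order) while still computing the correct answer mechanically.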