Evaluating an Intelligent Tutoring System for Making Legal Arguments with Hypotheticals

  • Authors:
  • Niels Pinkwart; Kevin Ashley; Collin Lynch; Vincent Aleven

  • Affiliations:
  • Niels Pinkwart: Department of Informatics, Clausthal University of Technology, Clausthal-Zellerfeld, Germany. E-mail: niels.pinkwart@tu-clausthal.de
  • Kevin Ashley: Learning Research and Development Center, School of Law, & Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA, USA. E-mail: ashley@pitt.edu
  • Collin Lynch: Learning Research and Development Center & Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA, USA. E-mail: collinl@cs.pitt.edu
  • Vincent Aleven: Human-Computer Interaction Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. E-mail: aleven@cs.cmu.edu

  • Venue:
  • International Journal of Artificial Intelligence in Education
  • Year:
  • 2009


Abstract

Argumentation is a process that occurs often in ill-defined domains and that helps deal with their ill-definedness. Typically, a notion of "correctness" for an argument in an ill-defined domain is impossible to define or verify formally, because the underlying concepts are open-textured and the quality of the argument may be subject to discussion or even expert disagreement. Previous research has highlighted the advantages of graphical representations for learning argumentation skills, and a number of intelligent tutoring systems have been built that support students in rendering arguments graphically as they learn those skills. The relative instructional benefits of graphical argument representations have not been reliably shown, however. In this paper we present a formative evaluation of LARGO (Legal ARgument Graph Observer), a system that enables law students to graphically represent examples of legal interpretation with hypotheticals that they observe while reading texts of U.S. Supreme Court oral arguments. We hypothesized that, compared to a text-based alternative, LARGO's diagramming language, which is geared toward depicting hypothetical reasoning processes and is coupled with non-directive feedback, helps students better extract the important information from argument transcripts and better learn argumentation skills. A first pilot study, conducted with volunteer first-semester law students, provided support for this hypothesis: the system especially helped lower-aptitude students learn argumentation skills, and LARGO improved students' reading skills as they studied expert arguments. A second study with LARGO was conducted as a mandatory part of a first-semester university law course. Although there were no differences in the learning outcomes of the two conditions, the second study showed some evidence that students who engaged more with the argument diagrams through the system's advice did better than students in the text condition. One lesson learned from these two studies is that graphical representations in intelligent tutoring systems for the ill-defined domain of argumentation may still be better than text, but that engagement is essential.