Argument Diagramming and Diagnostic Reliability

  • Authors:
  • Collin Lynch, LRDC & Intelligent Systems Program, University of Pittsburgh, Pittsburgh, Pennsylvania, USA (collinl@cs.pitt.edu)
  • Kevin D. Ashley, LRDC & Intelligent Systems Program and School of Law, University of Pittsburgh, Pittsburgh, Pennsylvania, USA (ashley@pitt.edu)
  • Niels Pinkwart, Department of Informatics, Clausthal University of Technology, Clausthal, Lower Saxony, Germany (niels.pinkwart@tu-clausthal.de)
  • Vincent Aleven, Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA (aleven@cs.cmu.edu)

  • Venue:
  • Proceedings of the 2009 conference on Legal Knowledge and Information Systems: JURIX 2009: The Twenty-Second Annual Conference
  • Year:
  • 2009


Abstract

Diagrammatic models of argument are increasingly prominent in AI and Law. Unlike everyday language, these models formalize many of the components and relationships present in arguments and permit a more formal analysis of an argument's structural weaknesses. Formalization, however, can raise problems of agreement. For argument diagramming to be widely accepted as a communication tool, individual authors and readers must be able to agree on the quality and meaning of a diagram, as well as on the roles that key components play. This is especially problematic when arguers seek to map their diagrams to or from more conventional prose. In this paper we present results from a grader agreement study that we conducted using LARGO diagrams. We then describe a detailed example of disagreement and highlight its implications both for our diagram model and for modeling argument diagrams in general.
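The abstract does not specify which agreement statistic the study reports, but a standard way to quantify inter-grader agreement in studies like this is Cohen's kappa, which corrects raw agreement for chance. The sketch below is illustrative only, with hypothetical grade labels, and is not the paper's actual analysis.

```python
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from each rater's
    marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items the two raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the product of marginal label frequencies.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)


# Hypothetical grades assigned by two graders to ten diagrams.
grader_1 = ["good", "good", "weak", "good", "weak",
            "good", "weak", "weak", "good", "good"]
grader_2 = ["good", "weak", "weak", "good", "weak",
            "good", "good", "weak", "good", "good"]
print(round(cohens_kappa(grader_1, grader_2), 2))  # 0.58
```

Here the graders agree on 8 of 10 diagrams (80% raw agreement), but kappa is noticeably lower because both graders use "good" often, so some agreement is expected by chance alone.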