The significance of evaluation in AI and law: a case study re-examining ICAIL proceedings

  • Authors:
  • Jack G. Conrad; John Zeleznikow

  • Affiliations:
  • Thomson Reuters Global Resources, Baar, Switzerland; Victoria University, Melbourne, Australia

  • Venue:
  • Proceedings of the Fourteenth International Conference on Artificial Intelligence and Law
  • Year:
  • 2013

Abstract

This paper examines the presence of performance evaluation in works published at ICAIL conferences since 2000. As such, it is a self-reflexive, meta-level study that investigates the proportion of works that include some form of performance assessment in their contribution. It also reports on the categories of evaluation present and the degree to which they are applied. In addition, the paper compares current trends in performance measurement with those of earlier ICAILs, as reported by Hall and Zeleznikow in their study of the same topic (ICAIL 2001). The paper also develops an argument for why evaluation in formal Artificial Intelligence and Law reports such as ICAIL proceedings is imperative. It underscores the importance of answering the questions: how good is the system? how reliable is the approach? or, more succinctly, does it work? The paper argues that the presence of a performance-based ethic within a scientific research community is a sign of maturity and of essential scientific rigor. Finally, the work references an evaluation checklist and presents a set of recommended best practices for the inclusion of evaluation methods going forward.