Replication and automation of expert judgments: information engineering in legal e-discovery

  • Authors:
  • Bruce Hedin; Douglas W. Oard

  • Affiliations:
  • H5, San Francisco, CA; College of Information Studies & UMIACS CLIP Lab, University of Maryland, College Park, MD

  • Venue:
  • SMC'09: Proceedings of the 2009 IEEE International Conference on Systems, Man and Cybernetics
  • Year:
  • 2009

Abstract

The retrieval of digital evidence responsive to discovery requests in civil litigation, known in the United States as "e-discovery," presents several important and understudied conditions and challenges. Among the most important of these are (i) that the definition of responsiveness that governs the search effort can be learned and made explicit through effective interaction with the responding party, (ii) that the governing definition of responsiveness is generally complex, deriving both from considerations of subject-matter relevance and from considerations of litigation strategy, and (iii) that the result of the search effort is a set (rather than a ranked list) of documents, sometimes a quite large set, that is turned over to the requesting party and that the responding party certifies to be an accurate and complete response to the request. This paper describes the design of an "Interactive Task" for the Text Retrieval Conference's Legal Track whose goal was the evaluation of the effectiveness of e-discovery applications at the "responsive review" task. Notable features of the 2008 Interactive Task were high-fidelity human-system task modeling, authority control for the definition of "responsiveness," and relatively deep sampling for estimation of type 1 and type 2 errors (expressed as "precision" and "recall"). The paper presents a critical assessment of the strengths and weaknesses of the evaluation design from the perspectives of reliability, reusability, and cost-benefit tradeoffs.
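To make the sampling-based evaluation concrete, the following is a minimal sketch (not the paper's actual estimator) of how precision and recall might be estimated from two samples of human relevance judgments: one drawn from the produced (retrieved) set and one from the remainder of the collection. All function names and counts are hypothetical.

```python
def estimate_precision_recall(produced_total, unproduced_total,
                              sample_produced, responsive_in_produced,
                              sample_unproduced, responsive_in_unproduced):
    """Simple proportion-based estimates from two stratified samples.

    produced_total / unproduced_total: sizes of the produced set and
        the rest of the collection.
    sample_*: number of documents sampled and judged from each stratum.
    responsive_in_*: how many of those sampled documents the assessors
        judged responsive.
    """
    # Precision: fraction of the produced-set sample judged responsive.
    precision = responsive_in_produced / sample_produced
    # Scale each sample proportion up to an estimated responsive count.
    est_resp_produced = precision * produced_total
    est_resp_unproduced = (responsive_in_unproduced / sample_unproduced) * unproduced_total
    # Recall: estimated responsive documents produced, over the
    # estimated total number of responsive documents in the collection.
    recall = est_resp_produced / (est_resp_produced + est_resp_unproduced)
    return precision, recall

# Hypothetical example: a 100,000-document production from a
# 1,000,000-document collection, with 500 judgments per stratum.
p, r = estimate_precision_recall(
    produced_total=100_000, unproduced_total=900_000,
    sample_produced=500, responsive_in_produced=400,
    sample_unproduced=500, responsive_in_unproduced=10)
# p = 0.8; r = 80000 / (80000 + 18000) ≈ 0.816
```

Deeper sampling of the unproduced stratum, as the task design emphasizes, narrows the confidence interval on the recall estimate, which is dominated by the (typically small) responsive proportion found outside the production.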