Exploring the limits of single-iteration clarification dialogs

  • Authors:
  • Jimmy Lin; Philip Wu; Dina Demner-Fushman; Eileen Abels

  • Affiliations:
  • University of Maryland, College Park, MD (all authors)

  • Venue:
  • SIGIR '06: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
  • Year:
  • 2006

Abstract

Single-iteration clarification dialogs, as implemented in the TREC HARD track, represent an attempt to introduce interaction into ad hoc retrieval, while preserving the many benefits of large-scale evaluations. Although previous experiments have not conclusively demonstrated performance gains resulting from such interactions, it is unclear whether these findings speak to the nature of clarification dialogs, or simply the limitations of current systems. To probe the limits of such interactions, we employed a human intermediary to formulate clarification questions and exploit user responses. In addition to establishing a plausible upper bound on performance, we were also able to induce an "ontology of clarifications" to characterize human behavior. This ontology, in turn, serves as the input to a regression model that attempts to determine which types of clarification questions are most helpful. Our work can serve to inform the design of interactive systems that initiate user dialogs.
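
The abstract describes a regression model whose inputs are clarification-question types drawn from the induced ontology and whose output indicates how helpful each type is. The sketch below illustrates that general idea only; the paper does not specify its feature encoding or model, and the type labels and per-topic gains here are hypothetical stand-ins, not data from the study.

```python
# Minimal sketch: regress per-topic retrieval gains on indicator features for
# clarification-question types. All labels and numbers below are hypothetical,
# used only to illustrate the kind of analysis the abstract describes.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical clarification-question types (one-hot feature columns).
types = ["ask-for-keywords", "confirm-interpretation", "request-example-doc"]

# Each row marks which types appeared in one clarification dialog;
# the target is the (made-up) change in average precision for that topic.
X = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [1, 0, 1],
])
delta_ap = np.array([0.04, 0.01, 0.06, 0.05, 0.09])

model = LinearRegression().fit(X, delta_ap)
for name, coef in zip(types, model.coef_):
    print(f"{name}: estimated contribution to AP change = {coef:+.3f}")
```

In such a setup, the fitted coefficients suggest which clarification types are associated with larger retrieval improvements, which is the kind of conclusion the regression analysis in the paper is aimed at.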