Evaluation methods and strategies for the interactive use of classifiers

  • Authors:
  • Silvia Acid, Luis M. de Campos, Moisés Fernández

  • Affiliations:
  • Departamento de Ciencias de la Computación e Inteligencia Artificial, E.T.S.I. Informática y de Telecomunicación, CITIC-UGR, Universidad de Granada, 18071 Granada, Spain (all authors)

  • Venue:
  • International Journal of Human-Computer Studies
  • Year:
  • 2012

Abstract

We consider a scenario in which a previously built automatic classifier is available. It is used to classify new instances but, in some cases, it may request the intervention of a human (the oracle), who provides the correct class. In this scenario, two questions arise. First, how should the performance of the system be evaluated? It cannot be based solely on the predictive accuracy of the classifier, but must also take into account the cost of the human intervention. Second, under which concrete circumstances should the classifier decide to query the oracle? In this paper we study both questions and provide an experimental evaluation of the different proposed alternatives.
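
As a rough illustration of the trade-off described in the abstract, the Python sketch below scores an interactive classifier by penalizing accuracy with a per-query cost, and uses a confidence threshold as one possible rule for deciding when to query the oracle. All names (evaluate_interactive, confidence_threshold, query_cost) and the specific scoring formula are illustrative assumptions, not the formulation used in the paper.

    # A minimal sketch of interactive-classifier evaluation, assuming:
    # - the classifier exposes a confidence (max class probability) per instance,
    # - querying the oracle always yields the correct label,
    # - performance = accuracy penalized by a fixed cost per oracle query.
    # The names and the scoring formula are illustrative assumptions.

    def evaluate_interactive(confidences, predictions, true_labels,
                             confidence_threshold=0.8, query_cost=0.05):
        """Return a score trading off accuracy against oracle interventions.

        confidences          -- classifier confidence per instance
        predictions          -- class predicted by the classifier per instance
        true_labels          -- ground-truth class per instance
        confidence_threshold -- below this confidence the oracle is queried
        query_cost           -- penalty charged for each oracle query
        """
        n = len(true_labels)
        correct = 0
        queries = 0
        for conf, pred, truth in zip(confidences, predictions, true_labels):
            if conf < confidence_threshold:
                queries += 1      # oracle supplies the correct class
                correct += 1
            elif pred == truth:
                correct += 1
        accuracy = correct / n
        # Penalize accuracy by the cost-weighted fraction of queried instances.
        return accuracy - query_cost * (queries / n)

    # Example: three confident predictions (one of them wrong) and one oracle query.
    confidences = [0.95, 0.60, 0.90, 0.99]
    predictions = ["a", "b", "a", "b"]
    true_labels = ["a", "a", "b", "b"]
    print(evaluate_interactive(confidences, predictions, true_labels))  # 0.7375

Under these assumptions, raising confidence_threshold trades more oracle queries (and cost) for fewer classifier mistakes; the paper's second question concerns precisely how such a querying rule should be chosen.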