Assessing the aggregation of parameterized imprecise classification

  • Authors:
  • Isabela Drummond; Joaquim Melendez; Sandra Sandri

  • Affiliations:
  • LAC-INPE, Brazil; IIA/UDG, Spain; IIIA/CSIC, Spain

  • Venue:
  • Proceedings of the 2006 conference on Artificial Intelligence Research and Development
  • Year:
  • 2006

Abstract

This work is based on classifiers that yield possibilistic valuations as output. Such valuations may be obtained from a labeled data set either directly, by possibilistic classifiers, or by transforming the output of probabilistic classifiers, or else by adapting prototype-based classifiers in general. Imprecise classifications are elicited from the possibilistic valuations by varying a parameter that makes the overall classification more or less precise. We discuss some accuracy measures to assess the quality of the parameterized imprecise classifications, thus allowing the user to choose the most suitable level of imprecision for a given application. Here we particularly address the issue of aggregating parameterized imprecise classifiers and assessing their performance.
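For illustration only, the sketch below shows one way a precision parameter could act on a possibilistic valuation to produce a set-valued (imprecise) classification: classes whose possibility degree lies within a tolerance of the most possible class are retained. The thresholding rule and the function name `imprecise_classification` are assumptions made for exposition, not the method described in the paper.

```python
# Minimal sketch (assumed, not the authors' implementation): eliciting an
# imprecise classification from a possibilistic valuation via a single
# precision parameter `alpha`. Larger alpha -> less precise (larger) output set.

def imprecise_classification(valuation, alpha):
    """valuation: dict mapping class label -> possibility degree in [0, 1].
    alpha: precision parameter in [0, 1]; 0 keeps only the top class(es),
    1 keeps every class with nonzero possibility."""
    top = max(valuation.values())
    return {c for c, poss in valuation.items() if poss >= top - alpha and poss > 0}

# Example: a possibilistic valuation over three classes.
valuation = {"a": 1.0, "b": 0.7, "c": 0.2}
print(imprecise_classification(valuation, 0.0))  # {'a'}            -- most precise
print(imprecise_classification(valuation, 0.4))  # {'a', 'b'}       -- less precise
print(imprecise_classification(valuation, 1.0))  # {'a', 'b', 'c'}  -- least precise
```

Under this reading, assessing the parameterized classifications amounts to trading off accuracy (the true class being contained in the output set) against imprecision (the size of the output set) as alpha varies.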