Comparing probability measures using possibility theory: A notion of relative peakedness

  • Authors:
  • Didier Dubois; Eyke Hüllermeier

  • Affiliations:
  • Université Paul Sabatier, Institut de Recherche en Informatique de Toulouse, 118 route de Narbonne, 31062 Toulouse Cedex 4, France; Faculty of Computer Science, University of Magdeburg, D-39106 Magdeburg, Germany

  • Venue:
  • International Journal of Approximate Reasoning
  • Year:
  • 2007

Abstract

Deciding whether one probability distribution is more informative (in the sense of representing a less indeterminate situation) than another is typically done using well-established information measures such as the Shannon entropy or other dispersion indices. In contrast, the relative specificity of possibility distributions is evaluated by means of fuzzy set inclusion. In this paper, we propose a technique for comparing probability distributions from the point of view of their relative dispersion without resorting to a numerical index. A natural partial ordering of probability functions in terms of their relative "peakedness" is proposed, which is closely related to first-order stochastic dominance. This ordering on probability distributions is also closely connected to the standard specificity ordering on possibility distributions, via a known probability-possibility transformation. The paper gives a direct proof that the (total) preordering on probability measures defined by probabilistic entropy refines the (partial) ordering defined by possibilistic specificity: whenever one distribution is more specific than another in the possibilistic sense, it also has lower entropy. This result, which also holds for other dispersion indices, is discussed against the background of related work in statistics, mathematics (inequalities on convex functions), and the social sciences. Finally, an application of the possibilistic specificity ordering to machine learning, specifically the induction of decision forests, is proposed.
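
The abstract states these relationships without formulas. The following minimal Python sketch illustrates them under two assumptions: that the probability-possibility transformation referred to is the standard Dubois-Prade one, pi(x) = sum of p(y) over all y with p(y) <= p(x), and that specificity of the decreasingly reordered transforms is compared pointwise (matching the peakedness view). All function names are illustrative, not taken from the paper.

    import math

    def possibility_transform(p):
        # Standard probability-possibility transformation (an assumption,
        # not quoted from the paper): pi(x) is the total mass of all
        # outcomes whose probability does not exceed p(x).
        return [sum(q for q in p if q <= mass) for mass in p]

    def at_least_as_specific(pi1, pi2):
        # Specificity as fuzzy-set inclusion: pi1 is at least as specific
        # as pi2 if pi1 <= pi2 pointwise. Sorting compares the reordered
        # distributions, i.e., the order statistics, as in peakedness.
        return all(a <= b for a, b in zip(sorted(pi1), sorted(pi2)))

    def shannon_entropy(p):
        return -sum(q * math.log2(q) for q in p if q > 0)

    p = [0.5, 0.375, 0.125]   # a peaked distribution
    q = [0.375, 0.375, 0.25]  # a flatter one on the same 3-element domain

    pi_p = possibility_transform(p)   # [1.0, 0.5, 0.125]
    pi_q = possibility_transform(q)   # [1.0, 1.0, 0.25]

    # p is more specific (more peaked) than q ...
    print(at_least_as_specific(pi_p, pi_q))          # True
    # ... and, as the refinement result predicts, has lower entropy.
    print(shannon_entropy(p) < shannon_entropy(q))   # True

Here the specificity comparison is only a partial order (two distributions may be incomparable), whereas the entropy comparison is total; the refinement result says the latter never contradicts the former.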