This paper addresses the problem of measuring similarity between pieces of uncertain information in the framework of possibility theory. First, natural properties that such measures should satisfy are proposed, and the few existing measures are surveyed. A new measure, called Information Affinity, is then proposed to overcome the limitations of the existing ones. The proposed function combines two components: a classical informative distance, e.g. the Manhattan distance, which evaluates the difference, degree by degree, between two normalized possibility distributions, and the well-known inconsistency measure, which assesses the conflict between the two distributions. Some potential applications of the proposed measure are also mentioned in this paper.
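To make the two components concrete, here is a minimal sketch of such a measure on finite possibility distributions (represented as lists of possibility degrees over the same ordered universe). The degree-by-degree Manhattan distance and the min-based inconsistency measure follow standard possibility theory; the equal-weight combination of the two into a single affinity score is an illustrative assumption, not necessarily the paper's exact formula.

```python
def manhattan_distance(pi1, pi2):
    """Mean absolute difference, degree by degree, in [0, 1]."""
    return sum(abs(a - b) for a, b in zip(pi1, pi2)) / len(pi1)

def inconsistency(pi1, pi2):
    """Conflict between two distributions: 1 minus the height of
    their min-based conjunctive combination (0 when they fully agree
    on some fully possible state, 1 when they are totally conflicting)."""
    return 1.0 - max(min(a, b) for a, b in zip(pi1, pi2))

def information_affinity(pi1, pi2, kappa=1.0, lam=1.0):
    """Similarity in [0, 1] combining distance and conflict.
    The weights kappa and lam (equal by default) are an assumption
    made here for illustration."""
    d = manhattan_distance(pi1, pi2)
    inc = inconsistency(pi1, pi2)
    return 1.0 - (kappa * d + lam * inc) / (kappa + lam)
```

For two identical normalized distributions the distance and the inconsistency are both zero, so the affinity is 1; for two distributions that each assign full possibility to a different state and zero elsewhere, both components are large and the affinity is close to 0.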