An Evidence-Theoretic k-Nearest Neighbor Rule for Multi-label Classification
SUM '09 Proceedings of the 3rd International Conference on Scalable Uncertainty Management
Multi-label classification problems arise in many real-world applications. Classically, to construct a multi-label classifier, we assume the existence of a labeled training set in which each instance is associated with a set of labels, and the task is to output a label set for each unseen instance. However, perfectly labeled data are not always available. In many problems there is no ground truth for unambiguously assigning a label set to each instance, and several experts must be consulted. Due to conflicts and lack of knowledge, labels may be wrongly assigned to some instances. This paper describes an evidential formalism suited to multi-label classification problems in which the training data are imperfectly labeled. Several applications demonstrate the efficiency of our approach.
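To illustrate the general idea, the following is a minimal sketch (not the paper's exact method) of an evidence-theoretic k-nearest-neighbor decision for a single label. Each neighbor contributes a mass function on the frame {IN, OUT} (the instance has or does not have the label), with support decaying with distance; the masses are pooled with Dempster's rule, and a pignistic decision is taken. The function names, the exponential discounting form, and the parameters `alpha` and `gamma` are illustrative assumptions, not taken from the source.

```python
import math

def neighbor_mass(has_label, dist, alpha=0.95, gamma=1.0):
    # Hypothetical evidence from one neighbor on the frame {IN, OUT}:
    # mass s supports IN if the neighbor carries the label, OUT otherwise;
    # the remainder 1 - s goes to the whole frame (ignorance).
    s = alpha * math.exp(-gamma * dist)
    return (s, 0.0, 1.0 - s) if has_label else (0.0, s, 1.0 - s)

def combine(m1, m2):
    # Dempster's rule of combination on {IN, OUT};
    # masses are triples (m(IN), m(OUT), m(Theta)).
    in1, out1, th1 = m1
    in2, out2, th2 = m2
    k = in1 * out2 + out1 * in2                     # conflicting mass
    m_in = in1 * in2 + in1 * th2 + th1 * in2
    m_out = out1 * out2 + out1 * th2 + th1 * out2
    m_th = th1 * th2
    z = 1.0 - k                                     # normalization
    return (m_in / z, m_out / z, m_th / z)

def predict_label(neighbors):
    # neighbors: list of (has_label, distance) for the k nearest
    # training instances; returns True if the label is assigned.
    m = (0.0, 0.0, 1.0)                             # vacuous mass (ignorance)
    for has_label, d in neighbors:
        m = combine(m, neighbor_mass(has_label, d))
    m_in, m_out, m_th = m
    # Pignistic decision: mass on Theta is split evenly between IN and OUT.
    return (m_in + m_th / 2) > (m_out + m_th / 2)
```

In a multi-label setting this decision would be repeated per label; conflicting or imperfect expert labelings translate naturally into discounted (less committed) neighbor masses.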