When combining classifiers in the Dempster-Shafer framework, Dempster's rule is generally used. However, this rule assumes the classifiers to be independent. This paper investigates the use of other operators for combining non-independent classifiers, including the cautious rule and, more generally, t-norm based rules whose behavior ranges between Dempster's rule and the cautious rule. Two strategies are investigated for learning an optimal combination scheme based on a parameterized family of t-norms. The first learns a single rule by minimizing an error criterion. The second is a two-step procedure: groups of classifiers with similar outputs are first identified using a clustering algorithm, and within- and between-cluster rules are then determined by minimizing an error criterion. Experiments with various synthetic and real data sets demonstrate the effectiveness of both the single-rule and two-step strategies. Overall, optimizing a single t-norm based rule yields better results than using a fixed rule, including Dempster's rule, and the two-step strategy brings further improvements.
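To make the contrast concrete, here is a minimal sketch (not code from the paper; the two-class frame and the mass values are illustrative) of Dempster's rule applied to two simple mass functions, followed by the weight-based view in which Dempster's rule multiplies the canonical weights while the cautious rule takes their minimum — the two endpoints of the t-norm family discussed above.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination of two mass functions
    (dicts mapping frozenset focal sets to masses), then normalization
    by 1 - k, where k is the mass assigned to the empty set (conflict)."""
    m, conflict = {}, 0.0
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            m[inter] = m.get(inter, 0.0) + va * vb
        else:
            conflict += va * vb
    return {s: v / (1.0 - conflict) for s, v in m.items()}

# Two classifiers over the frame {a, b}, each expressed as a simple
# mass function A^w with m(A) = 1 - w and m(frame) = w.
A = frozenset('a')
FRAME = frozenset('ab')
m1 = {A: 0.6, FRAME: 0.4}   # weight w1 = 0.4
m2 = {A: 0.7, FRAME: 0.3}   # weight w2 = 0.3
m12 = dempster_combine(m1, m2)   # m(A) = 0.88, m(frame) = 0.12

# On canonical weights, Dempster's rule is the product t-norm and the
# cautious rule is the minimum t-norm; the minimum is idempotent, so
# re-counting the same (dependent) evidence does not inflate belief.
w_dempster = 0.4 * 0.3       # 0.12, matches m12[FRAME]
w_cautious = min(0.4, 0.3)   # 0.30, i.e. m(A) = 0.7 under the cautious rule
```

Parameterized t-norm families (e.g. Frank t-norms) interpolate between these two endpoints, which is what makes the combination rule itself learnable from data.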