Uncertainty representation is a major issue in pattern recognition when the outputs of a classifier do not lead directly to a final decision, but are used in combination with other systems, or as input to an interactive decision process. In such contexts, it may be advantageous to resort to rich and flexible formalisms for representing and manipulating uncertain information, such as the Dempster-Shafer theory of evidence. In this paper, it is shown that the quality and reliability of the outputs from an evidence-theoretic classifier may be improved using an adaptation of the resample-and-combine approach introduced by Breiman and known as "bagging". This approach is explained and studied experimentally using simulated data. In particular, the results show that bagging improves classification accuracy and limits the influence of outliers and ambiguous training patterns.
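The resample-and-combine idea mentioned above can be illustrated with a minimal sketch of Breiman's bagging: train copies of a base classifier on bootstrap resamples of the training set and aggregate their predictions by vote. This is a generic illustration under assumed names (`nn_predict`, `bagged_predict`, toy 1-D data), not the paper's evidence-theoretic classifier or experimental setup.

```python
import random

def nn_predict(train, x):
    """Base classifier: 1-nearest-neighbour rule on 1-D inputs.

    `train` is a list of (feature, label) pairs; returns the label of
    the training point closest to x.
    """
    return min(train, key=lambda p: abs(p[0] - x))[1]

def bagged_predict(train, x, n_bags=25, seed=0):
    """Bagging: majority vote of base classifiers fit on bootstrap resamples."""
    rng = random.Random(seed)
    votes = {}
    for _ in range(n_bags):
        # Draw a bootstrap sample: same size, sampled with replacement.
        boot = [rng.choice(train) for _ in train]
        label = nn_predict(boot, x)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy data: class 0 clustered near 0.0, class 1 near 1.0, plus one
# ambiguous/outlying class-1 point at 0.15.
train = [(0.0, 0), (0.1, 0), (0.2, 0), (0.9, 1), (1.0, 1), (1.1, 1), (0.15, 1)]

# A single 1-NN can be swayed by the outlier on resamples that include it;
# the bagged vote is dominated by the class-0 neighbourhood around 0.12.
print(bagged_predict(train, 0.12))
```

Because each bootstrap sample omits roughly a third of the training points, the outlier at 0.15 is absent from many resamples, so its influence on the aggregated vote is diluted — the mechanism behind the robustness to outliers and ambiguous patterns reported in the abstract.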