Ensembles need base classifiers that do not always agree in their predictions (i.e., diverse base classifiers). Disturbing Neighbors ($\mathcal{DN}$) is a method for improving the diversity of the base classifiers of any ensemble algorithm. For each base classifier, $\mathcal{DN}$ builds a set of extra features from the output of a 1-Nearest Neighbor (1-NN) classifier, which is itself built from a small, randomly selected subset of the training instances. $\mathcal{DN}$ has already proved successful with unstable base classifiers (i.e., decision trees). This paper presents an experimental validation, on 62 UCI datasets, of standard ensemble methods using Support Vector Machines (SVMs) with a linear kernel as base classifiers. SVMs are very stable, so it is hard to increase their diversity within an ensemble. Nevertheless, the experiments show that $\mathcal{DN}$ usually improves both ensemble accuracy and base-classifier diversity.
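The feature construction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the $\mathcal{DN}$ extra features consist of one boolean (one-hot) feature per selected instance marking which neighbor is nearest, plus the 1-NN predicted class; the function and parameter names (`disturbing_neighbors_features`, `m`) are illustrative.

```python
import numpy as np

def disturbing_neighbors_features(X, y, m=10, rng=None):
    """Sketch of Disturbing Neighbors extra features: a 1-NN over m
    randomly selected training instances yields, for each instance,
    m one-hot nearest-neighbor indicators plus the 1-NN prediction.
    (Illustrative assumption of the feature layout, not the paper's code.)"""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=m, replace=False)   # random small subset
    anchors, anchor_labels = X[idx], y[idx]
    # Euclidean distance from every instance to each selected anchor.
    d = np.linalg.norm(X[:, None, :] - anchors[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    # m one-hot features marking which anchor is nearest ...
    onehot = np.eye(m)[nearest]
    # ... plus the class predicted by the 1-NN as one extra feature.
    nn_pred = anchor_labels[nearest]
    return np.hstack([onehot, nn_pred[:, None]])

# Tiny synthetic example: 20 instances, 3 original features, binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = rng.integers(0, 2, size=20)
Z = disturbing_neighbors_features(X, y, m=5, rng=1)
print(Z.shape)  # (20, 6): 5 one-hot indicators + the 1-NN prediction
```

In an ensemble, each base classifier would be given its own random subset (hence its own `Z`), appended to the original features, so that even a stable learner such as a linear SVM sees a slightly different input space.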