Recently, the bias-variance decomposition of error has been used as a tool to study the behavior of learning algorithms and to develop new ensemble methods well suited to the bias-variance characteristics of their base learners. We propose methods and procedures, based on Domingos' unified bias-variance theory, to evaluate and quantitatively measure the bias-variance decomposition of error in ensembles of learning machines. We apply these methods to study and compare the bias-variance characteristics of single support vector machines (SVMs) and of SVM ensembles based on resampling techniques, and their relationship with the cardinality of the training samples. In particular, we present an experimental bias-variance analysis of bagged and random aggregated ensembles of SVMs in order to verify their theoretical variance-reduction properties. The experimental bias-variance analysis quantitatively characterizes the relationship between bagging and random aggregating, and explains why ensembles built on small subsamples of the data work well with large databases. Our analysis also suggests new directions for research to improve on classical bagging.
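As a rough illustration (not the authors' code), Domingos' decomposition for 0/1 loss can be estimated from the predictions of models trained on independent resamples of the data: the main prediction at a point is the majority vote across models, bias is 1 when the main prediction is wrong, and variance is the fraction of models disagreeing with the main prediction. The sketch below assumes a `predictions` matrix of shape (n_models, n_points) has already been collected; for two-class problems, average loss equals average bias plus average net variance.

```python
import numpy as np
from collections import Counter


def domingos_bias_variance(predictions, y_true):
    """Estimate Domingos' bias-variance decomposition under 0/1 loss.

    predictions: int array of shape (n_models, n_points), row i holding the
                 predictions of a model trained on the i-th resample.
    y_true:      true labels, length n_points.
    Returns (mean bias, mean variance, mean net variance, mean 0/1 loss).
    """
    predictions = np.asarray(predictions)
    y_true = np.asarray(y_true)
    n_models, n_points = predictions.shape
    bias = np.empty(n_points)
    var = np.empty(n_points)
    net_var = np.empty(n_points)
    for j in range(n_points):
        # Main prediction: the most frequent class across the resampled models.
        main_pred = Counter(predictions[:, j]).most_common(1)[0][0]
        b = float(main_pred != y_true[j])                 # bias is 0 or 1
        v = float(np.mean(predictions[:, j] != main_pred))  # variance
        bias[j] = b
        var[j] = v
        # Unbiased variance adds to the error; biased variance subtracts
        # (on biased points, disagreeing with the main prediction can help).
        net_var[j] = v if b == 0.0 else -v
    avg_loss = float(np.mean(predictions != y_true[np.newaxis, :]))
    return bias.mean(), var.mean(), net_var.mean(), avg_loss
```

In practice one would fill `predictions` by training, say, an SVM on each of many bootstrap samples (bagging) or on small independent subsamples (random aggregating) and comparing the resulting bias and variance terms.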