The accuracy, diversity, and learning characteristics of base learners critically influence the effectiveness of ensemble methods. Bias-variance decomposition of the error is a tool for gaining insight into the behavior of learning algorithms, and hence for designing ensemble methods well-tuned to the properties of a specific base learner. In this work we analyse the bias-variance decomposition of the error in Support Vector Machines (SVMs), characterizing it with respect to the kernel and its parameters. We show that the bias-variance decomposition offers a rationale for developing ensemble methods with SVMs as base learners, and we outline two directions for building SVM ensembles: one exploiting the bias characteristics of SVMs, and one exploiting the dependence of bias and variance on the kernel parameters.
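The decomposition described above can be estimated empirically. The sketch below is a minimal illustration (not the authors' experimental protocol), assuming scikit-learn's `SVC` and a synthetic dataset: it trains an RBF-kernel SVM on bootstrap replicates of the training set, takes the majority vote over replicates as the main prediction, and measures bias as the main prediction's 0/1 error and variance as the average disagreement of individual models with the main prediction.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic binary classification data (illustrative, not from the paper).
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_train, y_train = X[:400], y[:400]
X_test, y_test = X[400:], y[400:]

# Train one SVM per bootstrap replicate and collect test-set predictions.
n_boot = 30
preds = np.empty((n_boot, len(y_test)), dtype=int)
for b in range(n_boot):
    idx = rng.integers(0, len(y_train), size=len(y_train))  # bootstrap sample
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_train[idx], y_train[idx])
    preds[b] = clf.predict(X_test)

# Main prediction under 0/1 loss: majority vote over the bootstrap models.
main = (preds.mean(axis=0) >= 0.5).astype(int)

# Bias: error of the main prediction; variance: mean disagreement with it.
bias = np.mean(main != y_test)
variance = np.mean(preds != main)
print(f"bias={bias:.3f}  variance={variance:.3f}")
```

Repeating this estimate over a grid of kernel parameters (e.g. `C` and `gamma`) is one way to characterize how bias and variance depend on the kernel, which is the dependence the two ensemble-design directions above exploit.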