C4.5: Programs for Machine Learning.
The Nature of Statistical Learning Theory.
Training Invariant Support Vector Machines. Machine Learning.
A parallel mixture of SVMs for very large scale problems. Neural Computation.
Distributed learning with bagging-like performance. Pattern Recognition Letters.
CombNET-III with Nonlinear Gating Network and Its Application in Large-Scale Classification Problems. IEICE Transactions on Information and Systems.
Optimized fixed-size kernel models for large data sets. Computational Statistics & Data Analysis.
Collobert, Bengio, and Bengio (2002) recently introduced a novel approach in which a neural network combines the class predictions of an ensemble of support vector machines (SVMs). This approach has the advantage that the required computation scales well to very large data sets. Experiments on the Forest Cover data set show that this parallel mixture is more accurate than a single SVM, with 90.72% accuracy reported on an independent test set. Although this accuracy is impressive, their article does not consider alternative types of classifiers. We show that a simple ensemble of decision trees achieves higher accuracy, 94.75%, and is also computationally efficient. This result is somewhat surprising and illustrates the general value of experimental comparisons using different types of classifiers.
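To make the decision-tree baseline concrete, the following is a minimal sketch of a bagged ensemble of decision trees evaluated on the Forest Cover (covertype) data, assuming scikit-learn is available; the ensemble size, tree settings, and train/test split below are illustrative assumptions, not the exact configuration behind the accuracy figures reported above.

```python
# Minimal sketch: bagged decision trees on Forest Cover (covertype).
# Assumptions: scikit-learn is installed; 50 trees and an 80/20 split
# are illustrative choices, not the paper's reported setup.
from sklearn.datasets import fetch_covtype
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = fetch_covtype(return_X_y=True)  # 581,012 samples, 54 features, 7 classes
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Each tree is trained on a bootstrap sample of the training data;
# the ensemble predicts by majority vote over the individual trees.
ensemble = BaggingClassifier(DecisionTreeClassifier(),
                             n_estimators=50, n_jobs=-1, random_state=0)
ensemble.fit(X_train, y_train)
print(f"test accuracy: {ensemble.score(X_test, y_test):.4f}")
```

Bagging keeps each tree cheap to fit on its own bootstrap sample and the trees can be trained in parallel, which is what makes this kind of ensemble computationally attractive on a data set of this size.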