Ensemble methods, which train multiple learners for a task, are among the state-of-the-art learning approaches. The diversity of the component learners has been recognized as a key to a good ensemble, and existing ensemble methods encourage diversity in different ways, mostly through heuristics. In this paper, we propose the diversity regularized machine (DRM) in a mathematical programming framework, which efficiently generates an ensemble of diverse support vector machines (SVMs). Theoretical analysis shows that the diversity constraint used in DRM leads to an effective reduction of the hypothesis space complexity, implying that diversity control in ensemble methods indeed plays the role of regularization, as in popular statistical learning approaches. Experiments show that DRM can significantly improve generalization ability and is superior to several state-of-the-art SVM ensemble methods.
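The DRM of the paper is posed as a mathematical program over SVMs with an explicit diversity constraint; the full formulation is beyond an abstract. As a loose, illustrative sketch of the underlying idea only (not the authors' algorithm), the following trains linear SVMs sequentially by hinge-loss subgradient descent and adds a penalty that discourages each new weight vector from aligning with earlier members'. The function names, the squared-inner-product penalty, and all hyperparameters (`lam`, `mu`, `lr`, `epochs`) are assumptions made for this sketch.

```python
import numpy as np

def train_svm(X, y, prev_ws, lam=0.1, mu=0.05, lr=0.01, epochs=200, rng=None):
    """Train one linear SVM by subgradient descent on the regularized hinge
    loss, plus a penalty mu * (w . w_prev)^2 for each earlier member's weight
    vector -- a stand-in for DRM's diversity constraint (an assumption of
    this sketch, not the paper's exact formulation)."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = rng.normal(scale=0.01, size=d)
    for _ in range(epochs):
        margins = y * (X @ w)
        mask = margins < 1                       # margin-violating examples
        grad = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        for wp in prev_ws:                       # diversity penalty gradient
            grad += 2.0 * mu * (w @ wp) * wp
        w -= lr * grad
    return w

def drm_ensemble(X, y, T=5, seed=0):
    """Sequentially build T linear SVMs, each diversified against the
    previously trained members."""
    rng = np.random.default_rng(seed)
    ws = []
    for _ in range(T):
        ws.append(train_svm(X, y, ws, rng=rng))
    return ws

def predict(ws, X):
    """Majority vote over the signed outputs of all ensemble members."""
    votes = np.sign(np.stack([X @ w for w in ws]))
    return np.sign(votes.sum(axis=0))
```

With a small `mu` the members stay individually accurate while their weight vectors decorrelate; the paper's point is that such a diversity constraint shrinks the effective hypothesis space, acting as a regularizer rather than a mere heuristic.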