Exploiting Unlabeled Data to Enhance Ensemble Diversity
ICDM '10: Proceedings of the 2010 IEEE International Conference on Data Mining
Ensemble learning learns from training data by generating an ensemble of multiple base learners. It is well known that to construct a good ensemble with strong generalization ability, the base learners should be both accurate and diverse. In this paper, unlabeled data is exploited to facilitate ensemble learning by augmenting the diversity among the base learners. Specifically, a semi-supervised ensemble method named UDEED (Unlabeled Data to Enhance Ensemble Diversity) is proposed. In contrast to existing semi-supervised ensemble methods, which utilize unlabeled data by estimating error-prone pseudo-labels for it so as to enlarge the labeled training set and improve the base learners' accuracy, UDEED works by maximizing the accuracy of the base learners on labeled data while maximizing the diversity among them on unlabeled data. Extensive experiments on 20 regular-scale and 5 large-scale data sets are conducted under settings with either few or abundant labeled examples. Experimental results show that UDEED can effectively utilize unlabeled data for ensemble learning via diversity augmentation, and is highly competitive with well-established semi-supervised ensemble methods.
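The abstract describes an objective that couples a supervised accuracy term on labeled data with a diversity term on unlabeled data. Below is a minimal NumPy sketch of that idea, assuming logistic-regression base learners whose outputs are scaled to [-1, +1] and using average pairwise prediction agreement on unlabeled data as the (negated) diversity measure; the squared-error accuracy loss, the trade-off parameter `gamma`, and the numerical-gradient training loop are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def predict(W, X):
    """Base-learner outputs scaled to [-1, +1]; W has shape (m, d), X has shape (n, d)."""
    return 2.0 / (1.0 + np.exp(-X @ W.T)) - 1.0          # shape (n, m)

def udeed_style_loss(W, X_lab, y_lab, X_unlab, gamma=1.0):
    """Accuracy on labeled data plus gamma times pairwise agreement on unlabeled data.

    Lower pairwise agreement means more disagreement, i.e. more diversity,
    so minimizing this loss trades accuracy against diversity.
    """
    m = W.shape[0]
    F_lab = predict(W, X_lab)                             # (n_lab, m)
    # Accuracy term: squared error of each learner against labels in {-1, +1}.
    acc_loss = np.mean((F_lab - y_lab[:, None]) ** 2)
    # Diversity term: average product of predictions over all learner pairs.
    F_un = predict(W, X_unlab)                            # (n_unlab, m)
    G = (F_un.T @ F_un) / F_un.shape[0]                   # (m, m) agreement matrix
    pair_agreement = (G.sum() - np.trace(G)) / (m * (m - 1))
    return acc_loss + gamma * pair_agreement

def train(X_lab, y_lab, X_unlab, m=5, gamma=1.0, lr=0.1, steps=200, eps=1e-5):
    """Gradient descent with numerical gradients (for clarity only; a real
    implementation would use analytic gradients)."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(m, X_lab.shape[1]))
    for _ in range(steps):
        grad = np.zeros_like(W)
        for idx in np.ndindex(W.shape):
            W_p = W.copy(); W_p[idx] += eps
            W_m = W.copy(); W_m[idx] -= eps
            grad[idx] = (udeed_style_loss(W_p, X_lab, y_lab, X_unlab, gamma)
                         - udeed_style_loss(W_m, X_lab, y_lab, X_unlab, gamma)) / (2 * eps)
        W -= lr * grad
    return W

def ensemble_predict(W, X):
    """Final prediction: sign of the averaged base-learner outputs."""
    return np.sign(predict(W, X).mean(axis=1))
```

Note the design choice the sketch makes concrete: the unlabeled data never receives pseudo-labels; it only enters through the agreement matrix `G`, which pushes the base learners toward disagreeing with one another where no labels are available.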