Hierarchical mixtures of experts and the EM algorithm
Neural Computation
On the exponential value of labeled samples
Pattern Recognition Letters
Combining labeled and unlabeled data with co-training
COLT '98 Proceedings of the eleventh annual conference on Computational learning theory
The Random Subspace Method for Constructing Decision Forests
IEEE Transactions on Pattern Analysis and Machine Intelligence
Text Classification from Labeled and Unlabeled Documents using EM
Machine Learning - Special issue on information retrieval
Analyzing the effectiveness and applicability of co-training
Proceedings of the ninth international conference on Information and knowledge management
Enhancing Supervised Learning with Unlabeled Data
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
Finding Consistent Clusters in Data Partitions
MCS '01 Proceedings of the Second International Workshop on Multiple Classifier Systems
Multiclassifier Systems: Back to the Future
MCS '02 Proceedings of the Third International Workshop on Multiple Classifier Systems
Boosting, Bagging, and Consensus Based Classification of Multisource Remote Sensing Data
MCS '01 Proceedings of the Second International Workshop on Multiple Classifier Systems
Exploiting unlabeled data in ensemble methods
Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining
Experts' Boasting in Trainable Fusion Rules
IEEE Transactions on Pattern Analysis and Machine Intelligence
Exploitation of Unlabeled Sequences in Hidden Markov Models
IEEE Transactions on Pattern Analysis and Machine Intelligence
Combining Pattern Classifiers: Methods and Algorithms
Design and control of large collections of learning agents
Diverse ensembles for active learning
ICML '04 Proceedings of the twenty-first international conference on Machine learning
IEEE Transactions on Pattern Analysis and Machine Intelligence
MCS'03 Proceedings of the 4th international conference on Multiple classifier systems
Face recognition with semi-supervised learning and multiple classifiers
CIMMACS'06 Proceedings of the 5th WSEAS International Conference on Computational Intelligence, Man-Machine Systems and Cybernetics
Hybrid Hierarchical Classifiers for Hyperspectral Data Analysis
MCS '09 Proceedings of the 8th International Workshop on Multiple Classifier Systems
MCS '09 Proceedings of the 8th International Workshop on Multiple Classifier Systems
Random relevant and non-redundant feature subspaces for co-training
IDEAL'09 Proceedings of the 10th international conference on Intelligent data engineering and automated learning
Co-training with relevant random subspaces
Neurocomputing
Combining committee-based semi-supervised learning and active learning
Journal of Computer Science and Technology
A new co-training-style random forest for computer aided diagnosis
Journal of Intelligent Information Systems
Unsupervised Weight Parameter Estimation Method for Ensemble Learning
Journal of Mathematical Modelling and Algorithms
Using co-training and self-training in semi-supervised multiple classifier systems
SSPR'06/SPR'06 Proceedings of the 2006 joint IAPR international conference on Structural, Syntactic, and Statistical Pattern Recognition
Cost-sensitive classification with inadequate labeled data
Information Systems
Information Sciences: an International Journal
Multiple classifier systems were originally proposed for supervised classification tasks. In the five editions of the MCS workshop, most papers have dealt with design methods and applications of supervised multiple classifier systems. Recently, the use of multiple classifier systems has been extended to unsupervised classification tasks. Despite its practical relevance, however, semi-supervised classification, which exploits both labeled and unlabeled data, has received little attention, and only a few works on semi-supervised multiple classifiers have appeared in the machine learning literature. The goal of this paper is to review the background results that can be exploited to promote research on semi-supervised multiple classifier systems, and to outline some future research directions.
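To make the topic of the abstract concrete, here is a minimal, hedged sketch of committee-based self-training, one of the simplest semi-supervised multiple-classifier schemes discussed in this literature: a committee of base classifiers trained on the labeled data pseudo-labels the unlabeled points on which it votes unanimously, then retrains. The toy 2-D data, the one-feature threshold classifiers, and the unanimity rule are illustrative assumptions, not constructions from the paper.

```python
# Sketch of committee-based self-training (semi-supervised MCS).
# Assumptions: toy 2-D data, 1-D threshold base classifiers, and a
# unanimous-vote pseudo-labeling rule; none of these come from the paper.

def train_threshold_clf(points, labels, feature):
    """Fit a 1-D threshold classifier on a single feature (a weak 'view')."""
    pos = [p[feature] for p, y in zip(points, labels) if y == 1]
    neg = [p[feature] for p, y in zip(points, labels) if y == 0]
    thr = (min(pos) + max(neg)) / 2.0  # midpoint between the two classes
    return lambda p: 1 if p[feature] > thr else 0

def self_train(labeled, labels, unlabeled, rounds=3):
    labeled, labels, unlabeled = list(labeled), list(labels), list(unlabeled)
    for _ in range(rounds):
        # Committee: one threshold classifier per feature.
        committee = [train_threshold_clf(labeled, labels, f) for f in range(2)]
        still_unlabeled = []
        for p in unlabeled:
            votes = [clf(p) for clf in committee]
            if len(set(votes)) == 1:      # unanimous vote -> pseudo-label
                labeled.append(p)
                labels.append(votes[0])
            else:                          # disagreement -> keep unlabeled
                still_unlabeled.append(p)
        unlabeled = still_unlabeled
    return labeled, labels

# Toy data: class 1 in the upper-right corner, class 0 in the lower-left.
L = [(0.1, 0.2), (0.9, 0.8)]
y = [0, 1]
U = [(0.8, 0.9), (0.2, 0.1), (0.7, 0.95)]
Xl, yl = self_train(L, y, U)
```

Here all three unlabeled points receive unanimous votes in the first round, so the labeled set grows from 2 to 5 examples; in realistic use the confidence criterion and base classifiers would be far stronger, but the loop structure is the same.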