Semi-supervised learning has attracted significant attention in pattern recognition and machine learning. Most previous studies have focused on designing special algorithms to effectively exploit unlabeled data in conjunction with labeled data. Our goal is to improve the classification accuracy of any given supervised learning algorithm by using the available unlabeled examples. We call this the semi-supervised improvement problem, to distinguish the proposed approach from existing approaches. We design a meta-semi-supervised learning algorithm that wraps around the underlying supervised algorithm and improves its performance using unlabeled data. This problem is particularly important when a supervised learning algorithm must be trained with a limited number of labeled examples and a multitude of unlabeled examples. We present a boosting framework for semi-supervised learning, termed SemiBoost. The key advantages of the proposed semi-supervised learning approach are: 1) it improves the performance of any supervised learning algorithm given a multitude of unlabeled data, 2) it is computationally efficient owing to the iterative boosting procedure, and 3) it exploits both the manifold and the cluster assumptions when training classification models. An empirical study on 16 different data sets and on text categorization demonstrates that the proposed framework improves the performance of several commonly used supervised learning algorithms when a large number of unlabeled examples is available. We also show that the performance of the proposed algorithm, SemiBoost, is comparable to state-of-the-art semi-supervised learning algorithms.
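The wrapper idea described above — iteratively pseudo-labeling the most confidently inferred unlabeled examples (using pairwise similarity plus the current ensemble) and boosting a base supervised learner on them — can be sketched as follows. This is a minimal illustration of the SemiBoost-style loop under simplifying assumptions (binary labels in {-1, +1}, an RBF similarity matrix, a decision tree as the wrapped base learner, and simplified confidence and weight formulas), not the authors' exact algorithm; the function name `semiboost_sketch` and all parameters are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics.pairwise import rbf_kernel

def semiboost_sketch(X_l, y_l, X_u, T=10, sample_size=10, sigma=1.0):
    """SemiBoost-style wrapper sketch: boost a supervised base learner
    by pseudo-labeling confident unlabeled points. y_l in {-1, +1}."""
    X = np.vstack([X_l, X_u])
    # Similarity between all examples encodes the manifold/cluster structure.
    S = rbf_kernel(X, X, gamma=1.0 / (2 * sigma ** 2))
    n_l = len(X_l)
    ensemble, alphas = [], []

    def H(Xq):
        # Current weighted ensemble prediction (0 before the first round).
        if not ensemble:
            return np.zeros(len(Xq))
        return sum(a * clf.predict(Xq) for a, clf in zip(alphas, ensemble))

    for _ in range(T):
        Hu = H(X_u)
        S_lu = S[:n_l, n_l:]
        # p / q: evidence that each unlabeled point is +1 / -1, combining
        # similarity to labeled examples with the current ensemble output.
        p = (S_lu * (y_l == 1)[:, None] * np.exp(-2 * Hu)).sum(axis=0)
        q = (S_lu * (y_l == -1)[:, None] * np.exp(2 * Hu)).sum(axis=0)
        pseudo, conf = np.sign(p - q), np.abs(p - q)
        # Pseudo-label only the most confident unlabeled examples.
        top = np.argsort(-conf)[:sample_size]
        X_t = np.vstack([X_l, X_u[top]])
        y_t = np.concatenate([y_l, pseudo[top]])
        clf = DecisionTreeClassifier(max_depth=3).fit(X_t, y_t)
        # Weight the new classifier by its confidence-weighted error on
        # the pseudo-labels (a simplified stand-in for the paper's weight).
        h_u = clf.predict(X_u)
        eps = conf[pseudo != h_u].sum() / max(conf.sum(), 1e-12)
        if eps >= 0.5:
            break
        alphas.append(0.25 * np.log((1 - eps) / max(eps, 1e-12)))
        ensemble.append(clf)

    return lambda Xq: np.sign(H(Xq))
```

Note how the confidence terms couple the two assumptions the abstract mentions: the similarity matrix enforces that nearby points (manifold/cluster structure) receive consistent pseudo-labels, while the `exp(±2·Hu)` factors, as in boosting, emphasize points the current ensemble is least sure about.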