The nature of statistical learning theory
Combining labeled and unlabeled data with co-training
COLT '98 Proceedings of the eleventh annual conference on Computational learning theory
Solving the quadratic programming problem arising in support vector classification
Advances in kernel methods
SSVM: A Smooth Support Vector Machine for Classification
Computational Optimization and Applications
A Tutorial on Support Vector Machines for Pattern Recognition
Data Mining and Knowledge Discovery
Sparse Greedy Matrix Approximation for Machine Learning
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
Learning from Labeled and Unlabeled Data using Graph Mincuts
ICML '01 Proceedings of the Eighteenth International Conference on Machine Learning
What Energy Functions Can Be Minimized via Graph Cuts?
IEEE Transactions on Pattern Analysis and Machine Intelligence
Tri-Training: Exploiting Unlabeled Data Using Three Classifiers
IEEE Transactions on Knowledge and Data Engineering
Semi-supervised learning with graphs
Building Projectable Classifiers of Arbitrary Complexity
ICPR '96 Proceedings of the 13th International Conference on Pattern Recognition - Volume 2
Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples
The Journal of Machine Learning Research
Robust self-tuning semi-supervised learning
Neurocomputing
Active concept learning in image databases
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Reduced Support Vector Machines: A Statistical Theory
IEEE Transactions on Neural Networks
Support vector machines training data selection using a genetic algorithm
SSPR'12/SPR'12 Proceedings of the 2012 Joint IAPR international conference on Structural, Syntactic, and Statistical Pattern Recognition
Semi-supervised change detection using modified self-organizing feature map neural network
Applied Soft Computing
Based on the reduced support vector machine (RSVM), we propose a multi-view algorithm for semi-supervised learning: two-teachers-one-student (2T1S). Unlike typical multi-view methods, with RSVM the different views arise from reduced sets in the represented kernel feature space rather than in the input space. Because no label information is needed to select reduced sets, RSVM can be applied to semi-supervised learning. Our algorithm blends the concepts of co-training and consensus training. Through co-training, the classifiers built from two views "teach" the classifier from the remaining view, and this process is repeated for every teachers-student combination. Through consensus training, agreement among predictions from more than one view gives higher confidence when labeling unlabeled data. The results show that the proposed 2T1S achieves high cross-validation accuracy, even compared to training with all the label information available.
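The teaching loop described above can be sketched as follows. This is a minimal illustration, not the paper's method: the `CentroidClassifier` base learner, the 0.8 confidence threshold, and the use of plain input-space feature splits as the three views are all assumptions standing in for the RSVM reduced-set construction.

```python
import numpy as np

class CentroidClassifier:
    """Tiny probabilistic base learner; a stand-in for a per-view RSVM."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict_proba(self, X):
        # Softmax over negative distances to the class centroids.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        p = np.exp(-d)
        return p / p.sum(axis=1, keepdims=True)

    def predict(self, X):
        return self.classes_[self.predict_proba(X).argmax(axis=1)]

def two_teachers_one_student(labeled_views, y, unlabeled_views,
                             rounds=3, threshold=0.8):
    """Sketch of 2T1S: each pair of views teaches the remaining view.

    labeled_views / unlabeled_views: lists of three feature matrices,
    one per view (here simple feature splits, an illustrative assumption).
    """
    clfs = [CentroidClassifier().fit(Xv, y) for Xv in labeled_views]
    pools = [(Xv.copy(), y.copy()) for Xv in labeled_views]

    for _ in range(rounds):
        for student in range(3):
            t1, t2 = [t for t in range(3) if t != student]
            p1 = clfs[t1].predict_proba(unlabeled_views[t1])
            p2 = clfs[t2].predict_proba(unlabeled_views[t2])
            # Consensus: both teachers must agree and be confident.
            agree = ((p1.argmax(axis=1) == p2.argmax(axis=1))
                     & (p1.max(axis=1) > threshold)
                     & (p2.max(axis=1) > threshold))
            if not agree.any():
                continue
            pseudo = clfs[t1].classes_[p1.argmax(axis=1)[agree]]
            Xs, ys = pools[student]
            # Retrain the student on labeled data plus the consensus labels.
            Xs = np.vstack([Xs, unlabeled_views[student][agree]])
            ys = np.concatenate([ys, pseudo])
            pools[student] = (Xs, ys)
            clfs[student] = CentroidClassifier().fit(Xs, ys)
    return clfs
```

Each of the three teachers-student combinations is visited in turn, so every view's classifier eventually learns from the consensus of the other two, mirroring the co-training plus consensus-training blend described in the abstract.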