Efficient similarity derived from kernel-based transition probability
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part VI
To improve classification performance at low cost, it is necessary to exploit both labeled and unlabeled samples by applying semi-supervised learning methods, most of which are built upon pair-wise similarities between the samples. Whereas such similarities have so far been formulated heuristically, for example via k-NN, we propose methods to construct similarities from a probabilistic viewpoint. We first propose a kernel-based formulation of a transition probability, derived by comparing kernel least squares with variational least squares in a probabilistic framework. The formulation results in a simple quadratic program into which constraints can flexibly be introduced to improve practical robustness, and which is solved efficiently by SMO. The kernel-based transition probability is naturally sparse even without applying k-NN, and it induces a similarity measure with the same sparsity. In addition, to cope with multiple types of kernel functions, the transition probabilities obtained from the respective kernels can be probabilistically integrated using prior probabilities represented as linear weights. We propose a computationally efficient method to optimize these weights in a discriminative manner. The optimized weights directly yield a composite similarity measure and can also be used to combine the kernels themselves, as in multiple kernel learning, which in turn gives rise to various types of multiple-kernel semi-supervised classification methods. In experiments on semi-supervised classification tasks, the proposed methods compare favorably with existing methods in both classification accuracy and computation time.
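The sketch below illustrates the general flavor of this approach, not the authors' exact formulation: each row of a transition matrix is obtained by solving a small constrained quadratic program in the kernel-induced space (non-negativity plus a simplex constraint, which tends to yield sparse rows), and the resulting row-stochastic matrix is then used for standard label propagation. The RBF kernel, the per-row QP objective, the SLSQP solver (in place of the paper's SMO), and the propagation parameter alpha are all assumptions made for illustration.

```python
# Hedged sketch: kernel-based transition probabilities for label propagation.
# Not the paper's exact QP or solver; it only illustrates deriving a sparse,
# row-stochastic transition matrix from a kernel and propagating labels with it.

import numpy as np
from scipy.optimize import minimize


def rbf_kernel(X, gamma=1.0):
    """Gram matrix of an RBF kernel (one possible kernel choice)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)


def transition_matrix(K):
    """For each sample i, solve a small QP over p >= 0, sum(p) = 1 that
    reconstructs phi(x_i) from the other samples in the kernel space:
        min_p  p^T K_{-i,-i} p - 2 p^T k_i
    The simplex constraint tends to give sparse rows without using k-NN."""
    n = K.shape[0]
    P = np.zeros((n, n))
    for i in range(n):
        idx = np.delete(np.arange(n), i)
        Ki = K[np.ix_(idx, idx)]
        ki = K[idx, i]
        fun = lambda p: p @ Ki @ p - 2.0 * ki @ p
        jac = lambda p: 2.0 * Ki @ p - 2.0 * ki
        cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0}]
        p0 = np.full(n - 1, 1.0 / (n - 1))
        res = minimize(fun, p0, jac=jac, bounds=[(0.0, 1.0)] * (n - 1),
                       constraints=cons, method="SLSQP")
        P[i, idx] = np.clip(res.x, 0.0, None)
        P[i] /= P[i].sum()
    return P


def propagate_labels(P, Y, alpha=0.9):
    """Standard label propagation with a row-stochastic transition matrix P.
    Y: (n, c) one-hot labels, all-zero rows for unlabeled samples."""
    n = P.shape[0]
    F = np.linalg.solve(np.eye(n) - alpha * P, (1.0 - alpha) * Y)
    return F.argmax(axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
    Y = np.zeros((40, 2))
    Y[0, 0] = 1.0   # one labeled sample per class
    Y[20, 1] = 1.0
    P = transition_matrix(rbf_kernel(X, gamma=2.0))
    print(propagate_labels(P, Y))
```

For multiple kernels, one would compute a transition matrix per kernel and combine them with non-negative weights summing to one (the priors mentioned above); the paper optimizes those weights discriminatively, whereas a simple baseline is to fix them uniformly.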