In this paper we show that semi-supervised learning with two input sources can be transformed into a maximum margin problem analogous to a binary support vector machine. Our formulation exploits the unlabeled data to reduce the complexity of the class of learning functions, and we quantify this reduction using Rademacher complexity theory. The corresponding optimization problem is convex and can be solved efficiently even for large-scale applications.
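The idea above can be illustrated with a minimal sketch: a max-margin objective over two views, with a co-regularization term that penalizes disagreement between the two linear classifiers on the unlabeled points. This is an illustrative assumption, not the paper's exact formulation; the synthetic data, hyperparameters, and the simple subgradient solver are all stand-ins for the convex program described in the paper.

```python
import numpy as np

# Hedged sketch of a two-view, max-margin semi-supervised objective:
#   sum of hinge losses on labeled points (both views)
#   + lam * (||w1||^2 + ||w2||^2)            (margin regularization)
#   + mu  * mean (w1.x1_u - w2.x2_u)^2       (agreement on unlabeled data)
# Solved here by plain subgradient descent for illustration only.

rng = np.random.default_rng(0)
n_lab, n_unlab, d = 20, 200, 5

# Synthetic two-view data: both views are noisy copies of latent points z,
# whose label is (mostly) the sign of the first latent coordinate.
z = rng.normal(size=(n_lab + n_unlab, d))
y_all = np.sign(z[:, 0] + 0.1 * rng.normal(size=len(z)))
X1 = z + 0.3 * rng.normal(size=z.shape)  # view 1
X2 = z + 0.3 * rng.normal(size=z.shape)  # view 2

Xl1, Xl2, y = X1[:n_lab], X2[:n_lab], y_all[:n_lab]
Xu1, Xu2 = X1[n_lab:], X2[n_lab:]

w1, w2 = np.zeros(d), np.zeros(d)
lam, mu, lr = 1e-2, 1e-2, 0.05
for _ in range(500):
    # Subgradient of the hinge loss on labeled points, per view.
    viol1 = (y * (Xl1 @ w1) < 1).astype(float)
    viol2 = (y * (Xl2 @ w2) < 1).astype(float)
    g1 = -(Xl1 * (y * viol1)[:, None]).mean(0) + 2 * lam * w1
    g2 = -(Xl2 * (y * viol2)[:, None]).mean(0) + 2 * lam * w2
    # Gradient of the squared disagreement on unlabeled points.
    diff = Xu1 @ w1 - Xu2 @ w2
    g1 += 2 * mu * (Xu1 * diff[:, None]).mean(0)
    g2 -= 2 * mu * (Xu2 * diff[:, None]).mean(0)
    w1 -= lr * g1
    w2 -= lr * g2

# Predict on the unlabeled pool by averaging the two views' scores.
pred = np.sign(Xu1 @ w1 + Xu2 @ w2)
acc = (pred == y_all[n_lab:]).mean()
print("agreement-regularized accuracy:", round(acc, 2))
```

The agreement term is where the unlabeled data enters: it shrinks the effective function class to pairs of classifiers that are consistent across views, which is the mechanism whose effect the paper measures via Rademacher complexity.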