Transductive Inference for Text Classification using Support Vector Machines
ICML '99 Proceedings of the Sixteenth International Conference on Machine Learning
Beyond the point cloud: from transductive to semi-supervised learning
ICML '05 Proceedings of the 22nd International Conference on Machine Learning
Large scale semi-supervised linear SVMs
SIGIR '06 Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
Semi-Supervised Learning (Adaptive Computation and Machine Learning)
The Journal of Machine Learning Research
Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples
The Journal of Machine Learning Research
Large-Scale Clustering through Functional Embedding
ECML PKDD '08 Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases - Part II
Keepin' it real: semi-supervised learning with realistic tuning
SemiSupLearn '09 Proceedings of the NAACL HLT 2009 Workshop on Semi-Supervised Learning for Natural Language Processing
Efficient large-scale image annotation by probabilistic collaborative multi-label propagation
Proceedings of the International Conference on Multimedia
Semi-Supervised Learning with Measure Propagation
The Journal of Machine Learning Research
Large-scale multilabel propagation based on efficient sparse graph construction
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)
Local learning integrating global structure for large scale semi-supervised classification
Computers & Mathematics with Applications
We show how the regularizer of Transductive Support Vector Machines (TSVM) can be trained by stochastic gradient descent for linear models and multi-layer architectures. The resulting methods can be trained online, have vastly superior training and testing speed to existing TSVM algorithms, can encode prior knowledge in the network architecture, and obtain competitive error rates. We then go on to propose a natural generalization of the TSVM loss function that takes into account neighborhood and manifold information directly, unifying the two-stage Low Density Separation method into a single criterion, and leading to state-of-the-art results.
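The core idea in the abstract — optimizing a TSVM-style objective with stochastic gradient descent on a linear model — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name `tsvm_sgd`, the hyperparameters, and the sampling scheme are assumptions. The labeled term is the standard hinge loss; the unlabeled term is the symmetric hinge max(0, 1 - |w·x|), which pushes the decision boundary into low-density regions, and SGD processes one (labeled or unlabeled) example per step, so training is online.

```python
import numpy as np

def tsvm_sgd(X_lab, y_lab, X_unl, lam=0.01, lam_u=0.1,
             lr=0.05, epochs=200, seed=0):
    """SGD sketch of a linear Transductive SVM objective.

    Labeled term:   max(0, 1 - y * (w @ x))          (hinge loss)
    Unlabeled term: max(0, 1 - |w @ x|)              (symmetric hinge)
    Both are combined with an L2 penalty lam * ||w||^2 / 2.
    Hyperparameter values here are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X_lab.shape[1])
    n_l, n_u = len(X_lab), len(X_unl)
    for _ in range(epochs):
        for _ in range(n_l + n_u):
            if rng.random() < n_l / (n_l + n_u):
                # Step on a random labeled example.
                i = rng.integers(n_l)
                x, y = X_lab[i], y_lab[i]
                if y * (w @ x) < 1:          # inside the margin
                    w += lr * (y * x - lam * w)
                else:
                    w -= lr * lam * w
            else:
                # Step on a random unlabeled example.
                x = X_unl[rng.integers(n_u)]
                s = w @ x
                if abs(s) < 1:               # near the boundary: push away
                    w += lr * (lam_u * np.sign(s) * x - lam * w)
                else:
                    w -= lr * lam * w
    return w
```

With two well-separated clusters and one labeled point per class, the symmetric hinge on the unlabeled cloud keeps the learned boundary in the gap between clusters rather than letting it cut through either one.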