Graph-based semi-supervised learning (SSL) methods play an increasingly important role in practical machine learning systems, particularly in agnostic settings where no parametric information or other prior knowledge is available about the data distribution. Given the constructed graph represented by a weight matrix, transductive inference is used to propagate known labels to predict the values of all unlabeled vertices. Designing a robust label diffusion algorithm for such graphs is a widely studied problem, and various methods have recently been suggested. Many of these can be formalized as regularized function estimation through the minimization of a quadratic cost. However, most existing label diffusion methods minimize a univariate cost with the classification function as the only variable of interest. Since the observed labels seed the diffusion process, such univariate frameworks are extremely sensitive to the initial label choice and to any label noise. To alleviate the dependency on the initial observed labels, this article proposes a bivariate formulation for graph-based SSL in which both the binary label information and a continuous classification function are arguments of the optimization. This bivariate formulation is shown to be equivalent to a linearly constrained Max-Cut problem. Finally, an efficient solution via greedy gradient Max-Cut (GGMC) is derived, which gradually assigns unlabeled vertices to each class with minimum connectivity. After convergence guarantees are established, this greedy Max-Cut-based SSL is applied to both artificial and standard benchmark data sets, where it obtains superior classification accuracy compared to existing state-of-the-art SSL methods. Moreover, GGMC is robust with respect to the graph construction method, maintaining high accuracy over extensive experiments with various edge-linking and weighting schemes.
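The greedy label-assignment loop described above can be sketched in a few lines. The sketch below is a hypothetical minimal illustration on the raw similarity graph, stated in the equivalent min-cut view (assign each unlabeled vertex to the labeled class it is most strongly connected to, which greedily minimizes the cut); it is not the authors' GGMC implementation, which operates on the transformed graph of the constrained Max-Cut formulation. The function name `greedy_cut_ssl`, its arguments, and the toy weight matrix are assumptions introduced for illustration only.

```python
import numpy as np

def greedy_cut_ssl(W, seed_labels, n_classes):
    """Illustrative greedy graph-cut label assignment (a sketch, not the
    paper's exact GGMC procedure): repeatedly pick the unlabeled vertex
    most strongly connected to any labeled class and assign it to that
    class, greedily minimizing the cut on the similarity graph."""
    y = np.asarray(seed_labels, dtype=int).copy()  # -1 marks unlabeled vertices
    unlabeled = set(np.flatnonzero(y == -1))
    while unlabeled:
        best = (-1.0, None, None)                  # (connectivity, vertex, class)
        for v in unlabeled:
            # total edge weight from vertex v to each currently labeled class
            conn = [W[v, y == c].sum() for c in range(n_classes)]
            c = int(np.argmax(conn))
            if conn[c] > best[0]:
                best = (conn[c], v, c)
        _, v, c = best
        y[v] = c                                   # commit the greedy choice
        unlabeled.remove(v)
    return y

# Toy example: two clusters {0,1} and {2,3} with strong intra-cluster edges,
# one labeled seed per cluster; the two remaining vertices inherit the
# label of their cluster's seed.
W = np.array([[0.0, 5.0, 0.1, 0.0],
              [5.0, 0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0, 5.0],
              [0.0, 0.1, 5.0, 0.0]])
labels = greedy_cut_ssl(W, [0, -1, 1, -1], n_classes=2)
print(labels.tolist())  # → [0, 0, 1, 1]
```

Because each iteration commits the single most confident assignment before recomputing class connectivities, later vertices benefit from earlier assignments, which mirrors the "gradual" diffusion the abstract describes.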