Feature selection in the Laplacian support vector machine
Computational Statistics & Data Analysis
In classification, semisupervised learning usually involves a large amount of unlabeled data together with only a small number of labeled data. This poses a great challenge, since it is difficult to achieve good classification performance from the labeled data alone. To leverage unlabeled data for enhancing classification, this article introduces a large margin semisupervised learning method within the framework of regularization, based on an efficient margin loss for unlabeled data, which seeks to extract from the unlabeled data the information relevant to estimating the Bayes decision boundary. For implementation, an iterative scheme is derived through conditional expectations. Finally, theoretical and numerical analyses are conducted, in addition to an application to gene function prediction. They suggest that, when possible, the proposed method recovers the rate of convergence of its supervised counterpart trained on completely labeled data.
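As a rough illustration of the kind of iterative scheme the abstract describes (not the paper's exact algorithm), the following hypothetical sketch alternates between imputing surrogate labels for the unlabeled points with the current large-margin classifier and refitting the classifier on the combined data. The function name `iterative_margin_ssl` and the choice of a linear SVM are assumptions for the example.

```python
import numpy as np
from sklearn.svm import LinearSVC

def iterative_margin_ssl(X_lab, y_lab, X_unlab, n_iter=5, C=1.0):
    """Hypothetical sketch of an iterative large-margin semi-supervised scheme.

    Starting from a classifier fit on the labeled data alone, each pass
    imputes labels for the unlabeled points (a crude stand-in for the
    conditional-expectation step) and refits the margin-based classifier
    on the labeled plus pseudo-labeled data.
    """
    clf = LinearSVC(C=C).fit(X_lab, y_lab)
    for _ in range(n_iter):
        pseudo = clf.predict(X_unlab)           # surrogate labels for unlabeled data
        X_all = np.vstack([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, pseudo])
        clf = LinearSVC(C=C).fit(X_all, y_all)  # refit large-margin classifier
    return clf
```

On well-separated data with only a handful of labels, such a loop can push the decision boundary toward the low-density region between classes, which is the intuition behind exploiting unlabeled data via a margin loss.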