In supervised classification, we learn a decision rule from a training set of labeled observations and use it to classify all unlabeled test cases. If the training sample is small, however, it may not carry enough information to build a good classifier, and this problem is more pronounced for nonparametric classification because of the statistical instability of nonparametric methods. In such cases, extracting useful information from the unlabeled test cases as well, and using it to modify the classification rule, can improve the resulting classifier substantially. In this article, we use a probabilistic framework to develop such methods for nearest neighbor classification. The resulting classifiers, called semi-supervised or transductive classifiers, usually outperform supervised methods, especially when the training sample is small. Several benchmark data sets are analyzed to demonstrate the utility of the proposed methods.
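The general idea the abstract describes, letting unlabeled test cases feed back into the nearest neighbor rule, can be illustrated with a minimal self-training sketch. This is an illustrative stand-in, not the authors' probabilistic method: it repeatedly assigns the most confidently predicted unlabeled point a label and adds it to the training set, so later predictions can draw on earlier ones. All function names here are hypothetical.

```python
import math
from collections import Counter

def knn_predict(train, point, k=3):
    """Return (label, confidence) for `point` by majority vote among
    its k nearest labeled neighbors under Euclidean distance.
    `train` is a list of ((x, y, ...), label) pairs."""
    neighbors = sorted(train, key=lambda t: math.dist(t[0], point))[:k]
    votes = Counter(label for _, label in neighbors)
    label, count = votes.most_common(1)[0]
    return label, count / k  # vote fraction as a crude confidence

def self_training_knn(labeled, unlabeled, k=3):
    """Transductive-style classification (illustrative only): greedily
    label the unlabeled point with the highest vote confidence, absorb
    it into the training set, and repeat until all points are labeled."""
    train = list(labeled)
    pending = list(unlabeled)
    results = {}
    while pending:
        preds = [(knn_predict(train, p, k), p) for p in pending]
        (label, _conf), best = max(preds, key=lambda x: x[0][1])
        train.append((best, label))  # the new label reshapes later votes
        results[best] = label
        pending.remove(best)
    return results
```

With only a few labeled points per class, points labeled early in the loop act as extra training data for later, harder points, which is the gain over running plain k-NN on each test case independently; the probabilistic framework in the article plays an analogous role in a more principled way.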