We present a novel semi-supervised classifier model based on paths between unlabeled and labeled data through a sequence of local pattern transformations. A reliable measure of path length is proposed that combines a local dissimilarity measure between consecutive patterns along a path with a global, connectivity-based metric. We apply this model to problems of object recognition, for which we propose a practical classification algorithm based on sequences of "Connected Image Transformations" (CIT). Experimental results on four popular image benchmarks demonstrate that the proposed CIT classifier outperforms state-of-the-art semi-supervised techniques. The results are particularly significant when only a very small number of labeled patterns is available: the proposed algorithm obtains a generalization error of 4.57% on the MNIST data set trained on 2000 randomly chosen patterns with only 10 labeled patterns per digit class.
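The general idea of scoring paths from labeled to unlabeled points by combining per-hop dissimilarities with a connectivity-based criterion can be illustrated with a minimal sketch. This is not the authors' CIT algorithm: it assumes a k-nearest-neighbour graph with Euclidean hop costs, and uses the minimax path cost (the largest single hop along a path) as one common example of a connectivity-based path metric, propagated from the labeled points with a Dijkstra-style search.

```python
import heapq
import math

def minimax_label_propagation(points, labeled, k=2):
    """Assign each point the label of the labeled point reachable via the
    path whose largest single hop is smallest (minimax path cost).

    points  : list of coordinate tuples
    labeled : dict {point index: class label}
    k       : neighbours per node in the kNN graph (illustrative choice)
    """
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    # Directed kNN graph: each point links to its k closest other points.
    nbrs = [sorted(range(n), key=lambda j: dist(i, j))[1:k + 1]
            for i in range(n)]
    # Multi-source Dijkstra variant where the cost of a path is the
    # maximum edge weight on it, not the sum (connectivity-based metric).
    heap = [(0.0, i, lab) for i, lab in labeled.items()]
    best = {}  # point index -> (minimax cost, label)
    while heap:
        cost, i, lab = heapq.heappop(heap)
        if i in best:
            continue
        best[i] = (cost, lab)
        for j in nbrs[i]:
            if j not in best:
                heapq.heappush(heap, (max(cost, dist(i, j)), j, lab))
    return {i: lab for i, (_, lab) in best.items()}

# Two well-separated 1-D clusters, one labeled point per cluster.
pts = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0),
       (5.0, 0.0), (5.5, 0.0), (6.0, 0.0)]
result = minimax_label_propagation(pts, labeled={0: "A", 5: "B"})
```

Because the minimax cost penalizes any large jump between consecutive patterns, unlabeled points inherit the label of the cluster they are densely connected to, which is the intuition behind preferring paths of many small transformations over one large one.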