Many visual learning tasks face a common difficulty: a shortage of supervised information, because labeling can be tedious, expensive, or even impossible. This scarcity makes it challenging to learn object concepts from images. The problem can be alleviated by training on a hybrid of labeled and unlabeled data. Since the unlabeled data characterize the joint probability distribution across features, they can be used to boost weak classifiers by discovering discriminating features in a self-supervised fashion. Discriminant-EM (D-EM) attacks such problems by integrating discriminant analysis into the EM framework. Both linear and nonlinear variants are investigated in this paper. Built on kernel multiple discriminant analysis (KMDA), the nonlinear D-EM is better able to simplify the probabilistic structure of the data distribution in a discrimination space. We also propose a novel data-sampling scheme for efficient learning of kernel discriminants. Our experimental results show that D-EM outperforms a variety of supervised and semi-supervised learning algorithms on many visual learning tasks, such as content-based image retrieval and invariant object recognition.
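The core semi-supervised EM idea, where labeled examples keep their fixed class assignments while unlabeled examples receive soft responsibilities that are re-estimated each iteration, can be sketched in a toy form. The sketch below is not the paper's D-EM: the discriminant (KMDA) projection step is omitted by working directly in one dimension, and the function names `dem_1d` and `classify` are hypothetical.

```python
import math
import random

def dem_1d(labeled, unlabeled, iters=30):
    """Semi-supervised EM for two 1-D Gaussian classes.

    labeled: list of (x, y) pairs with y in {0, 1}; unlabeled: list of x.
    Labeled points contribute hard (fixed) responsibilities; unlabeled
    points get soft responsibilities recomputed in each E-step.
    Toy stand-in for D-EM, with the discriminant projection omitted.
    """
    mu, var, pi = [0.0, 0.0], [1.0, 1.0], [0.5, 0.5]
    # Initialise each class mean from its labeled examples.
    for k in (0, 1):
        xs = [x for x, y in labeled if y == k]
        mu[k] = sum(xs) / len(xs)
    for _ in range(iters):
        # E-step: soft responsibilities for unlabeled data only.
        resp = []
        for x in unlabeled:
            p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: pool hard labels with soft responsibilities.
        xs = [x for x, _ in labeled] + unlabeled
        for k in (0, 1):
            w = [1.0 if y == k else 0.0 for _, y in labeled] + [r[k] for r in resp]
            n = sum(w)
            mu[k] = sum(wi * xi for wi, xi in zip(w, xs)) / n
            var[k] = sum(wi * (xi - mu[k]) ** 2
                         for wi, xi in zip(w, xs)) / n + 1e-6
            pi[k] = n / len(xs)
    return mu

def classify(x, mu):
    # Nearest-mean decision rule on the fitted class means.
    return 0 if abs(x - mu[0]) < abs(x - mu[1]) else 1
```

With only four labeled points and a pool of unlabeled samples drawn around the two class centers, the unlabeled data pull the estimated means toward the true ones, which is the "boost weak classifiers" effect the abstract describes.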