In this work, we address the following multi-class inference problem: given groups of unlabeled samples, a reliable multi-class classifier is expected to deterministically predict the label of each sample when only the class-proportion information of each group is provided. Many modern applications can be abstracted to this problem, e.g., large-scale image annotation, spam filtering, and improper-content detection, where the class proportions of samples can be obtained cheaply while sample-wise labeling is prohibitive or quite hard. Despite its practical importance, this problem has not been thoroughly investigated in previous work. The main challenge lies in the severe under-determinacy of the problem itself. In this paper, we propose to exploit the natural sparsity of labels to alleviate this issue, and formulate classifier learning as a sparsity-pursuit problem over a standard simplex. Moreover, since the popular ℓ1-relaxation is inapplicable in this setting, we propose an optimization method that directly tackles the hard sparsity constraint, i.e., the ℓ0-constraint, based on the Augmented Lagrangian Multiplier (ALM) framework, which provides a global convergence guarantee. Notably, our solution can not only directly predict the labels of the training and new samples, but also gracefully exploit the test samples to further boost classification performance in a semi-supervised manner. Experimental results on two benchmark datasets validate the effectiveness of the proposed method.
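To give a concrete feel for the constraint set the abstract describes, the sketch below projects a vector onto the intersection of the standard probability simplex and an ℓ0 ball (at most k nonzero entries). This is only an illustrative building block, not the authors' ALM algorithm: the function names, the greedy keep-top-k strategy, and the use of the Euclidean simplex projection are all our assumptions for the sake of the example; the paper's full method additionally couples such a constraint with group-wise class-proportion data.

```python
import numpy as np


def project_simplex(v):
    """Euclidean projection of v onto the standard simplex {x >= 0, sum(x) = 1}.

    Uses the classical sort-and-threshold scheme: sort descending, find the
    largest prefix whose shifted average stays positive, then clip.
    """
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    # Largest index rho with u[rho] * (rho + 1) > css[rho] - 1.
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)


def project_sparse_simplex(v, k):
    """Approximate projection onto {x in simplex, ||x||_0 <= k}.

    Greedy heuristic: keep the k largest entries of v (hard thresholding),
    then project that support onto the simplex. The zeros elsewhere make the
    hard ell_0 constraint hold by construction.
    """
    v = np.asarray(v, dtype=float)
    support = np.argsort(v)[::-1][:k]          # indices of the k largest entries
    x = np.zeros_like(v)
    x[support] = project_simplex(v[support])   # renormalize onto the simplex
    return x


# Toy usage: a 4-class "soft label" squeezed to at most 2 active classes.
y = project_sparse_simplex(np.array([0.5, 0.3, 0.1, 2.0]), k=2)
```

The result `y` sums to one, has at most two nonzeros, and concentrates mass on the dominant class, which is the kind of sparse, simplex-constrained label vector the formulation above pursues.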