In multimedia retrieval, multi-label annotation of images, text, and video is challenging and has attracted rapidly growing interest over the past decades. The crux of multi-label annotation lies in 1) how to reduce model complexity when the label space expands exponentially with the number of labels; and 2) how to leverage label correlations, which are broadly believed to be useful for boosting annotation performance. In this paper, we propose the "labelsets anchored subspace ensemble (LASE)" to solve both problems in one efficient scheme, whose training is a regularized matrix decomposition and whose prediction is an inference of group sparse representations. To shrink the label space, we first introduce "label distilling", which extracts the frequent labelsets to replace the original labels. In the training stage, the data matrix is decomposed into the sum of several low-rank matrices and a sparse residual via a randomized optimization, where each low-rank part defines a feature subspace mapped to a labelset. A manifold regularization maps the labelset geometry onto the geometry of the obtained subspaces. In the prediction stage, the group sparse representation of a new sample on the subspace ensemble is estimated by group lasso; the selected subspaces indicate the labelsets with which the sample should be annotated. Experiments on several benchmark datasets of text, images, web data, and videos validate the appealing performance of LASE in multi-label annotation.
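The prediction step described above can be illustrated with a minimal sketch: given one basis matrix per labelset subspace, a new sample's group-sparse representation is recovered by group lasso, and the groups with nonzero coefficients name the predicted labelsets. This is an illustrative proximal-gradient solver, not the paper's exact optimization; the function name, the regularization weight `lam`, and the selection threshold are assumptions for the example.

```python
import numpy as np

def group_lasso_code(x, bases, lam=0.1, n_iter=500):
    """Group-sparse representation of sample `x` on a subspace ensemble
    (hedged sketch of the LASE prediction stage, not the authors' solver).

    x     : (d,) sample vector.
    bases : list of (d, k_g) arrays, one basis per labelset subspace.
    Returns the coefficient vector and the indices of selected groups.
    """
    D = np.hstack(bases)                         # stacked dictionary
    bounds = np.cumsum([0] + [B.shape[1] for B in bases])
    step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1 / Lipschitz constant
    w = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ w - x)                 # gradient of 0.5*||x - Dw||^2
        w = w - step * grad
        for g in range(len(bases)):              # proximal step: group soft-threshold
            s, e = bounds[g], bounds[g + 1]
            norm = np.linalg.norm(w[s:e])
            scale = max(0.0, 1.0 - step * lam / norm) if norm > 0 else 0.0
            w[s:e] *= scale
    selected = [g for g in range(len(bases))
                if np.linalg.norm(w[bounds[g]:bounds[g + 1]]) > 1e-6]
    return w, selected
```

A sample lying in one subspace should activate (at least) that group, so its labelset would be assigned to the sample; in LASE the surviving groups are then mapped back from distilled labelsets to the original labels.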