Image classification is an important research task in multimedia content analysis and processing. Learning a compact dictionary that eases the derivation of sparse representations is a central issue in state-of-the-art image classification frameworks. Most existing dictionary learning approaches assign equal importance to all training samples, even though the samples differ in complexity with respect to sparse representation. Meanwhile, the contextual information "hidden" in different samples is ignored as well. In this paper, we propose a self-paced dictionary learning algorithm that accommodates this "hidden" information in the learning procedure: it trains the dictionary on the easy samples first, and then iteratively introduces more complex samples into the training procedure until the entire training set has been incorporated. The algorithm adaptively chooses the easy samples in each iteration, while the dictionary learned in the previous iteration serves as the basis for the current one. This strategy implicitly exploits the contextual relationships among training samples. The number of samples chosen in each iteration is determined by an adaptive threshold function proposed in this paper. Experimental results on benchmark datasets, including Caltech-101 and 15-Scene, show that our algorithm yields better dictionary representations and classification performance than the baseline methods.
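The abstract does not spell out the optimization details, but the self-paced loop it describes can be sketched in code. Below is a minimal illustration using scikit-learn, assuming per-sample sparse-coding reconstruction error as the "easiness" measure and a quantile schedule that admits a growing fraction of samples each round; this schedule, and all function and parameter names, are illustrative stand-ins, not the paper's actual adaptive threshold function.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

def self_paced_dictionary_learning(X, n_atoms=256, n_rounds=5, alpha=1.0):
    """Sketch of a self-paced dictionary learning loop.

    X: (n_samples, n_features) training matrix.
    Easiness is measured by reconstruction error under the current
    dictionary; the quantile-based threshold grows each round so that
    harder samples are gradually admitted until the whole training set
    is used (hypothetical schedule).
    """
    n = X.shape[0]
    rng = np.random.default_rng(0)

    # Warm start: fit an initial dictionary on a random subset.
    subset = rng.choice(n, size=min(n, max(n // n_rounds, n_atoms)),
                        replace=False)
    learner = MiniBatchDictionaryLearning(n_components=n_atoms,
                                          alpha=alpha, random_state=0)
    learner.fit(X[subset])
    D = learner.components_

    for t in range(1, n_rounds + 1):
        # Per-sample easiness: sparse-coding reconstruction error under D.
        codes = sparse_encode(X, D, alpha=alpha)
        errors = np.linalg.norm(X - codes @ D, axis=1)

        # Adaptive threshold (illustrative): keep the easiest t/n_rounds
        # fraction, so the final round covers all training samples.
        thresh = np.quantile(errors, t / n_rounds)
        easy = X[errors <= thresh]

        # Re-train on the selected easy samples, warm-started from the
        # dictionary learned in the previous iteration.
        learner = MiniBatchDictionaryLearning(n_components=n_atoms,
                                              alpha=alpha, dict_init=D,
                                              random_state=0)
        learner.fit(easy)
        D = learner.components_
    return D
```

Warm-starting each round from the previous dictionary is what lets earlier, easier samples shape how later, harder samples are encoded, which is one plausible reading of how the contextual relationships among samples enter the procedure.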