Sparse coding, which represents a signal as a sparse linear combination of dictionary atoms, has been widely applied in signal processing, data mining, and neuroscience. Constructing a proper dictionary for sparse coding remains a challenging problem. In this paper, we treat dictionary learning as an unsupervised learning process and propose the Laplacian score dictionary (LSD). This new learning method uses local geometry information to select atoms for the dictionary. We compare LSD with alternative clustering-based dictionary learning methods, with a full-training-data dictionary, and with other classic methods. Classification results on binary-class and multi-class datasets from the UCI repository demonstrate the effectiveness and efficiency of our method.
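To make the two ingredients named in the abstract concrete, the sketch below implements the classic Laplacian score (a graph-smoothness criterion from He, Cai & Niyogi's feature-selection work, which gives LSD its name) and Orthogonal Matching Pursuit for sparse coding against a fixed dictionary. This is an illustrative reconstruction under stated assumptions, not the paper's exact LSD algorithm: `knn_similarity`, `laplacian_scores`, and `omp` are standard textbook versions, and using such a score to rank candidate atoms is our assumption about how the method operates.

```python
import numpy as np

def knn_similarity(X, k=5, sigma=1.0):
    """Symmetric k-nearest-neighbour heat-kernel similarity matrix for rows of X."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
    S = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(S, 0.0)
    mask = np.zeros_like(S, dtype=bool)
    nearest = np.argsort(-S, axis=1)[:, :k]
    mask[np.arange(n)[:, None], nearest] = True
    return S * (mask | mask.T)  # keep an edge if either endpoint selects it

def laplacian_scores(F, S):
    """Classic Laplacian score (He, Cai & Niyogi, 2005) of each column of F.

    F: (n, m) array of m graph signals over n nodes; S: (n, n) similarity matrix.
    A lower score means the signal is smoother on the graph, i.e. it better
    preserves local geometry.
    """
    d = S.sum(axis=1)                             # degree vector
    L = np.diag(d) - S                            # unnormalized graph Laplacian
    mean = (F.T @ d) / d.sum()                    # degree-weighted mean of each signal
    Ft = F - mean[None, :]
    num = np.sum(Ft * (L @ Ft), axis=0)           # f~^T L f~ per column
    den = np.sum(Ft * (d[:, None] * Ft), axis=0)  # f~^T D f~ per column
    return num / np.maximum(den, 1e-12)

def omp(D, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedy sparse code x with y ~ D @ x.

    Assumes D has unit-norm columns and n_nonzero >= 1.
    """
    r, support = y.copy(), []
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ r)))       # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef              # re-fit on the support, update residual
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

Under this reading, an LSD-style method would rank candidate atoms (for example, training samples) by a Laplacian-score-like criterion, keep the best-scoring ones as the dictionary, and then code test signals with a pursuit such as `omp`; the exact selection rule in the paper may differ.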