Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
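The dictionary learning problem described above — factorizing a data matrix into a dictionary and sparse codes — can be illustrated with a minimal sketch. This is a generic alternating scheme (ISTA sparse coding followed by a least-squares dictionary update), not the supervised, task-driven algorithm proposed in the paper; all function names and parameter values below are illustrative assumptions.

```python
import numpy as np

def sparse_code_ista(D, X, lam=0.1, n_iter=100):
    """Sparse coding: min_A 0.5*||X - D A||_F^2 + lam*||A||_1 via ISTA.

    D: (d, k) dictionary with unit-norm columns (atoms).
    X: (d, n) data matrix, one signal per column.
    Returns A: (k, n) sparse coefficient matrix.
    """
    # Step size 1/L, where L is the Lipschitz constant of the gradient.
    L = np.linalg.norm(D, 2) ** 2
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ A - X)          # gradient of the quadratic term
        A = A - grad / L                   # gradient step
        # Soft-thresholding: the proximal operator of the l1 penalty.
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)
    return A

def learn_dictionary(X, n_atoms, lam=0.1, n_outer=20, seed=0):
    """Unsupervised dictionary learning by alternating minimization.

    Alternates sparse coding (A fixed-point via ISTA) with a closed-form
    ridge-regularized least-squares update of the dictionary D.
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm atoms
    for _ in range(n_outer):
        A = sparse_code_ista(D, X, lam)
        # Dictionary update: D = argmin ||X - D A||_F^2 (small ridge for stability).
        D = X @ A.T @ np.linalg.pinv(A @ A.T + 1e-6 * np.eye(n_atoms))
        # Renormalize atoms to remove the scale ambiguity of the factorization.
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    return D, A
```

The sparse codes `A` produced this way can then serve as features for a downstream predictor; the paper's contribution is to optimize `D` jointly with that predictor rather than in this purely reconstructive fashion.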