A non-negative least squares classifier is proposed in this paper for classifying undercomplete data. The idea is that an unknown sample can be approximated by a sparse non-negative linear combination of a few training samples. Given the sparse coefficient vector over the training data, a sparse interpreter then predicts the class label. We devise new sparse methods that can learn from data containing missing values, that can be trained on overcomplete data, and that extend to tensor data and multi-class problems. A permutation test shows that our approach requires only a small number of training samples to obtain significant accuracy. Statistical comparisons on various data sets show that our methods perform as well as support vector machines while being faster, and that our approach is very robust to missing values and noise. We also show that, with appropriate kernel functions, our methods perform very well on three-dimensional tensor data and run fairly fast.
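The core idea can be sketched as follows. This is a minimal illustration of non-negative least squares classification in general, not the authors' exact algorithm: a test sample is approximated by a non-negative combination of training samples via NNLS, and the predicted class is the one whose coefficients reconstruct the sample with the smallest residual (the function name and the residual-based decision rule are assumptions for illustration).

```python
import numpy as np
from scipy.optimize import nnls

def nnls_classify(X_train, y_train, x_test):
    """Sketch: approximate x_test as a non-negative linear combination of
    training samples, then assign the class whose coefficients give the
    smallest reconstruction residual (hypothetical decision rule)."""
    # Columns of A are the training samples.
    A = X_train.T
    # Solve  min_c ||A c - x_test||_2  subject to  c >= 0.
    # NNLS solutions tend to be sparse: only a few samples get nonzero weight.
    coeffs, _ = nnls(A, x_test)
    classes = np.unique(y_train)
    residuals = []
    for c in classes:
        mask = (y_train == c)
        # Reconstruct the test sample using only this class's coefficients.
        recon = A[:, mask] @ coeffs[mask]
        residuals.append(np.linalg.norm(x_test - recon))
    return classes[int(np.argmin(residuals))]

# Toy usage: two well-separated classes in 2-D.
X_train = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
y_train = np.array([0, 0, 1, 1])
print(nnls_classify(X_train, y_train, np.array([0.95, 0.05])))  # class 0
```

Because the representation is computed per test sample rather than by fitting a parametric model, there is no training phase beyond storing the samples, which is consistent with the speed claims above.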