On the approximability of minimizing nonzero variables or unsatisfied relations in linear systems. Theoretical Computer Science.
Atomic Decomposition by Basis Pursuit. SIAM Review.
Classification via group sparsity promoting regularization. Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '09).
An affine scaling methodology for best basis selection. IEEE Transactions on Signal Processing.
Robust classifiers for data reduced via random projections. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics.
Sparse neighbor representation for classification. Pattern Recognition Letters.
This work proposes a new classifier based on the assumption that the training samples of a particular class approximately form a linear basis for any new test sample belonging to that class. The assumption itself is not new; two previous classifiers, the Sparse Classifier (Yang et al., 2009) and the Group Sparse Classifier (Majumdar and Ward, 2009a), have been built upon it. Both of those works, however, suffer from certain shortcomings: the optimization problems they solve do not capture all the implications of the assumption. This work accounts for those intricacies and consequently requires solving a new non-convex optimization problem. We develop an elegant approach to this problem based on the Iterative Reweighted Least Squares (IRLS) method. The proposed classifier is also flexible: the two previous classifiers are special cases of it. It has been rigorously compared against the previous classifiers stemming from the same assumption on benchmark classification databases. The results indicate that in most cases the proposed classifier is significantly better than the previous ones, and in the worst case it is as good as them.