Classification via group sparsity promoting regularization

  • Authors:
  • A. Majumdar; R. K. Ward

  • Affiliations:
  • Department of Electrical and Computer Engineering, University of British Columbia, Canada (both authors)

  • Venue:
  • ICASSP '09: Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing
  • Year:
  • 2009

Abstract

Recently, a new classification assumption was proposed in [1]: the training samples of a particular class approximately form a linear basis for any test sample belonging to that class. The classification algorithm in [1] is built on the idea that all the correlated training samples belonging to the correct class should participate in representing the test sample. Lasso regularization was proposed to select the representative training samples from the entire training set (consisting of the training samples of all classes). Lasso, however, tends to select a single sample from a group of correlated training samples and thus does not promote representing the test sample in terms of all the training samples from the correct group. To overcome this problem, we propose two alternative regularization methods, the Elastic Net and the Sum-Over-l2-norm. Both regularizers favor the selection of multiple correlated training samples to represent the test sample. Experimental results on benchmark datasets show that our regularization methods give better recognition results than [1].
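
The two proposed regularizers can be illustrated concretely. For a test sample y and a dictionary A whose columns are all training samples, the Elastic Net solves min_a ||y - A*a||_2^2 + lambda1*||a||_1 + lambda2*||a||_2^2, and the test sample is then assigned to the class whose training samples give the smallest reconstruction residual, as in [1]. The following is a minimal sketch of this pipeline, not the authors' implementation; scikit-learn's ElasticNet solver is assumed, and all parameter values and the toy data are placeholders.

    import numpy as np
    from sklearn.linear_model import ElasticNet

    def classify_elastic_net(A, labels, y, alpha=0.01, l1_ratio=0.5):
        # Represent the test sample y over the full training dictionary A
        # (columns = training samples). Unlike the Lasso, the Elastic Net
        # tends to select whole groups of correlated columns together.
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                           fit_intercept=False, max_iter=10000)
        model.fit(A, y)                      # solves y ~ A @ coef
        coef = model.coef_
        # Assign y to the class whose samples reconstruct it best.
        residuals = {c: np.linalg.norm(y - A[:, labels == c] @ coef[labels == c])
                     for c in np.unique(labels)}
        return min(residuals, key=residuals.get)

    # Toy usage: 3 classes, 20 training samples each, 50-dim features.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 60))
    labels = np.repeat(np.arange(3), 20)
    y = A[:, labels == 1] @ rng.normal(size=20)   # test sample from class 1
    print(classify_elastic_net(A, labels, y))     # typically prints 1

The Sum-Over-l2-norm penalty replaces the l1 norm with sum_c ||a_c||_2, where a_c collects the coefficients of the training samples of class c, so entire classes are selected or discarded as a block. The paper does not specify the solver; one standard choice, sketched below under that assumption, is proximal gradient descent (ISTA) with block soft-thresholding.

    def group_sparse_coefficients(A, labels, y, lam=0.1, n_iter=500):
        # Minimize 0.5*||y - A @ a||^2 + lam * sum_c ||a_c||_2 by ISTA.
        n = A.shape[1]
        a = np.zeros(n)
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L with L = ||A||_2^2
        groups = [labels == c for c in np.unique(labels)]
        for _ in range(n_iter):
            z = a - step * (A.T @ (A @ a - y))    # gradient step on data term
            for g in groups:                      # block soft-threshold per class
                norm_g = np.linalg.norm(z[g])
                shrink = max(0.0, 1.0 - step * lam / norm_g) if norm_g > 0 else 0.0
                z[g] = shrink * z[g]
            a = z
        return a

Classification then proceeds as above: compute the per-class reconstruction residual from the returned coefficients and pick the class that minimizes it.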