The role of dictionary learning on sparse representation-based classification

  • Authors:
  • Soheil Shafiee, Farhad Kamangar, Vassilis Athitsos, Junzhou Huang

  • Affiliations:
  • University of Texas at Arlington (all authors)

  • Venue:
  • Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments
  • Year:
  • 2013


Abstract

This paper analyzes the role of dictionary selection in Sparse Representation-based Classification (SRC). While SRC achieves strong results in classification tasks, its performance depends heavily on the number of training samples used to form the classification matrix. Several studies have addressed this issue by building a more compact representation of the training data in order to improve both classification speed and accuracy. The representative selection methods analyzed in this paper are Metaface dictionary learning, Fisher Discriminative Dictionary Learning (FDDL), Sparse Modeling Representative Selection (SMRS), and random selection of training samples. The first two methods build their dictionaries through an optimization process, while the other two select representatives directly from the original training samples. These methods, along with the baseline that uses all training samples to form the classification matrix, were evaluated on two face datasets and one digit dataset. The role of feature extraction was also studied using two dimensionality reduction methods: down-sampling and random projection. The results show that FDDL yields the best classification accuracy, with SMRS second best. However, SMRS requires much less learning time, which makes it better suited to dynamic settings where the dictionary is regularly updated with new samples. The accuracy of Metaface dictionary learning was noticeably lower than that of the other two methods. As expected, using all training samples as the dictionary produced the best recognition rates on all datasets, but its classification times were far longer than those obtained with any of the three dictionary learning methods.
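
For context, the sketch below illustrates the generic SRC decision rule the abstract refers to: a test sample is sparsely coded over a dictionary of training atoms and assigned to the class whose atoms give the smallest reconstruction residual, with random projection shown as one of the two feature-extraction steps mentioned. This is a minimal illustration assuming an l1-regularized solver from scikit-learn as a stand-in for the optimizer used in the paper; all function and parameter names (random_projection, src_classify, n_features, alpha) are illustrative assumptions, not the authors' implementation.

```python
# Minimal SRC sketch (assumed implementation, not the paper's code).
import numpy as np
from sklearn.linear_model import Lasso


def random_projection(X, n_features, seed=0):
    """Project samples (rows of X) onto a random lower-dimensional space."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], n_features)) / np.sqrt(n_features)
    return X @ R


def src_classify(D, labels, y, alpha=0.01):
    """Classify test sample y against dictionary D (columns = training atoms).

    labels[i] is the class of column i of D. The sample is assigned to the
    class whose atoms reconstruct y with the smallest residual.
    """
    # Sparse coding: approximate y ~= D x with an l1 penalty on x.
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(D, y)
    x = coder.coef_

    # Class-wise residuals using only the coefficients of each class.
    residuals = {}
    for c in np.unique(labels):
        x_c = np.where(labels == c, x, 0.0)
        residuals[c] = np.linalg.norm(y - D @ x_c)
    return min(residuals, key=residuals.get)
```

In the full-dictionary baseline discussed in the abstract, D would contain every training sample as a column; the dictionary learning and representative selection methods replace D with a much smaller matrix before applying the same residual-based classification rule.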