Efficient highly over-complete sparse coding using a mixture model

  • Authors:
  • Jianchao Yang; Kai Yu; Thomas Huang

  • Affiliations:
  • Beckman Institute, University of Illinois at Urbana-Champaign, IL; NEC Laboratories America, Cupertino, CA; Beckman Institute, University of Illinois at Urbana-Champaign, IL

  • Venue:
  • ECCV'10: Proceedings of the 11th European Conference on Computer Vision, Part V
  • Year:
  • 2010

Abstract

Sparse coding of sensory data has recently attracted notable attention in research on learning useful features from unlabeled data. Empirical studies show that mapping data into a significantly higher-dimensional space with sparse coding can lead to superior classification performance. However, it is computationally challenging to learn a highly over-complete set of dictionary bases and to encode test data with the learned bases. In this paper, we describe a mixture sparse coding model that produces high-dimensional sparse representations very efficiently. Beyond the computational advantage, the model effectively encourages similar data points to share similar sparse representations. Moreover, the proposed model can be regarded as an approximation to the recently proposed local coordinate coding (LCC), which shows that sparse coding can approximately learn the nonlinear manifold of the sensory data in a locally linear manner. The features learned by the mixture sparse coding model therefore work well with linear classifiers. We apply the proposed model to the PASCAL VOC 2007 and 2009 datasets for the classification task, achieving state-of-the-art performance on both.
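The efficiency idea in the abstract can be illustrated with a minimal sketch: instead of solving one large sparse-coding problem over a huge dictionary, assign each data point to its nearest mixture component and sparse-code it only with that component's small dictionary; the final representation is the concatenation of all per-component code blocks, of which only one is nonzero. This is an illustrative simplification under assumed details (hard nearest-center assignment, a plain ISTA lasso solver, made-up names like `mixture_sparse_code`), not the authors' exact algorithm.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 with the
    iterative shrinkage-thresholding algorithm (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L          # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

def mixture_sparse_code(x, centers, dicts, lam=0.1):
    """Hard-assign x to the nearest mixture center, sparse-code it with
    that component's dictionary only, and embed the result in the long
    concatenated code vector (all other blocks remain zero)."""
    k = int(np.argmin([np.linalg.norm(x - c) for c in centers]))
    codes = [np.zeros(D.shape[1]) for D in dicts]
    codes[k] = ista(dicts[k], x, lam)
    return np.concatenate(codes)

rng = np.random.default_rng(0)
centers = [np.zeros(8), np.ones(8)]                       # two mixture components
dicts = [rng.standard_normal((8, 20)) for _ in range(2)]  # one small dictionary each
x = 0.9 * np.ones(8)                                      # closer to the second center
code = mixture_sparse_code(x, centers, dicts, lam=0.05)
```

Only one small lasso problem is solved per data point, so the cost scales with the per-component dictionary size rather than the total (highly over-complete) dictionary size, while the overall representation stays high-dimensional and very sparse.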