Building topographic subspace model with transfer learning for sparse representation

  • Authors:
  • Yang Liu;Jian Cheng;Changsheng Xu;Hanqing Lu

  • Affiliations:
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China (all authors)

  • Venue:
  • Neurocomputing
  • Year:
  • 2010

Abstract

In this paper, we propose a topographic subspace learning algorithm, named key-coding learning, which utilizes irrelevant unlabeled auxiliary data to facilitate image classification and retrieval tasks. Notably, we do not need to assume that the auxiliary data follow the same class labels or generative distribution as the target training data. First, the subspace model is learnt from a large number of scale- and rotation-invariant SURF descriptors extracted from auxiliary and training images, which makes the model insensitive to geometric and photometric image transformations. Then the bases of the model are pooled by clustering to generate topographic basis banks. We provide insights showing that the topographic model is highly biologically plausible in simulating the complex cells of the visual cortex. Finally, we generate succinct sparse representations by mapping the target data into this topographic model. Owing to its capability of transferring knowledge, the proposed topographic subspace model can effectively address the insufficient-training-data problem in image classification and is also helpful for generating discriminative features for image retrieval. Extensive experiments are conducted on three image datasets to evaluate the performance of the proposed model; the experimental results are encouraging and promising.
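
To make the pipeline described in the abstract concrete, below is a minimal sketch assuming OpenCV-contrib's SURF implementation and scikit-learn's dictionary learning and k-means as stand-ins for the paper's subspace learning and topographic pooling steps. The image lists, parameter values, dictionary size, and the final within-bank pooling are illustrative placeholders, not the authors' actual method.

```python
# Sketch of the abstract's pipeline: SURF descriptors -> learned subspace (dictionary)
# -> clustered "topographic basis banks" -> per-image sparse representation.
# Requires opencv-contrib-python built with the non-free SURF module, and scikit-learn.
import cv2
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.cluster import KMeans

# Hypothetical file lists standing in for the auxiliary and target training images.
auxiliary_images = ["aux_0001.jpg", "aux_0002.jpg"]
training_images = ["train_0001.jpg", "train_0002.jpg"]

def extract_surf(image_paths):
    """Extract scale- and rotation-invariant SURF descriptors (64-dim) from images."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    descs = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, d = surf.detectAndCompute(img, None)
        if d is not None:
            descs.append(d)
    return np.vstack(descs)

# 1) Learn a subspace (dictionary) from auxiliary + target training descriptors.
descriptors = extract_surf(auxiliary_images + training_images)
dict_learner = MiniBatchDictionaryLearning(n_components=512, alpha=1.0)
bases = dict_learner.fit(descriptors).components_            # shape (512, 64)

# 2) Pool the learned bases by clustering into topographic basis banks.
n_banks = 64
bank_labels = KMeans(n_clusters=n_banks).fit_predict(bases)  # bank id per basis

# 3) Map a target image into the model: sparse-code its descriptors, then pool
#    the coefficients within each bank into one topographic feature vector.
def topographic_representation(image_path):
    d = extract_surf([image_path])
    codes = sparse_encode(d, bases, algorithm="lasso_lars", alpha=1.0)
    banked = np.zeros((codes.shape[0], n_banks))
    for b in range(n_banks):
        banked[:, b] = np.abs(codes[:, bank_labels == b]).sum(axis=1)
    return banked.max(axis=0)  # max-pool over descriptors (illustrative choice)
```

The resulting per-image vector can then be fed to an off-the-shelf classifier for the classification experiments or compared directly for retrieval, which is how this sketch would plug into the tasks the abstract mentions.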