Cross-database transfer learning via learnable and discriminant error-correcting output codes

  • Authors:
  • Feng-Ju Chang; Yen-Yu Lin; Ming-Fang Weng

  • Affiliations:
  • Research Center for Information Technology Innovation, Academia Sinica, Taiwan; Research Center for Information Technology Innovation, Academia Sinica, Taiwan; Institute of Information Science, Academia Sinica, Taiwan

  • Venue:
  • ACCV'12: Proceedings of the 11th Asian Conference on Computer Vision - Volume Part I
  • Year:
  • 2012

Abstract

We present a transfer learning approach that transfers knowledge across two multi-class, unconstrained domains (source and target) and accomplishes object recognition with few training samples in the target domain. Unlike most previous work, we make no assumption about the relatedness of the two domains: their data can come from different databases and belong to distinct categories. To overcome the domain variations, we propose to learn a set of commonly shared and discriminant attributes in the form of error-correcting output codes. With each attribute, the unrelated multi-class recognition tasks of the two domains are transformed into correlated binary-class ones. The extra source knowledge alleviates the high risk of overfitting caused by the scarcity of training data in the target domain. Our approach is evaluated on several benchmark datasets and yields about a 40% relative improvement in accuracy when only one training sample is available.
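The following is a minimal sketch of the error-correcting output code (ECOC) mechanism the abstract builds on: each column of a code matrix defines a binary attribute, a binary classifier is trained per attribute, and a test sample is assigned to the class whose codeword is nearest in Hamming distance. The fixed code matrix, the LinearSVC base learner, and the toy data are illustrative assumptions; the paper instead learns discriminant codewords shared across the source and target domains.

```python
# Hedged sketch of ECOC-based multi-class classification (not the paper's
# learned, cross-domain codes): fixed code matrix + per-bit binary SVMs.
import numpy as np
from sklearn.svm import LinearSVC

def train_ecoc(X, y, code_matrix):
    """Train one binary classifier per code bit (column of the code matrix)."""
    classifiers = []
    for bit in range(code_matrix.shape[1]):
        # Relabel each sample by the bit (+1 / -1) assigned to its class.
        binary_labels = code_matrix[y, bit]
        clf = LinearSVC()
        clf.fit(X, binary_labels)
        classifiers.append(clf)
    return classifiers

def predict_ecoc(X, classifiers, code_matrix):
    """Decode by the nearest class codeword under Hamming distance."""
    # Stack per-bit predictions into one predicted codeword per sample.
    bits = np.stack([clf.predict(X) for clf in classifiers], axis=1)
    distances = np.array([(bits != code_matrix[c]).sum(axis=1)
                          for c in range(code_matrix.shape[0])]).T
    return distances.argmin(axis=1)

# Toy usage: 4 classes encoded with 6 binary attributes (rows = codewords).
code_matrix = np.array([
    [+1, +1, +1, -1, -1, -1],
    [+1, -1, -1, +1, +1, -1],
    [-1, +1, -1, +1, -1, +1],
    [-1, -1, +1, -1, +1, +1],
])
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 4, size=200)
models = train_ecoc(X, y, code_matrix)
print(predict_ecoc(X[:5], models, code_matrix))
```

Because both domains share the same attribute (bit) definitions, source-domain data can help train each binary classifier even when the source and target class sets differ, which is the intuition behind the transfer described above.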