Training Hierarchical Feed-Forward Visual Recognition Models Using Transfer Learning from Pseudo-Tasks

  • Authors:
  • Amr Ahmed; Kai Yu; Wei Xu; Yihong Gong; Eric Xing

  • Affiliations:
  • School of Computer Science, Carnegie Mellon University; NEC Labs America, Cupertino, CA 95014; NEC Labs America, Cupertino, CA 95014; NEC Labs America, Cupertino, CA 95014; School of Computer Science, Carnegie Mellon University

  • Venue:
  • ECCV '08 Proceedings of the 10th European Conference on Computer Vision: Part III
  • Year:
  • 2008

Abstract

Building visual recognition models that adapt across different domains is a challenging task for computer vision. While feature-learning machines in the form of hierarchical feed-forward models (e.g., convolutional neural networks) have shown promise in this direction, they are still difficult to train, especially when few training examples are available. In this paper, we present a framework for training hierarchical feed-forward models for visual recognition, using transfer learning from pseudo-tasks. These pseudo-tasks are automatically constructed from data without supervision and comprise a set of simple pattern-matching operations. We show that these pseudo-tasks induce an informative inverse-Wishart prior on the functional behavior of the network, offering an effective way to incorporate useful prior knowledge into the network training. In addition to being extremely simple to implement and adaptable across different domains with little or no extra tuning, our approach achieves promising results on challenging visual recognition tasks, including object recognition, gender recognition, and ethnicity recognition.
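
The sketch below illustrates one way the abstract's idea could be realized: unsupervised pseudo-task targets derived from simple pattern-matching operations (here, responses of fixed random filters) are regressed by an auxiliary head that shares features with the main recognition head, so the pseudo-tasks act as a data-driven regularizer during training. All names, shapes, and the specific filter construction are illustrative assumptions, not the authors' exact recipe.

```python
# Hedged sketch of multi-task training with automatically constructed pseudo-tasks.
# Assumptions: PyTorch, RGB inputs, random unit-norm filters as the "pattern matchers".
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_pseudo_targets(images, num_filters=8, kernel_size=7, seed=0):
    """Unsupervised pseudo-task targets: pooled responses of fixed random filters."""
    g = torch.Generator().manual_seed(seed)
    filters = torch.randn(num_filters, images.shape[1], kernel_size, kernel_size, generator=g)
    filters = filters / filters.flatten(1).norm(dim=1).view(-1, 1, 1, 1)  # unit-norm templates
    with torch.no_grad():
        responses = F.conv2d(images, filters, padding=kernel_size // 2)   # simple pattern matching
        targets = F.adaptive_avg_pool2d(responses, 1).flatten(1)          # one scalar per filter
    return targets


class SharedCNN(nn.Module):
    """Hierarchical feed-forward feature extractor shared by the main and pseudo-task heads."""
    def __init__(self, num_classes=10, num_pseudo=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        self.main_head = nn.Linear(32 * 4 * 4, num_classes)   # supervised recognition task
        self.pseudo_head = nn.Linear(32 * 4 * 4, num_pseudo)  # auxiliary pseudo-task regression

    def forward(self, x):
        h = self.features(x)
        return self.main_head(h), self.pseudo_head(h)


def training_step(model, optimizer, images, labels, pseudo_weight=0.1):
    """Joint loss: classification plus pseudo-task regression acting as a learned prior."""
    logits, pseudo_pred = model(images)
    loss = F.cross_entropy(logits, labels) \
        + pseudo_weight * F.mse_loss(pseudo_pred, make_pseudo_targets(images))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the pseudo-targets are computed from the images themselves, no extra labels are required; the weight `pseudo_weight` (an assumed hyperparameter) controls how strongly the pseudo-tasks constrain the shared features.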