Dyadic transfer learning for cross-domain image classification

  • Authors:
  • Hua Wang; Feiping Nie; Heng Huang; Chris Ding

  • Affiliations:
  • Department of Computer Science and Engineering, University of Texas at Arlington, 76019, USA (all authors)

  • Venue:
  • ICCV '11 Proceedings of the 2011 International Conference on Computer Vision
  • Year:
  • 2011

Abstract

Because manual image annotation is both expensive and labor intensive, in practice we often do not have sufficient labeled images to train an effective classifier for a new image classification task. Although multiple labeled image data sets are publicly available for a number of computer vision tasks, a simple mixture of them cannot achieve good performance due to the heterogeneous properties and structures of the different data sets. In this paper, we propose a novel transfer learning framework based on nonnegative matrix tri-factorization, called the Dyadic Knowledge Transfer (DKT) approach, to transfer cross-domain image knowledge to new computer vision tasks, such as classification. An efficient iterative algorithm to solve the proposed optimization problem is introduced. We evaluate the proposed approach on two benchmark image data sets that simulate real-world cross-domain image classification tasks. Promising experimental results demonstrate the effectiveness of the proposed approach.
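The abstract's core building block is nonnegative matrix tri-factorization, which decomposes a nonnegative data matrix as X ≈ F S Gᵀ with all three factors nonnegative. The paper's full DKT objective and its cross-domain coupling are not reproduced here; as a minimal sketch of the underlying tri-factorization alone, solved with the standard multiplicative updates (the function name, ranks `k1`/`k2`, and iteration count are illustrative choices, not taken from the paper), one might write:

```python
import numpy as np

def tri_factorize(X, k1, k2, n_iter=500, seed=0, eps=1e-9):
    """Illustrative nonnegative matrix tri-factorization X ~= F @ S @ G.T.

    F: (n, k1) row-cluster factor, S: (k1, k2) association matrix,
    G: (m, k2) column-cluster factor. Uses plain multiplicative
    updates that monotonically decrease ||X - F S G^T||_F^2.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    F = rng.random((n, k1))
    S = rng.random((k1, k2))
    G = rng.random((m, k2))
    for _ in range(n_iter):
        # F <- F * (X G S^T) / (F S G^T G S^T); eps avoids division by zero
        F *= (X @ G @ S.T) / (F @ S @ (G.T @ G) @ S.T + eps)
        # G <- G * (X^T F S) / (G S^T F^T F S)
        G *= (X.T @ F @ S) / (G @ S.T @ (F.T @ F) @ S + eps)
        # S <- S * (F^T X G) / (F^T F S G^T G)
        S *= (F.T @ X @ G) / ((F.T @ F) @ S @ (G.T @ G) + eps)
    return F, S, G

# Usage: factor an exactly rank-3 nonnegative matrix and check the fit.
rng = np.random.default_rng(1)
X = rng.random((20, 3)) @ rng.random((3, 15))
F, S, G = tri_factorize(X, 3, 3)
rel_err = np.linalg.norm(X - F @ S @ G.T) / np.linalg.norm(X)
```

Because every update is a ratio of nonnegative terms applied multiplicatively, the factors stay nonnegative throughout, which is what makes the factors interpretable as (soft) cluster indicators in transfer-learning settings like the one described above.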