Transfer learning via multi-view principal component analysis

  • Authors:
  • Yang-Sheng Ji; Jia-Jun Chen; Gang Niu; Lin Shang; Xin-Yu Dai

  • Affiliations:
  • Department of Computer Science and Technology, Nanjing University, Nanjing, China; National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China (all authors)

  • Venue:
  • Journal of Computer Science and Technology - Special issue on natural language processing

  • Year:
  • 2011

Abstract

Transfer learning aims at leveraging the knowledge in labeled source domains to predict the unlabeled data in a target domain, where the data distributions differ across domains. Among the various methods for transfer learning, one class of algorithms focuses on the correspondence between bridge features and all the other domain-specific features, and then conducts transfer learning via this single-view correspondence. However, the single-view correspondence may prevent these algorithms from further improvement because of incorrect correlation discovery. To tackle this problem, we propose a new transfer learning method from a multi-view correspondence perspective, called the Multi-View Principal Component Analysis (MVPCA) approach. MVPCA discovers the correspondence between bridge features that are representative across all domains and the specific features of each individual domain, and conducts transfer learning by dimensionality reduction in a multi-view manner, which better depicts the knowledge transfer. Experiments show that MVPCA can significantly reduce the cross-domain prediction error of a baseline non-transfer method. With multi-view correspondence information incorporated into a single-view transfer learning method, MVPCA can further improve the performance of a state-of-the-art single-view method.
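
The abstract describes the method only at a high level. The following is a minimal, illustrative Python sketch of the general idea of view-wise dimensionality reduction for cross-domain prediction, not the authors' MVPCA algorithm: features are split into a bridge view shared by both domains and a domain-specific view, each view is reduced with an ordinary PCA, and a classifier trained on the projected source domain is applied to the unlabeled target domain. The data, the feature split, the component counts, and the classifier are all hypothetical stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: a labeled source domain and an unlabeled, distribution-shifted
# target domain. The first 50 columns stand in for bridge features shared by
# both domains, the remaining columns for domain-specific features
# (a hypothetical split chosen only for illustration).
X_src = rng.normal(size=(200, 120))
y_src = rng.integers(0, 2, size=200)
X_tgt = rng.normal(loc=0.5, size=(200, 120))
bridge, specific = slice(0, 50), slice(50, 120)

# Fit one PCA per view on the pooled data so both domains are mapped into the
# same low-dimensional multi-view representation.
X_all = np.vstack([X_src, X_tgt])
pca_bridge = PCA(n_components=10).fit(X_all[:, bridge])
pca_specific = PCA(n_components=10).fit(X_all[:, specific])

def project(X):
    """Concatenate the per-view PCA projections into one feature vector."""
    return np.hstack([pca_bridge.transform(X[:, bridge]),
                      pca_specific.transform(X[:, specific])])

# Train on the projected source domain, then predict the unlabeled target domain.
clf = LogisticRegression(max_iter=1000).fit(project(X_src), y_src)
target_predictions = clf.predict(project(X_tgt))
print(target_predictions[:10])
```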