Transfer learning via dimensionality reduction

  • Authors:
  • Sinno Jialin Pan, James T. Kwok, Qiang Yang

  • Affiliations:
  • Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong (all authors)

  • Venue:
  • AAAI'08: Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 2
  • Year:
  • 2008

Abstract

Transfer learning addresses the problem of how to use abundant labeled data in a source domain to solve related but different problems in a target domain, even when the training and test data have different distributions or features. In this paper, we consider transfer learning via dimensionality reduction. We learn a low-dimensional latent feature space in which the distributions of the source domain data and the target domain data are the same or close to each other. We then project the data from the related domains onto this latent feature space, where standard learning algorithms can be applied to train classification or regression models. The latent feature space thus serves as a bridge for transferring knowledge from the source domain to the target domain. The main contribution of our work is a new dimensionality reduction method that finds a latent space minimizing the distance between the distributions of the data in the different domains. The effectiveness of our approach to transfer learning is verified by experiments on two real-world applications: indoor WiFi localization and binary text classification.
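
The distance between domain distributions referred to in the abstract is measured with the Maximum Mean Discrepancy (MMD) between the source and target samples in the latent space. The sketch below is only an illustration of that criterion and of the overall pipeline, not the paper's algorithm: the RBF-kernel MMD estimate, the toy data, and the PCA projection (a stand-in for the embedding the paper actually learns so that the two distributions match) are all assumptions made for this example.

```python
# Illustrative sketch: empirical MMD plus the project-then-train pipeline.
# NOT the paper's method; PCA here is only a placeholder for the learned embedding.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression


def mmd_rbf(Xs, Xt, gamma=1.0):
    """Biased empirical estimate of squared MMD between two samples, RBF kernel."""
    def k(A, B):
        # Pairwise squared Euclidean distances, then RBF kernel values.
        d2 = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T)
        return np.exp(-gamma * d2)
    return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2.0 * k(Xs, Xt).mean()


rng = np.random.default_rng(0)

# Toy source/target domains: same labeling rule, shifted marginal distribution.
Xs = rng.normal(size=(200, 10))
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(int)
Xt = rng.normal(size=(200, 10)) + 0.5          # covariate shift in the target domain
yt = (Xt[:, 0] + Xt[:, 1] > 0).astype(int)     # held out, used only for evaluation

# Stand-in for the latent feature space: a shared low-dimensional projection
# fitted on both domains together. The paper instead *learns* this embedding
# so that the source and target distributions become close under the MMD.
pca = PCA(n_components=3).fit(np.vstack([Xs, Xt]))
Zs, Zt = pca.transform(Xs), pca.transform(Xt)

print("MMD^2 in original space:", mmd_rbf(Xs, Xt))
print("MMD^2 in latent space:  ", mmd_rbf(Zs, Zt))

# Standard learner trained on projected source data, applied to the projected target.
clf = LogisticRegression().fit(Zs, ys)
print("Target-domain accuracy:", clf.score(Zt, yt))
```

The point of the sketch is the shape of the pipeline: once both domains are mapped into a common latent space where their distributions agree, any off-the-shelf classifier or regressor trained on the labeled source data can be applied directly to the target data.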