Beyond cross-domain learning: Multiple-domain nonnegative matrix factorization

  • Authors:
  • Jim Jing-Yan Wang; Xin Gao

  • Venue:
  • Engineering Applications of Artificial Intelligence
  • Year:
  • 2014

Abstract

Traditional cross-domain learning methods transfer knowledge from a source domain to a target domain. In this paper, we propose the multiple-domain learning problem, in which several domains are treated equally. The multiple-domain learning problem assumes that samples from different domains follow different distributions but share the same feature and class label spaces. Each domain can serve as a target domain while simultaneously acting as a source domain for the other domains. We propose a novel multiple-domain representation method for this problem. The method is based on nonnegative matrix factorization (NMF) and learns a basis matrix and coding vectors for the samples such that the distribution mismatch among the domains is reduced under an extended variant of the maximum mean discrepancy (MMD) criterion. The resulting algorithm, multiple-domain NMF (MDNMF), was evaluated on two challenging multiple-domain learning problems: multiple-user spam email detection and multiple-domain glioma diagnosis. The experiments verify the effectiveness of the proposed algorithm.
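
The abstract describes coupling an NMF reconstruction objective with an MMD-based penalty on the learned codes, but does not state the update rules. The sketch below is only a minimal illustration of that idea, assuming a squared Frobenius reconstruction loss, a pairwise squared distance between per-domain mean coding vectors as a stand-in for the extended MMD term, and projected-gradient updates. The function name mdnmf_sketch, the weight lam, and the learning rate are hypothetical and are not taken from the paper.

```python
import numpy as np

def mdnmf_sketch(X, domains, n_components=10, lam=1.0, lr=1e-3, n_iter=500, seed=0):
    """Toy multiple-domain NMF sketch (not the paper's exact algorithm).

    X       : (d, n) nonnegative data matrix; columns are samples pooled from all domains.
    domains : length-n array of domain indices (0 .. m-1).
    lam     : weight of the MMD-like penalty pulling per-domain code means together.
    Returns a basis matrix B of shape (d, k) and a coding matrix S of shape (k, n).
    """
    rng = np.random.default_rng(seed)
    domains = np.asarray(domains)
    d, n = X.shape
    k = n_components
    B = rng.random((d, k))
    S = rng.random((k, n))
    m = int(domains.max()) + 1
    masks = [domains == j for j in range(m)]

    for _ in range(n_iter):
        # Gradients of the reconstruction term 0.5 * ||X - B S||_F^2.
        R = B @ S - X
        grad_B = R @ S.T
        grad_S = B.T @ R

        # MMD-like penalty: sum over domain pairs of squared distances between
        # per-domain mean coding vectors (a simplified stand-in for the paper's
        # extended MMD criterion).
        mu = np.stack([S[:, msk].mean(axis=1) for msk in masks], axis=1)  # (k, m)
        mu_bar = mu.mean(axis=1, keepdims=True)
        for j, msk in enumerate(masks):
            # Gradient of sum_{j<l} ||mu_j - mu_l||^2 w.r.t. codes in domain j.
            grad_S[:, msk] += lam * 2.0 * m * (mu[:, [j]] - mu_bar) / msk.sum()

        # Projected gradient step keeps both factors nonnegative.
        B = np.maximum(B - lr * grad_B, 0.0)
        S = np.maximum(S - lr * grad_S, 0.0)

    return B, S


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.abs(rng.normal(size=(50, 120)))
    domains = np.repeat(np.arange(3), 40)  # three equally treated domains
    B, S = mdnmf_sketch(X, domains, n_components=8, lam=0.5)
    print(B.shape, S.shape)  # (50, 8) (8, 120)
```

The point of the sketch is only to show where a domain-mismatch penalty on the coding vectors enters the NMF objective; the paper itself derives dedicated update rules for its extended MMD formulation.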