Collaborative Dual-PLSA: mining distinction and commonality across multiple domains for text classification

  • Authors:
  • Fuzhen Zhuang; Ping Luo; Zhiyong Shen; Qing He; Yuhong Xiong; Zhongzhi Shi; Hui Xiong

  • Affiliations:
  • Chinese Academy of Sciences, Beijing, China; Hewlett Packard Labs China, Beijing, China; Hewlett Packard Labs China, Beijing, China; Chinese Academy of Sciences, Beijing, China; Innovation Works, Beijing, China; Chinese Academy of Sciences, Beijing, China; Rutgers University, New Brunswick, NJ, USA

  • Venue:
  • CIKM '10: Proceedings of the 19th ACM International Conference on Information and Knowledge Management
  • Year:
  • 2010

Abstract

Distribution differences among multiple data domains are a well-known challenge in cross-domain text classification. In this study, we make two new observations along this line. First, the distribution difference may arise because different domains use different keywords to express the same concept. Second, the association between such concepts and the document classes may be stable across domains. These two observations correspond to the distinction and the commonality across data domains. Inspired by them, we propose a generative statistical model, named Collaborative Dual-PLSA (CD-PLSA), to simultaneously capture both the domain distinction and the commonality among multiple domains. Unlike Probabilistic Latent Semantic Analysis (PLSA), which has only one latent variable, the proposed model has two latent factors y and z, corresponding to word concepts and document classes, respectively. The shared commonality intertwines with the distinctions over multiple domains and also serves as the bridge for knowledge transfer. We exploit an Expectation Maximization (EM) algorithm to learn this model, and also propose a distributed version to handle the situation where the data domains are geographically separated from each other. Finally, we conduct extensive experiments over hundreds of classification tasks with multiple source and target domains to validate the superiority of the proposed CD-PLSA model over existing state-of-the-art supervised and transfer learning methods. In particular, we show that CD-PLSA is more tolerant of distribution differences.
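The abstract describes the model only at a high level. As a rough illustration of how a dual-latent-variable PLSA with a shared concept-class association and domain-specific word and document factors could be fit with EM, the sketch below assumes the factorization P(w, d | r) = Σ_{y,z} P(y, z) P(w | y, r) P(d | z, r), with P(y, z) shared across all domains r. This factorization, the function name cd_plsa_sketch, and all parameter names are assumptions made for illustration, not the authors' published equations.

```python
# A minimal, illustrative sketch of EM for a dual-latent-variable PLSA-style model,
# loosely following the abstract's description of CD-PLSA. The exact factorization
# and update rules of the paper may differ from what is assumed here.
import numpy as np

def cd_plsa_sketch(domain_counts, n_concepts, n_classes, n_iter=50, seed=0):
    """domain_counts: list of (n_words_r x n_docs_r) count matrices, one per domain."""
    rng = np.random.default_rng(seed)
    R = len(domain_counts)

    # Shared commonality: joint distribution over (word concept y, document class z).
    p_yz = rng.random((n_concepts, n_classes))
    p_yz /= p_yz.sum()

    # Domain-specific distinctions: P(w | y, r) and P(d | z, r).
    p_w_y = [rng.dirichlet(np.ones(X.shape[0]), size=n_concepts).T for X in domain_counts]
    p_d_z = [rng.dirichlet(np.ones(X.shape[1]), size=n_classes).T for X in domain_counts]

    for _ in range(n_iter):
        new_yz = np.zeros_like(p_yz)
        new_w_y = [np.zeros_like(A) for A in p_w_y]
        new_d_z = [np.zeros_like(B) for B in p_d_z]
        for r in range(R):
            X = domain_counts[r]  # word-document co-occurrence counts for domain r
            for w, d in zip(*np.nonzero(X)):
                # E-step: posterior over (y, z) for this (word, document) pair in domain r.
                post = p_yz * np.outer(p_w_y[r][w], p_d_z[r][d])
                post /= post.sum()
                c = X[w, d]
                # Accumulate expected counts for the M-step.
                new_yz += c * post
                new_w_y[r][w] += c * post.sum(axis=1)
                new_d_z[r][d] += c * post.sum(axis=0)
        # M-step: renormalize each factor into a proper distribution.
        p_yz = new_yz / new_yz.sum()
        p_w_y = [A / (A.sum(axis=0, keepdims=True) + 1e-12) for A in new_w_y]
        p_d_z = [B / (B.sum(axis=0, keepdims=True) + 1e-12) for B in new_d_z]
    return p_yz, p_w_y, p_d_z
```

In this sketch the shared P(y, z) plays the role of the commonality, while the per-domain P(w | y, r) and P(d | z, r) capture the distinctions; a distributed variant could, in principle, keep each domain's counts local and exchange only the expected counts that update the shared P(y, z), though the paper's actual distributed algorithm may be organized differently.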