Learning with weak views based on dependence maximization dimensionality reduction

  • Authors:
  • Qing Zhang; De-Chuan Zhan; Yilong Yin

  • Affiliations:
  • School of Computer Science and Technology, Shandong University, Jinan, China, and National Key Laboratory for Novel Software Technology, Nanjing, China; National Key Laboratory for Novel Software Technology, Nanjing, China, and Shenzhen Key Laboratory of High Performance Data Mining, Shenzhen, China; School of Computer Science and Technology, Shandong University, Jinan, China

  • Venue:
  • IScIDE'12: Proceedings of the Third Sino-foreign-interchange Conference on Intelligent Science and Intelligent Data Engineering
  • Year:
  • 2012

Abstract

A growing number of applications involve multiple views of data, e.g., reporting news on the Internet with both text and video, or identifying a person by both fingerprints and face images. Meanwhile, labeling these data requires expensive effort, so in many applications most data remain unlabeled. Co-training can exploit the information in unlabeled data in multi-view scenarios. However, the assumptions of co-training, i.e., that each view is sufficient and the views are redundant, are too strong to hold in most situations. Notably, different views often differ in discrimination ability, and views with strong discrimination ability are usually hard to obtain. As a consequence, a promising approach is to exploit unlabeled multi-view training data to integrate information from the strong view into the weak view, so that the weak view's discrimination ability is improved. Only classifiers trained on the weak view are then used for subsequent classification tasks. In this paper, we propose a dependence-maximization framework that injects information from strong views into weak ones. Experiments show that the framework outperforms co-training in improving the performance of classifiers trained on the weak view.
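
To make the dependence-maximization idea concrete, the following is a minimal sketch of projecting the weak view onto directions that maximize an empirical dependence measure with the strong view. It assumes HSIC (Hilbert-Schmidt Independence Criterion) as the dependence measure, a linear kernel on the weak view, and a Gaussian kernel on the strong view; the function name `hsic_projection`, the parameters, and these kernel choices are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def hsic_projection(X_weak, X_strong, n_components, sigma=1.0):
    """Dependence-maximization sketch (not the paper's exact method):
    find a linear projection of the weak view that maximizes empirical
    HSIC with a Gaussian kernel over the strong view."""
    n = X_weak.shape[0]

    # Gaussian (RBF) kernel matrix over the strong view.
    sq_norms = np.sum(X_strong ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X_strong @ X_strong.T
    K = np.exp(-sq_dists / (2.0 * sigma ** 2))

    # Centering matrix H = I - (1/n) 11^T used in the empirical HSIC estimator.
    H = np.eye(n) - np.ones((n, n)) / n

    # With a linear kernel on the projected weak view Z = X_weak W, empirical HSIC
    # is proportional to tr(W^T X_weak^T H K H X_weak W); under W^T W = I the
    # maximizer is given by the top eigenvectors of M = X_weak^T H K H X_weak.
    M = X_weak.T @ H @ K @ H @ X_weak
    eigvals, eigvecs = np.linalg.eigh(M)
    W = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]

    # Reduced weak-view representation, enriched by dependence on the strong view.
    return X_weak @ W

# Toy usage with random data: 50 samples, weak view in R^20, strong view in R^5.
rng = np.random.default_rng(0)
Xw = rng.standard_normal((50, 20))
Xs = rng.standard_normal((50, 5))
Z = hsic_projection(Xw, Xs, n_components=3)  # (50, 3) weak-view features
```

A classifier would then be trained only on the reduced weak-view features `Z`, reflecting the setting described in the abstract where only the weak view is available at classification time.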