Diverse reduct subspaces based co-training for partially labeled data

  • Authors:
Duoqian Miao, Can Gao, Nan Zhang, Zhifei Zhang

  • Affiliations:
Department of Computer Science and Technology, Tongji University, Shanghai 201804, PR China and The Key Laboratory of “Embedded System and Service Computing”, Ministry of Education, Sh ... (all authors)

  • Venue:
  • International Journal of Approximate Reasoning
  • Year:
  • 2011


Abstract

Rough set theory is an effective supervised learning model for labeled data. However, practical problems often involve both labeled and unlabeled data, a setting outside the realm of traditional rough set theory. In this paper, the problem of attribute reduction for partially labeled data is first studied. With a new definition of the discernibility matrix, a Markov blanket-based heuristic algorithm is put forward to compute the optimal reduct of partially labeled data. A novel rough co-training model is then proposed, which capitalizes on the unlabeled data to improve the performance of a rough classifier learned from only a small amount of labeled data. The model employs two diverse reducts of the partially labeled data to train its base classifiers on the labeled data, and then makes the base classifiers learn from each other on the unlabeled data iteratively. The classifiers constructed in different reduct subspaces benefit from their diversity on the unlabeled data, which significantly improves the performance of the rough co-training model. Finally, the rough co-training model is analyzed theoretically, and an upper bound on its performance improvement is given. The experimental results show that the proposed model outperforms other representative models in terms of accuracy and even compares favorably with a rough classifier trained with all training data labeled.
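The reduct computation rests on the discernibility matrix. The paper's new definition for partially labeled data is not reproduced here; the sketch below shows only the classical construct it extends, in which an entry records the condition attributes that separate two objects with different decision labels. All names in the sketch are illustrative.

```python
# A minimal sketch of the classical discernibility matrix of a decision
# table. The paper's modified definition for partially labeled data is
# not shown; this is only the standard construct it builds on.
from itertools import combinations


def discernibility_matrix(X, y):
    """Return the non-empty discernibility entries.

    X: sequence of object rows over the condition attributes.
    y: decision labels. Entry (i, j) lists the attribute indices on which
    objects i and j differ, recorded only when their decisions differ.
    """
    entries = {}
    n_attrs = len(X[0])
    for i, j in combinations(range(len(X)), 2):
        if y[i] != y[j]:  # only objects with different decisions need discerning
            diff = frozenset(a for a in range(n_attrs) if X[i][a] != X[j][a])
            if diff:
                entries[(i, j)] = diff
    return entries
```

A reduct is then a minimal attribute subset that intersects every non-empty entry; per the abstract, the Markov blanket-based heuristic targets such a reduct without exhaustive enumeration of this matrix.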
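The co-training loop described in the abstract follows the classical two-view scheme, with the two reduct subspaces serving as the views. Below is a minimal sketch, assuming numeric feature matrices, scikit-learn-style base learners, and that the two reducts are given as attribute-index lists; the parameter names (rounds, per_view, threshold) are illustrative, not from the paper.

```python
# A hedged sketch of co-training over two reduct subspaces; this is a
# generic reconstruction, not the authors' exact rough co-training model.
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier


def co_train(X_lab, y_lab, X_unlab, reduct_a, reduct_b,
             base=None, rounds=10, per_view=5, threshold=0.8):
    base = base or DecisionTreeClassifier(random_state=0)
    X_lab = np.asarray(X_lab, dtype=float)
    y_lab = np.asarray(y_lab)
    X_unlab = np.asarray(X_unlab, dtype=float)
    views = [np.asarray(reduct_a), np.asarray(reduct_b)]
    clfs = [clone(base), clone(base)]
    pool = list(range(len(X_unlab)))

    for _ in range(rounds):
        # Train each base classifier in its own reduct subspace.
        for clf, view in zip(clfs, views):
            clf.fit(X_lab[:, view], y_lab)
        if not pool:
            break
        taken = set()
        # Each classifier pseudo-labels its most confident pool examples;
        # the enlarged training set is shared, so the two classifiers
        # effectively teach each other across the subspaces.
        for clf, view in zip(clfs, views):
            proba = clf.predict_proba(X_unlab[pool][:, view])
            conf = proba.max(axis=1)
            for k in np.argsort(-conf)[:per_view]:
                idx = pool[k]
                if conf[k] < threshold or idx in taken:
                    continue
                X_lab = np.vstack([X_lab, X_unlab[idx:idx + 1]])
                y_lab = np.append(y_lab, clf.classes_[proba[k].argmax()])
                taken.add(idx)
        pool = [i for i in pool if i not in taken]
    return clfs
```

At prediction time the two trained classifiers can be combined, for example by averaging their class probabilities over their respective subspaces; the diversity of the two reducts is what makes the exchanged pseudo-labels informative rather than redundant.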