Multi-label ensemble learning

  • Authors:
  • Chuan Shi; Xiangnan Kong; Philip S. Yu; Bai Wang

  • Affiliations:
  • School of Computer, Beijing University of Posts and Telecommunications, Beijing, China; Department of Computer Science, University of Illinois at Chicago, IL; Department of Computer Science, University of Illinois at Chicago, IL; School of Computer, Beijing University of Posts and Telecommunications, Beijing, China

  • Venue:
  • ECML PKDD'11: Proceedings of the 2011 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part III
  • Year:
  • 2011

Abstract

Multi-label learning aims at predicting potentially multiple labels for a given instance. Conventional multi-label learning approaches focus on exploiting label correlations to improve the accuracy of the learner, either by building an individual multi-label learner or by combining a group of single-label learners. However, the generalization ability of such an individual learner can be weak. It is well known that ensemble learning can effectively improve the generalization ability of learning systems by constructing multiple base learners, and that the performance of an ensemble depends on both the accuracy and the diversity of its base learners. In this paper, we study the problem of multi-label ensemble learning. Specifically, we aim to improve the generalization ability of multi-label learning systems by constructing a group of multi-label base learners that are both accurate and diverse. We propose a novel solution, called EnML, to effectively increase both the accuracy and the diversity of multi-label base learners. In detail, we design two objective functions to evaluate the accuracy and the diversity of multi-label base learners, respectively, and EnML simultaneously optimizes these two objectives with an evolutionary multi-objective optimization method. Experiments on real-world multi-label learning tasks validate the effectiveness of our approach against other well-established methods.
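
The sketch below is a minimal illustration of the accuracy/diversity trade-off the abstract describes, not the authors' EnML algorithm. It assumes a toy setup of my own choosing: synthetic multi-label data from scikit-learn, decision-tree base learners varied by random feature subsets, Hamming loss as the accuracy objective, pairwise prediction disagreement as the diversity objective, and a simple Pareto (non-dominated) selection standing in for a full evolutionary multi-objective optimizer such as NSGA-II.

```python
# Hedged sketch: two objectives (accuracy, diversity) over a population of
# multi-label base learners, followed by Pareto selection and a voted ensemble.
# All concrete choices here (data, learners, metrics) are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import hamming_loss

rng = np.random.default_rng(0)
X, Y = make_multilabel_classification(n_samples=300, n_features=20,
                                      n_classes=5, random_state=0)
X_tr, Y_tr, X_va, Y_va = X[:200], Y[:200], X[200:], Y[200:]

def make_learner(feature_mask):
    """Train one multi-label base learner on a random feature subset."""
    clf = DecisionTreeClassifier(max_depth=5, random_state=0)
    clf.fit(X_tr[:, feature_mask], Y_tr)
    return clf

# Population of candidate base learners, diversified by their feature masks.
masks = [rng.choice(20, size=12, replace=False) for _ in range(30)]
preds = [make_learner(m).predict(X_va[:, m]) for m in masks]

# Objective 1: accuracy of each learner (lower Hamming loss is better).
acc_obj = np.array([hamming_loss(Y_va, p) for p in preds])

# Objective 2: diversity, measured as mean disagreement with the rest of the
# population (higher is better, so negate to treat both objectives as minimization).
def disagreement(i):
    return np.mean([np.mean(preds[i] != preds[j])
                    for j in range(len(preds)) if j != i])
div_obj = -np.array([disagreement(i) for i in range(len(preds))])

# Pareto selection: keep learners not dominated on (Hamming loss, -diversity).
def dominated(i):
    return any(acc_obj[j] <= acc_obj[i] and div_obj[j] <= div_obj[i]
               and (acc_obj[j] < acc_obj[i] or div_obj[j] < div_obj[i])
               for j in range(len(preds)))
front = [i for i in range(len(preds)) if not dominated(i)]

# Ensemble prediction: per-label majority vote over the Pareto-front learners.
vote = (np.mean([preds[i] for i in front], axis=0) >= 0.5).astype(int)
print("Pareto front size:", len(front),
      "| ensemble Hamming loss:", hamming_loss(Y_va, vote))
```

In the paper's formulation the two objectives are optimized jointly by an evolutionary multi-objective method over the base learners themselves; the single Pareto-filtering pass above merely shows how accuracy and diversity can be scored and traded off for an ensemble.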