Exploiting label dependencies for improved sample complexity

  • Authors: Lena Chekina, Dan Gutfreund, Aryeh Kontorovich, Lior Rokach, Bracha Shapira

  • Affiliations:
  • Lena Chekina, Lior Rokach, Bracha Shapira: Department of Information Systems Engineering and Telekom Innovation Laboratories, Ben-Gurion University of the Negev, Beer-Sheva, Israel 84105
  • Dan Gutfreund: IBM Research, Haifa, Israel
  • Aryeh Kontorovich: Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, Israel 84105

  • Venue: Machine Learning

  • Year: 2013

Abstract

Multi-label classification exhibits several challenges not present in the binary case. The labels may be interdependent, so that the presence of a certain label affects the probability of other labels' presence. Exploiting dependencies among the labels could therefore benefit the classifier's predictive performance. Surprisingly, only a few of the existing algorithms address this issue directly by explicitly identifying dependent labels in the dataset. In this paper we propose new approaches for identifying and modeling dependencies between labels. One principal contribution of this work is a theoretical confirmation of the reduction in sample complexity gained from unconditional dependence. Additionally, we develop methods for identifying conditionally and unconditionally dependent label pairs; clustering them into several mutually exclusive subsets; and, finally, performing multi-label classification that incorporates the discovered dependencies. We compare these two notions of label dependence (conditional and unconditional) and evaluate their performance on various benchmark and artificial datasets. We also compare and analyze the labels identified as dependent by each of the methods. Moreover, we define an ensemble framework for the new methods and compare it to existing ensemble methods. An empirical comparison of the new approaches to existing baseline and state-of-the-art methods on 12 benchmark datasets demonstrates that the proposed single-classifier and ensemble methods often outperform many multi-label classification algorithms. Perhaps surprisingly, we find that the weaker notion of unconditional dependence plays the decisive role.
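
As a rough illustration of the pipeline the abstract describes (test label pairs for dependence, then cluster dependent labels into disjoint subsets), the sketch below flags unconditionally dependent pairs with a chi-square test on each pair's 2x2 co-occurrence table and groups them with a union-find pass. The significance level alpha, the function names, and the transitive-closure clustering rule are illustrative assumptions, not the paper's exact procedure.

    # Illustrative sketch (not the paper's exact method): detect unconditionally
    # dependent label pairs with a chi-square test, then group them into
    # mutually exclusive label subsets.
    import numpy as np
    from scipy.stats import chi2_contingency

    def dependent_label_pairs(Y, alpha=0.01):
        """Return pairs (i, j) whose 2x2 co-occurrence table rejects independence.

        Y     : (n_samples, n_labels) binary label matrix.
        alpha : significance level; 0.01 is an assumed default, not the paper's.
        """
        n_labels = Y.shape[1]
        pairs = []
        for i in range(n_labels):
            for j in range(i + 1, n_labels):
                # 2x2 contingency table of joint occurrences of labels i and j.
                table = np.array([
                    [np.sum((Y[:, i] == a) & (Y[:, j] == b)) for b in (0, 1)]
                    for a in (0, 1)
                ])
                # chi2_contingency requires nonzero row and column sums.
                if table.sum(axis=0).min() == 0 or table.sum(axis=1).min() == 0:
                    continue
                _, p_value, _, _ = chi2_contingency(table)
                if p_value < alpha:
                    pairs.append((i, j))
        return pairs

    def cluster_labels(pairs, n_labels):
        """Union-find grouping: dependent pairs land in the same disjoint subset."""
        parent = list(range(n_labels))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        for i, j in pairs:
            parent[find(i)] = find(j)
        groups = {}
        for k in range(n_labels):
            groups.setdefault(find(k), []).append(k)
        return list(groups.values())  # independent labels form singleton subsets

Given such clusters, one natural (but again assumed) next step is to train a separate multi-label classifier, for example a label-powerset model, on each subset and combine the per-cluster predictions.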