Supervised learning and co-training

  • Authors:
  • Malte Darnstädt; Hans Ulrich Simon; Balázs Szörényi

  • Affiliations:
  • Fakultät für Mathematik, Ruhr-Universität Bochum, Bochum, Germany; Fakultät für Mathematik, Ruhr-Universität Bochum, Bochum, Germany; Hungarian Academy of Sciences and University of Szeged, Research Group on Artificial Intelligence, Szeged, Hungary

  • Venue:
  • ALT'11: Proceedings of the 22nd International Conference on Algorithmic Learning Theory
  • Year:
  • 2011

Abstract

Co-training under the Conditional Independence Assumption is among the models that demonstrate how radically the need for labeled data can be reduced when a large amount of unlabeled data is available. In this paper, we explore how much credit for this saving must be assigned solely to the extra assumptions underlying the Co-training model. To this end, we compute general (almost tight) upper and lower bounds on the sample size needed to achieve the success criterion of PAC-learning within the model of Co-training under the Conditional Independence Assumption in a purely supervised setting. The upper bounds lie significantly below the lower bounds for PAC-learning without Co-training. Thus, Co-training saves labeled data even when it is not combined with unlabeled data. On the other hand, the saving is much less radical than the known savings in the semi-supervised setting.
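
For context, a brief sketch of the baseline the abstract compares against: in standard (single-view) PAC-learning without Co-training, the number m of labeled examples needed to reach accuracy ε with confidence 1−δ for a concept class of VC dimension d satisfies the classical bounds below. These are the well-known generic bounds, not the paper's own two-view results, which are stated only in the full text.

  upper bound:  m(ε, δ) = O( (1/ε) · ( d·log(1/ε) + log(1/δ) ) )
  lower bound:  m(ε, δ) = Ω( (d + log(1/δ)) / ε )

The paper's contribution is an analogous pair of (almost tight) bounds for Co-training under the Conditional Independence Assumption, whose upper bounds fall significantly below the Ω-type lower bound above.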