Large Margin vs. Large Volume in Transductive Learning

  • Authors:
  • Ran El-Yaniv, Dmitry Pechyony, Vladimir Vapnik

  • Affiliations:
  • Computer Science Department, Technion - Israel Institute of Technology, Haifa, Israel 32000 (Ran El-Yaniv, Dmitry Pechyony)
  • NEC Laboratories America, Princeton, NJ, USA 08540 (Vladimir Vapnik)

  • Venue:
  • ECML PKDD '08 Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases - Part I
  • Year:
  • 2008

Abstract

We focus on distribution-free transductive learning. In this setting the learning algorithm is given a `full sample' of unlabeled points. Then, a training sample is selected uniformly at random from the full sample and the labels of the training points are revealed. The goal is to predict the labels of the remaining unlabeled points as accurately as possible. The full sample partitions the transductive hypothesis space into a finite number of equivalence classes: all hypotheses in the same equivalence class generate the same dichotomy of the full sample. We consider a large volume principle, whereby the priority of each equivalence class is proportional to its "volume" in the hypothesis space.
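The following is a minimal illustrative sketch (not the paper's algorithm) of how a full sample partitions a finite hypothesis space into equivalence classes of identical dichotomies, with each class's "volume" approximated, as an assumption for illustration, by the fraction of sampled hypotheses it contains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Full sample of unlabeled points in R^2 (in the transductive setting, the
# labels of a randomly chosen training subset would then be revealed).
full_sample = rng.normal(size=(20, 2))

# Toy hypothesis space: homogeneous linear classifiers h_w(x) = sign(w . x),
# sampled uniformly from the unit circle for illustration.
hypotheses = rng.normal(size=(5000, 2))
hypotheses /= np.linalg.norm(hypotheses, axis=1, keepdims=True)

# Each hypothesis induces a dichotomy (a +/-1 labeling) of the full sample.
dichotomies = np.sign(hypotheses @ full_sample.T)  # shape (5000, 20)

# Hypotheses with identical dichotomies form one equivalence class.
classes, counts = np.unique(dichotomies, axis=0, return_counts=True)

# Approximate "volume" of each class: its share of the sampled hypotheses.
volumes = counts / counts.sum()

print(f"{len(classes)} equivalence classes over {full_sample.shape[0]} points")
print("largest estimated class volume:", round(volumes.max(), 3))
```

Under the large volume principle described in the abstract, classes with larger volume would receive higher priority when choosing a labeling of the unlabeled points.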