Separating models of learning from correlated and uncorrelated data

  • Authors:
  • Ariel Elbaz, Homin K. Lee, Rocco A. Servedio, Andrew Wan

  • Affiliations:
  • Department of Computer Science, Columbia University (all authors)

  • Venue:
  • Proceedings of the 18th Annual Conference on Learning Theory (COLT 2005)
  • Year:
  • 2005


Abstract

We consider a natural framework for learning from correlated data, in which successive examples used for learning are generated by a random walk over the space of possible examples. Previous research has suggested that the Random Walk model is more powerful than comparable standard models of learning from independent examples, by exhibiting learning algorithms in the Random Walk framework that have no known counterparts in the standard model. We give strong evidence that the Random Walk model is indeed strictly more powerful than the standard model: we show that if any cryptographic one-way function exists (a widely held assumption in cryptography), then there is a class of functions that can be learned efficiently in the Random Walk setting but not in the standard setting, where all examples are independent.
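To make the contrast between the two example-generation models concrete, here is a minimal sketch. It assumes the common variant of the Random Walk model in which the walk starts at a uniformly random point of {0,1}^n and each step flips one uniformly chosen coordinate; the abstract does not fix these details, so treat the specifics as illustrative rather than as the paper's exact definition.

```python
import random

def iid_examples(f, n, m, rng=random.Random(0)):
    """Standard model: m labeled examples drawn independently
    and uniformly from {0,1}^n."""
    for _ in range(m):
        x = [rng.randint(0, 1) for _ in range(n)]
        yield tuple(x), f(x)

def random_walk_examples(f, n, m, rng=random.Random(0)):
    """Random Walk model (one common variant, assumed here):
    start at a uniform point of {0,1}^n; each successive example
    differs from the previous one in a single uniformly chosen bit."""
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(m):
        yield tuple(x), f(x)
        i = rng.randrange(n)  # pick a coordinate at random...
        x[i] ^= 1             # ...and flip it for the next example

# Illustrative target concept: parity of the first two coordinates.
f = lambda x: x[0] ^ x[1]
print(list(random_walk_examples(f, n=5, m=4)))
```

Note how consecutive examples from random_walk_examples are highly correlated (they agree on all but one bit), whereas iid_examples produces statistically independent points; it is exactly this correlation that learning algorithms in the Random Walk model can exploit.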