Predicting {0,1}-functions on randomly drawn points

  • Authors:
  • D. Haussler; N. Littlestone; M. K. Warmuth

  • Affiliations:
  • Dept. of Computer and Information Sciences, University of California, Santa Cruz, CA, USA (all authors)

  • Venue:
  • SFCS '88: Proceedings of the 29th Annual Symposium on Foundations of Computer Science
  • Year:
  • 1988

Abstract

The authors consider the problem of predicting {0,1}-valued functions on R^n and smaller domains, based on their values on randomly drawn points. Their model is related to L.G. Valiant's learnability model (1984), but does not require the hypotheses used for prediction to be represented in any specified form. The authors first disregard computational complexity and show how to construct prediction strategies that are optimal to within a constant factor for any reasonable class F of target functions. These prediction strategies use the 1-inclusion graph structure from N. Alon et al.'s work on geometric range queries (1987) to minimize the probability of incorrect prediction. They then turn to computationally efficient algorithms. For indicator functions of axis-parallel rectangles and halfspaces in R^n, they demonstrate how their techniques can be applied to construct computationally efficient prediction strategies that are optimal to within a constant factor. They compare the general performance of prediction strategies derived by their method to that of strategies derived from existing methods in Valiant's learnability theory.
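
To make the prediction setting concrete for one of the classes mentioned in the abstract, the sketch below shows a simple consistent-hypothesis predictor for indicator functions of axis-parallel rectangles: it hypothesizes the tightest rectangle enclosing the positively labelled sample points and predicts labels of fresh random points from that hypothesis. This is an illustrative baseline only, not the paper's 1-inclusion-graph strategy, and all names in the code (e.g. TightestRectanglePredictor) are hypothetical.

```python
# Illustrative sketch: a consistent-hypothesis predictor for axis-parallel
# rectangles in R^n. NOT the 1-inclusion-graph strategy from the paper.
import random
from typing import Optional, Sequence, List


class TightestRectanglePredictor:
    """Predicts a {0,1}-valued rectangle indicator from labelled sample points
    by keeping the smallest axis-parallel rectangle containing all positives."""

    def __init__(self, dim: int):
        self.dim = dim
        self.lo: Optional[List[float]] = None  # coordinate-wise minima of positives
        self.hi: Optional[List[float]] = None  # coordinate-wise maxima of positives

    def fit(self, points: Sequence[Sequence[float]], labels: Sequence[int]) -> None:
        positives = [p for p, y in zip(points, labels) if y == 1]
        if positives:
            self.lo = [min(p[i] for p in positives) for i in range(self.dim)]
            self.hi = [max(p[i] for p in positives) for i in range(self.dim)]

    def predict(self, x: Sequence[float]) -> int:
        # With no positive examples observed, predict 0 everywhere.
        if self.lo is None or self.hi is None:
            return 0
        return int(all(self.lo[i] <= x[i] <= self.hi[i] for i in range(self.dim)))


if __name__ == "__main__":
    # Target: indicator of the rectangle [0.2, 0.6] x [0.3, 0.8] in R^2.
    target = lambda x: int(0.2 <= x[0] <= 0.6 and 0.3 <= x[1] <= 0.8)
    rng = random.Random(0)

    sample = [(rng.random(), rng.random()) for _ in range(200)]
    labels = [target(p) for p in sample]

    predictor = TightestRectanglePredictor(dim=2)
    predictor.fit(sample, labels)

    # Estimate the probability of an incorrect prediction on fresh random points.
    test = [(rng.random(), rng.random()) for _ in range(1000)]
    errors = sum(predictor.predict(p) != target(p) for p in test)
    print(f"error rate on fresh random points: {errors / len(test):.3f}")
```

In the prediction model described above, the quantity of interest is the probability that such a strategy mislabels the next randomly drawn point after seeing m labelled examples; the paper's contribution is a general construction (via the 1-inclusion graph) whose error probability is optimal to within a constant factor.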