On-line learning with malicious noise and the closure algorithm

  • Authors:
  • Peter Auer
  • Nicolò Cesa-Bianchi

  • Affiliations:
  • IGI, Graz University of Technology, Klosterwiesgasse 32/2, A-8010 Graz, Austria. E-mail: pauer@igi.tu-graz.ac.at
  • DSI, University of Milan, Via Comelico 39, I-20135 Milano, Italy. E-mail: cesabian@dsi.unimi.it

  • Venue:
  • Annals of Mathematics and Artificial Intelligence
  • Year:
  • 1998

Abstract

We investigate a variant of the on-line learning model for classes of {0,1}-valued functions (concepts) in which the labels of some of the input instances are corrupted by adversarial noise. We propose an extension of a general learning strategy, known as the "Closure Algorithm", to this noise model, and show a worst-case mistake bound of m + (d+1)K for learning an arbitrary intersection-closed concept class C, where K is the number of noisy labels, d is a combinatorial parameter measuring C's complexity, and m is the worst-case mistake bound of the Closure Algorithm for learning C in the noise-free model. For several concept classes our extended Closure Algorithm is efficient and can tolerate a noise rate up to the information-theoretic upper bound. Finally, we show how to efficiently turn any algorithm for the on-line noise model into a learning algorithm for the PAC model with malicious noise.
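To make the abstract concrete, here is a minimal sketch of the noise-free Closure Algorithm for one familiar intersection-closed class, axis-aligned rectangles: the hypothesis is the closure (smallest rectangle) of all positive examples seen so far, and the learner predicts positive iff the instance lies inside it. This is an illustrative toy, not the paper's noise-tolerant extension; the class, stream, and variable names are assumptions for the example.

```python
# Sketch of the Closure Algorithm for axis-aligned rectangles (an
# intersection-closed class). Hypothesis = smallest rectangle
# containing all positive examples seen so far; predict 1 iff the
# instance lies inside it. Noise-free setting only.

class ClosureRectangle:
    def __init__(self, dim):
        self.dim = dim
        self.lo = None  # per-coordinate minima of positive examples
        self.hi = None  # per-coordinate maxima

    def predict(self, x):
        # Before any positive example, the closure is empty: predict 0.
        if self.lo is None:
            return 0
        return int(all(l <= xi <= h
                       for l, xi, h in zip(self.lo, x, self.hi)))

    def update(self, x, label):
        # Conservative update: only positive examples grow the closure,
        # so the hypothesis is always contained in the target concept.
        if label == 1:
            if self.lo is None:
                self.lo, self.hi = list(x), list(x)
            else:
                self.lo = [min(l, xi) for l, xi in zip(self.lo, x)]
                self.hi = [max(h, xi) for h, xi in zip(self.hi, x)]

# Toy run: target rectangle [0,1] x [0,1], a short example stream.
learner = ClosureRectangle(dim=2)
mistakes = 0
stream = [((0.2, 0.3), 1), ((0.9, 0.1), 1),
          ((2.0, 2.0), 0), ((0.5, 0.5), 1)]
for x, y in stream:
    if learner.predict(x) != y:
        mistakes += 1  # every mistake here is a false negative
    learner.update(x, y)
print(mistakes)
```

Because the hypothesis is always a subset of the target, mistakes occur only on positive examples, and each one strictly grows the closure; with adversarial label noise this invariant breaks, which is the difficulty the paper's extended algorithm addresses.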