Online learning of linear classifiers

  • Authors: Jyrki Kivinen
  • Affiliations: Research School of Information Sciences and Engineering, Australian National University, Canberra, ACT 0200, Australia
  • Venue: Advanced lectures on machine learning
  • Year: 2003

Abstract

This paper surveys some basic techniques and recent results related to online learning. Our focus is on linear classification. The most familiar algorithm for this task is the perceptron. We explain the perceptron algorithm and its convergence proof as an instance of a generic method based on Bregman divergences. This leads to a more general algorithm known as the p-norm perceptron. We prove a generalization of the perceptron convergence theorem that covers the p-norm perceptron and the non-separable case. We also show how regularization, again based on Bregman divergences, can make an online algorithm more robust against target movement.
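
As a rough illustration of the algorithms named in the abstract, the sketch below implements the mistake-driven perceptron update together with a p-norm link function. The function names, the learning-rate parameter `eta`, and the particular norm convention used for the link are illustrative assumptions, not details taken from the paper; setting p = 2 makes the link the identity map and recovers the classic perceptron.

```python
import numpy as np

def pnorm_link(theta, p):
    """Link function f = grad(0.5 * ||theta||_q^2), with 1/p + 1/q = 1 and p > 1.
    For p = 2 this is the identity, so the classic perceptron is the p = 2 case
    of the routine below.  (Norm conventions vary across presentations; this is
    one common choice, not necessarily the one used in the paper.)"""
    q = p / (p - 1.0)
    norm_q = np.sum(np.abs(theta) ** q) ** (1.0 / q)
    if norm_q == 0.0:
        return np.zeros_like(theta)
    return np.sign(theta) * np.abs(theta) ** (q - 1.0) / norm_q ** (q - 2.0)

def pnorm_perceptron(stream, dim, p=2.0, eta=1.0):
    """Mistake-driven online learning of a linear classifier.

    `stream` yields (x, y) pairs, where x is a feature vector and y is a
    label in {-1, +1}.  Returns the final prediction weights and the number
    of prediction mistakes made on the stream."""
    theta = np.zeros(dim)            # parameters accumulated additively
    mistakes = 0
    for x, y in stream:
        w = pnorm_link(theta, p)     # weights actually used for prediction
        y_hat = 1 if np.dot(w, x) >= 0 else -1
        if y_hat != y:               # update only when a mistake is made
            theta = theta + eta * y * np.asarray(x, dtype=float)
            mistakes += 1
    return pnorm_link(theta, p), mistakes
```

With p = 2 this is the standard perceptron; taking p of order log n (n the input dimension) is the usual way the p-norm perceptron is tuned to behave more like a multiplicative, Winnow-style algorithm.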