Relative Loss Bounds for On-Line Density Estimation with the Exponential Family of Distributions

  • Authors:
  • Katy S. Azoury
  • M. K. Warmuth

  • Affiliations:
  • College of Business, San Francisco State University, San Francisco, CA 94132, USA. kazoury@sfsu.edu
  • Computer Science Department, University of California, Santa Cruz, Santa Cruz, CA 95064, USA. manfred@cse.ucsc.edu

  • Venue:
  • Machine Learning
  • Year:
  • 2001

Abstract

We consider on-line density estimation with a parameterized density from the exponential family. The on-line algorithm receives one example at a time and maintains a parameter that is essentially an average of the past examples. After receiving an example, the algorithm incurs a loss, which is the negative log-likelihood of the example with respect to the algorithm's current parameter. An off-line algorithm, by contrast, can choose the best parameter based on all the examples. We prove bounds on the additional total loss of the on-line algorithm over the total loss of the best off-line parameter. These relative loss bounds hold for an arbitrary sequence of examples. The goal is to design algorithms with the best possible relative loss bounds. We use Bregman divergences to derive and analyze each algorithm; for the exponential family, these divergences are relative entropies between two distributions of the family. We also use our methods to prove relative loss bounds for linear regression.
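To make the protocol concrete: writing p(x | θ) for the density and θ_t for the parameter held before example x_t arrives, the relative loss (regret) bounded in the abstract is ∑_t −ln p(x_t | θ_t) − min_θ ∑_t −ln p(x_t | θ). Below is a minimal sketch of this on-line protocol, assuming a unit-variance Gaussian as one concrete exponential-family member and a zero initial parameter; the names nll, online_regret, and mu0 are illustrative choices, not taken from the paper.

```python
import math

def nll(x, mu):
    # Negative log-likelihood of x under a unit-variance Gaussian N(mu, 1).
    return 0.5 * math.log(2 * math.pi) + 0.5 * (x - mu) ** 2

def online_regret(xs, mu0=0.0):
    """On-line density estimation for N(mu, 1): before seeing example t,
    the parameter is the average of the examples observed so far
    (mu0 is used before any example arrives). Returns
    (on-line total loss, best off-line total loss, regret)."""
    total, running_sum, n = 0.0, 0.0, 0
    for x in xs:
        mu = running_sum / n if n > 0 else mu0  # average of past examples
        total += nll(x, mu)                     # loss incurred on this example
        running_sum += x
        n += 1
    best_mu = running_sum / n                   # off-line maximum-likelihood parameter
    offline = sum(nll(x, best_mu) for x in xs)
    return total, offline, total - offline      # regret is always >= 0

xs = [0.3, -1.2, 0.8, 0.1, 1.5]
on, off, regret = online_regret(xs)
print(f"online={on:.3f} offline={off:.3f} regret={regret:.3f}")
```

The regret is nonnegative by construction, since the off-line parameter minimizes the total loss over all fixed parameters; the paper's contribution is upper-bounding this gap for an arbitrary example sequence.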