Self-Improving Algorithms

  • Authors:
  • Nir Ailon; Bernard Chazelle; Kenneth L. Clarkson; Ding Liu; Wolfgang Mulzer; C. Seshadhri

  • Affiliations:
  • nailon@google.com; chazelle@cs.princeton.edu and dingliu@cs.princeton.edu and wmulzer@cs.princeton.edu; klclarks@us.ibm.com and csesha@gmail.com

  • Venue:
  • SIAM Journal on Computing
  • Year:
  • 2011

Abstract

We investigate ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an unknown input distribution $\mathcal{D}$. We assume here that $\mathcal{D}$ is of product type. More precisely, suppose that we need to process a sequence $I_1,I_2,\ldots$ of inputs $I=(x_1,x_2,\ldots,x_n)$ of some fixed length $n$, where each $x_i$ is drawn independently from some arbitrary, unknown distribution $\mathcal{D}_i$. The goal is to design an algorithm for these inputs so that eventually the expected running time will be optimal for the input distribution $\mathcal{D}=\prod_i\mathcal{D}_i$. We give such self-improving algorithms for two problems: (i) sorting a sequence of numbers and (ii) computing the Delaunay triangulation of a planar point set. Both algorithms achieve optimal expected limiting complexity. The algorithms begin with a training phase during which they collect information about the input distribution, followed by a stationary regime in which the algorithms settle to their optimized incarnations.
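The framework described in the abstract can be illustrated with a toy sketch. The class below is a simplified illustration, not the paper's actual construction (which builds near-optimal search structures per coordinate): during a training phase it records inputs drawn from the product distribution, then estimates bucket boundaries so that a typical input spreads roughly one value per bucket; in the stationary regime it buckets each coordinate and sorts within buckets. All names (`SelfImprovingSorter`, `training_rounds`) are hypothetical.

```python
import bisect


class SelfImprovingSorter:
    """Toy sketch of a self-improving sorter: a training phase
    collects inputs from the unknown product distribution, then a
    stationary regime exploits the learned statistics."""

    def __init__(self, n, training_rounds=100):
        self.n = n
        self.training_rounds = training_rounds
        self.seen = 0
        self.samples = []       # inputs collected during training
        self.boundaries = None  # bucket boundaries, set after training

    def _finish_training(self):
        # Pool all training values and pick ~n evenly spaced order
        # statistics as boundaries, so a typical input places about
        # one value per bucket.
        pooled = sorted(v for inp in self.samples for v in inp)
        step = max(1, len(pooled) // self.n)
        self.boundaries = pooled[step - 1 :: step][: self.n]

    def sort(self, inp):
        if self.boundaries is None:
            # Training phase: record the input, fall back to a
            # standard comparison sort.
            self.samples.append(list(inp))
            self.seen += 1
            if self.seen >= self.training_rounds:
                self._finish_training()
            return sorted(inp)
        # Stationary regime: route each value to its bucket by binary
        # search, then sort each (typically tiny) bucket.
        buckets = [[] for _ in range(self.n + 1)]
        for x in inp:
            buckets[bisect.bisect_left(self.boundaries, x)].append(x)
        out = []
        for b in buckets:
            b.sort()
            out.extend(b)
        return out
```

With inputs drawn coordinate-wise from fixed per-position distributions, most of the sorting work after training happens inside small buckets; the paper proves that a more refined version of this idea achieves optimal expected limiting complexity.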