An Optimal Algorithm for Monte Carlo Estimation

  • Authors:
  • Paul Dagum; Richard Karp; Michael Luby; Sheldon Ross

  • Venue:
  • SIAM Journal on Computing
  • Year:
  • 2000

Abstract

A typical approach to estimating an unknown quantity $\mu$ is to design an experiment that produces a random variable $Z$ distributed in $[0,1]$ with $E[Z]=\mu$, run this experiment independently a number of times, and use the average of the outcomes as the estimate. In this paper, we consider the case when no a priori information about $Z$ is known except that it is distributed in $[0,1]$. We describe an approximation algorithm ${\cal AA}$ which, given $\epsilon$ and $\delta$, when running independent experiments with respect to any $Z$, produces an estimate that is within a factor $1+\epsilon$ of $\mu$ with probability at least $1-\delta$. We prove that the expected number of experiments run by ${\cal AA}$ (which depends on $Z$) is optimal to within a constant factor for every $Z$.
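
For intuition, below is a minimal Python sketch of a sequential stopping-rule estimator in the spirit described by the abstract: it keeps running the experiment until the running sum crosses a threshold, so the number of experiments adapts to the unknown $Z$. The threshold constant and the names `stopping_rule_estimate` and `sample_z` are illustrative assumptions following the classical stopping-rule theorem, not a reproduction of the paper's full algorithm ${\cal AA}$, which is more refined.

```python
import math
import random

def stopping_rule_estimate(sample_z, epsilon, delta):
    """Sequentially estimate mu = E[Z] for Z in [0,1].

    Runs the experiment until the running sum reaches a threshold
    Upsilon, then returns Upsilon / (number of runs). This is a sketch
    of the stopping-rule idea; the constants below are an assumption
    and may differ from those used in the paper.
    """
    # Threshold grows like log(1/delta) / epsilon^2, scaled by (1 + epsilon).
    upsilon = 1 + 4 * (math.e - 2) * (1 + epsilon) * math.log(2 / delta) / epsilon**2
    total, n = 0.0, 0
    while total < upsilon:
        total += sample_z()  # one independent experiment, outcome in [0,1]
        n += 1
    return upsilon / n

# Usage example: estimate the mean of a Bernoulli(0.3) experiment.
if __name__ == "__main__":
    est = stopping_rule_estimate(lambda: float(random.random() < 0.3),
                                 epsilon=0.05, delta=0.01)
    print(est)
```

Note that the expected number of experiments in such a scheme scales roughly as $\log(1/\delta)/(\mu\epsilon^2)$, so cheap-to-estimate quantities (large $\mu$) stop early, which is the behavior the optimality claim is about.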