Optimal Monte Carlo integration with fixed relative precision

  • Authors:
  • Lesław Gajek; Wojciech Niemiro; Piotr Pokarowski

  • Affiliations:
  • Faculty of Mathematics and Computer Science, University of Łódź, Poland; Faculty of Mathematics and Computer Science, Nicolaus Copernicus University, Toruń, Poland and Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Poland; Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Poland

  • Venue:
  • Journal of Complexity
  • Year:
  • 2013

Abstract

We consider Monte Carlo algorithms for computing an integral θ = ∫ f dπ which is positive but can be arbitrarily close to 0. It is assumed that we can generate a sequence X_n of uniformly bounded random variables with expectation θ. An estimator θ̂ = θ̂(X_1, X_2, ..., X_N) is called an (ε, α)-approximation if it has fixed relative precision ε at a given level of confidence 1 − α, that is, it satisfies P(|θ̂ − θ| ≤ εθ) ≥ 1 − α for all problem instances. Such an estimator exists only if we allow the sample size N to be random and adaptively chosen. We propose an (ε, α)-approximation for which the cost, that is, the expected number of samples, satisfies EN ∼ 2 ln α⁻¹/(θε²) as ε → 0 and α → 0. The main tool in the analysis is a new exponential inequality for randomly stopped sums. We also derive a lower bound on the worst-case complexity of the (ε, α)-approximation. This bound behaves as 2 ln α⁻¹/(θε²). Thus the worst-case efficiency of our algorithm, understood as the ratio of the lower bound to the expected sample size EN, approaches 1 as ε → 0 and α → 0. An L² analogue is to find θ̂ such that E(θ̂ − θ)² ≤ ε²θ². We derive an algorithm with expected cost EN ∼ 1/(θε²) as ε → 0. To this end, we prove an inequality for the mean square error of randomly stopped sums. A corresponding lower bound also behaves as 1/(θε²). The worst-case efficiency of our algorithm, in the L² sense, approaches 1 as ε → 0.
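
The abstract does not spell out the construction, but the kind of adaptive, randomly stopped scheme it alludes to can be sketched as follows. The Python sketch below is an illustration only, not the paper's algorithm: it keeps drawing bounded, nonnegative samples until the running sum reaches a threshold R = 2 ln(α⁻¹)/ε², chosen here heuristically to match the stated cost order 2 ln α⁻¹/(θε²), and then returns R/N as the estimate. The function name and threshold choice are assumptions made for the example.

```python
import math
import random

def adaptive_relative_mc(sample, eps, alpha):
    """Illustrative adaptive Monte Carlo estimator with a sum-threshold
    stopping rule.  `sample()` must return i.i.d. values in [0, 1] with
    unknown mean theta > 0.  This is only a sketch of the kind of
    adaptively stopped scheme the abstract refers to, not the paper's
    construction; the threshold R is a heuristic matching the stated
    cost order 2*ln(1/alpha)/(theta*eps^2)."""
    R = 2.0 * math.log(1.0 / alpha) / eps ** 2  # total "mass" to accumulate
    total, n = 0.0, 0
    while total < R:          # sample size N is random and data-dependent
        total += sample()
        n += 1
    return R / n, n           # estimate of theta and the sample size used

if __name__ == "__main__":
    # Usage: estimate a small Bernoulli mean with ~10% relative error.
    theta_true = 0.01
    est, n = adaptive_relative_mc(
        lambda: 1.0 if random.random() < theta_true else 0.0,
        eps=0.1, alpha=0.05)
    print(f"estimate={est:.5f}  samples used={n}")
```

Under these assumptions the stopping time satisfies EN ≈ R/θ = 2 ln α⁻¹/(θε²), which is the cost order quoted in the abstract; a rigorous (ε, α) guarantee, however, rests on the exponential inequality for randomly stopped sums developed in the paper rather than on this heuristic.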