We consider Monte Carlo algorithms for computing an integral $\theta = \int f\,d\pi$ which is positive but can be arbitrarily close to 0. It is assumed that we can generate a sequence $X_n$ of uniformly bounded random variables with expectation $\theta$. An estimator $\hat\theta = \hat\theta(X_1, X_2, \ldots, X_N)$ is called an $(\varepsilon,\alpha)$-approximation if it has fixed relative precision $\varepsilon$ at a given level of confidence $1-\alpha$, that is, it satisfies $P(|\hat\theta - \theta| \le \varepsilon\theta) \ge 1-\alpha$ for all problem instances. Such an estimator exists only if we allow the sample size $N$ to be random and adaptively chosen. We propose an $(\varepsilon,\alpha)$-approximation for which the cost, that is, the expected number of samples, satisfies $\mathbb{E}N \sim 2\ln\alpha^{-1}/(\theta\varepsilon^2)$ as $\varepsilon \to 0$ and $\alpha \to 0$. The main tool in the analysis is a new exponential inequality for randomly stopped sums. We also derive a lower bound on the worst-case complexity of $(\varepsilon,\alpha)$-approximation; this bound likewise behaves as $2\ln\alpha^{-1}/(\theta\varepsilon^2)$. Thus the worst-case efficiency of our algorithm, understood as the ratio of the lower bound to the expected sample size $\mathbb{E}N$, approaches 1 as $\varepsilon \to 0$ and $\alpha \to 0$. An $L^2$ analogue is to find $\hat\theta$ such that $\mathbb{E}(\hat\theta - \theta)^2 \le \varepsilon^2\theta^2$. We derive an algorithm with expected cost $\mathbb{E}N \sim 1/(\theta\varepsilon^2)$ as $\varepsilon \to 0$. To this end, we prove an inequality for the mean square error of randomly stopped sums. A corresponding lower bound also behaves as $1/(\theta\varepsilon^2)$. The worst-case efficiency of our algorithm, in the $L^2$ sense, approaches 1 as $\varepsilon \to 0$.
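The abstract describes the algorithm only through its expected cost. For concreteness, below is a minimal Python sketch of the classical stopping-rule schema of Dagum, Karp, Luby and Ross (SIAM J. Computing, 2000), which has the same adaptive structure as the estimator discussed here: sample until the running sum first crosses a threshold depending only on $\varepsilon$ and $\alpha$, so that the random sample size $N$ automatically grows as the unknown $\theta$ shrinks. The constant in that threshold is $4(e-2) \approx 2.87$, which the algorithm summarized above sharpens toward the asymptotically optimal $2$. The function name and parameters are illustrative, not code from the paper.

```python
import math
import random

def stopping_rule_estimate(sample, eps, alpha):
    """Fixed-relative-precision estimate of theta = E[X], assuming X in [0, 1].

    Draws samples until the running sum first exceeds a threshold that
    depends only on eps and alpha; the (random) sample size N thus adapts
    to the unknown theta. Returns (theta_hat, n_samples).
    """
    # Threshold of Dagum, Karp, Luby and Ross (2000). The paper abstracted
    # above improves the constant 4(e - 2) toward the optimal 2.
    upsilon = 1.0 + (1.0 + eps) * 4.0 * (math.e - 2.0) * math.log(2.0 / alpha) / eps**2
    total, n = 0.0, 0
    while total < upsilon:
        total += sample()
        n += 1
    return upsilon / n, n

# Example: estimate a small success probability theta = 0.01
# with 10% relative error at 95% confidence.
theta = 0.01
est, n = stopping_rule_estimate(lambda: float(random.random() < theta),
                                eps=0.1, alpha=0.05)
print(f"theta_hat = {est:.5f} after N = {n} samples")
```

Since the loop stops when the sum reaches the threshold $\Upsilon_1$, the expected number of samples is roughly $\Upsilon_1/\theta$, reproducing the $\ln\alpha^{-1}/(\theta\varepsilon^2)$ scaling of the cost bounds stated in the abstract.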