A typical approach to estimating an unknown quantity μ is to design an experiment that produces a random variable Z distributed in [0,1] with E[Z] = μ, run this experiment independently a number of times, and use the average of the outcomes as the estimate. In this paper, we consider the case when no a priori information about Z is known except that it is distributed in [0,1]. We describe an approximation algorithm AA which, given ε and δ, when running independent experiments with respect to any Z, produces an estimate that is within a factor 1+ε of μ with probability at least 1−δ. We prove that the expected number of experiments run by AA (which depends on Z) is optimal to within a constant factor for every Z.
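For intuition, the following is a minimal Python sketch of the stopping-rule idea that underlies adaptive estimators of this kind: rather than fixing the number of experiments in advance, it keeps sampling until the running sum crosses a threshold determined by ε and δ alone, so the number of samples adapts to the unknown μ. The threshold constant 4(e−2)ln(2/δ)/ε² follows the commonly stated form of the stopping-rule theorem and should be treated as an assumption here; a full algorithm such as AA adds further phases (e.g., estimating the variance of Z) that are omitted, and the helper name sample_z is illustrative.

    import math
    import random

    def stopping_rule_estimate(sample_z, eps, delta):
        """Estimate mu = E[Z] for a random variable Z taking values in [0,1].

        Draws independent samples until the running sum exceeds a threshold
        Upsilon_1 that depends only on eps and delta, then returns
        Upsilon_1 / N. By the stopping-rule theorem, the result is within a
        factor 1+eps of mu with probability at least 1-delta, using an
        expected number of samples proportional to log(1/delta)/(eps^2 * mu).
        """
        upsilon = 4 * (math.e - 2) * math.log(2 / delta) / eps ** 2
        upsilon_1 = 1 + (1 + eps) * upsilon
        total, n = 0.0, 0
        while total < upsilon_1:
            total += sample_z()  # one independent run of the experiment
            n += 1
        return upsilon_1 / n

    # Example: Z is Bernoulli(0.3), so mu = 0.3.
    print(stopping_rule_estimate(lambda: float(random.random() < 0.3),
                                 eps=0.05, delta=0.01))

Note that the expected number of samples grows as μ shrinks, which matches the abstract's point that the optimal number of experiments depends on Z itself.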