Optimally Adaptive Integration of Univariate Lipschitz Functions

  • Authors:
  • Ilya Baran; Erik D. Demaine; Dmitriy A. Katz

  • Affiliations:
  • MIT Computer Science and Artificial Intelligence Laboratory, 32 Vassar Street, Cambridge, MA 02139, USA; MIT Computer Science and Artificial Intelligence Laboratory, 32 Vassar Street, Cambridge, MA 02139, USA; Massachusetts Institute of Technology, Sloan School of Management, 50 Memorial Drive, Cambridge, MA 02142, USA

  • Venue:
  • Algorithmica
  • Year:
  • 2008

Abstract

We consider the problem of approximately integrating a Lipschitz function $f$ (with a known Lipschitz constant) over an interval. The goal is to achieve an additive error of at most $\epsilon$ using as few samples of $f$ as possible. We use the adaptive framework: on every problem instance, an adaptive algorithm should perform almost as well as the best possible algorithm tuned for that particular instance. We distinguish between $\mathrm{DOPT}$ and $\mathrm{ROPT}$, the performances of the best possible deterministic and randomized algorithms, respectively. We give a deterministic algorithm that uses $O(\mathrm{DOPT}(f,\epsilon)\cdot\log(\epsilon^{-1}/\mathrm{DOPT}(f,\epsilon)))$ samples and show that an asymptotically better algorithm is impossible. However, any deterministic algorithm requires $\Omega(\mathrm{ROPT}(f,\epsilon)^{2})$ samples on some problem instance. By combining a deterministic adaptive algorithm with Monte Carlo sampling and variance reduction, we give an algorithm that uses at most $O(\mathrm{ROPT}(f,\epsilon)^{4/3}+\mathrm{ROPT}(f,\epsilon)\cdot\log(1/\epsilon))$ samples. We also show that any algorithm requires $\Omega(\mathrm{ROPT}(f,\epsilon)^{4/3}+\mathrm{ROPT}(f,\epsilon)\cdot\log(1/\epsilon))$ samples in expectation on some problem instance $(f,\epsilon)$, which proves that our algorithm is optimal.
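
To make the flavor of adaptive Lipschitz integration concrete, here is a minimal Python sketch of greedy interval subdivision. It illustrates only the basic adaptive idea, not the optimally adaptive algorithm analyzed in the paper or its randomized variant; the function name `adaptive_lipschitz_integrate` and its signature are illustrative choices, not from the paper. The key fact it relies on is standard: given the endpoint samples of an $L$-Lipschitz function on $[x,y]$, the true integral differs from the trapezoid estimate by at most $(L^{2}(y-x)^{2}-(f(y)-f(x))^{2})/(4L)$, so the sketch repeatedly refines the subinterval with the largest error bound until the bounds sum to at most $\epsilon$.

```python
import heapq

def adaptive_lipschitz_integrate(f, a, b, L, eps):
    """Approximate the integral of f over [a, b] to additive error <= eps,
    assuming f is L-Lipschitz with L > 0.  Greedy adaptive-subdivision
    sketch; NOT the optimally adaptive algorithm analyzed in the paper.
    """
    def bound(x, y, fx, fy):
        # Given only the endpoint samples, the integral of an L-Lipschitz
        # function over [x, y] differs from the trapezoid estimate by at
        # most (L^2 (y - x)^2 - (fy - fx)^2) / (4 L).
        return (L * L * (y - x) ** 2 - (fy - fx) ** 2) / (4 * L)

    fa, fb = f(a), f(b)
    e0 = bound(a, b, fa, fb)
    heap = [(-e0, a, b, fa, fb)]  # max-heap via negated error bounds
    total = e0                    # sum of all per-interval error bounds
    while total > eps:
        # Split the subinterval with the largest error bound; a split
        # never increases the total bound, so the loop terminates.
        neg_e, x, y, fx, fy = heapq.heappop(heap)
        total += neg_e            # neg_e == -(bound of popped interval)
        m = 0.5 * (x + y)
        fm = f(m)
        for u, v, fu, fv in ((x, m, fx, fm), (m, y, fm, fy)):
            e = bound(u, v, fu, fv)
            total += e
            heapq.heappush(heap, (-e, u, v, fu, fv))
    # Trapezoid rule summed over the final adaptive subdivision.
    return sum((y - x) * (fx + fy) / 2 for _, x, y, fx, fy in heap)
```

For example, `adaptive_lipschitz_integrate(math.sin, 0.0, math.pi, L=1.0, eps=1e-3)` returns a value within $10^{-3}$ of 2. The per-interval bound vanishes where $f$ attains its full Lipschitz slope, so the subdivision spends samples only where the function is genuinely uncertain; the number of samples an instance forces in this sense is what the paper's $\mathrm{DOPT}$ and $\mathrm{ROPT}$ quantify.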