Optimal solution of ordinary differential equations
Journal of Complexity
We study the worst-case ε-complexity of nonlinear initial-value problems u^(k)(x) = g(x, u(x), u'(x), ..., u^(q)(x)), x ∈ [a, b], 0 ≤ q < k, with given initial conditions. We assume that the function g has r (r ≥ 1) continuous bounded partial derivatives. We consider two types of information about g: standard information, defined by values of g or its partial derivatives, and linear information, defined by values of linear functionals on g. For standard information, we show that the worst-case complexity is Θ((1/ε)^(1/r)), which is independent of k and q. By defining an algorithm that uses integral information, we show that the complexity is O((1/ε)^(1/(r+k−q))) if linear information is used. Hence, linear information is more powerful than standard information: for q = 0, for instance, the complexity decreases from Θ((1/ε)^(1/r)) to O((1/ε)^(1/(r+k))). We also give a lower bound on the ε-complexity for linear information, showing that the complexity is Ω((1/ε)^(1/(r+k))); thus the upper and lower bounds match for q = 0. Closing the gap for the remaining values of q is an open problem.
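To illustrate the shape of the standard-information bound (not the paper's optimal algorithm), consider the classical case k = 1, q = 0, i.e. u'(x) = g(x, u(x)). A fixed-step method of order p attains error O(n^(−p)) from n evaluations of g, so reaching accuracy ε costs on the order of (1/ε)^(1/p) evaluations, mirroring the Θ((1/ε)^(1/r)) complexity. The sketch below uses the fourth-order Runge–Kutta method on the hypothetical test problem u' = u, u(0) = 1, and checks that halving the step size divides the error by roughly 2^4 = 16.

```python
import math

def rk4(g, a, b, u0, n):
    """Classical 4th-order Runge-Kutta with n equal steps on [a, b]."""
    h = (b - a) / n
    x, u = a, u0
    for _ in range(n):
        k1 = g(x, u)
        k2 = g(x + h / 2, u + h / 2 * k1)
        k3 = g(x + h / 2, u + h / 2 * k2)
        k4 = g(x + h, u + h * k3)
        u += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return u

# Test problem: u' = u on [0, 1], exact solution u(1) = e.
g = lambda x, u: u
errors = [abs(rk4(g, 0.0, 1.0, 1.0, n) - math.e) for n in (10, 20, 40)]

# Order-4 convergence: each halving of h shrinks the error ~16x,
# so n(eps) grows like (1/eps)^(1/4).
ratios = [errors[i] / errors[i + 1] for i in range(2)]
```

The abstract's result says that with linear (e.g. integral) information about g, the attainable rate improves from 1/r to 1/(r+k−q); the sketch only demonstrates the standard-information scaling for a fixed-order method.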