Random number generation and quasi-Monte Carlo methods
In learning theory the goal is to reconstruct a function defined on some (typically high-dimensional) domain Ω when only noisy values of this function at a sparse, discrete subset ω ⊂ Ω are available. In this work we use Koksma–Hlawka type estimates to obtain deterministic bounds on the so-called generalization error. The resulting estimates show that the generalization error tends to zero as the noise in the measurements tends to zero and the number of sampling points tends to infinity sufficiently fast.
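For orientation (this is standard background, not quoted from the abstract): a Koksma–Hlawka type estimate bounds the error of an equal-weight quadrature rule by the product of a variation of the integrand and the discrepancy of the sample points. In its classical form on the unit cube it reads

\[
\left| \int_{[0,1]^d} f(u)\,\mathrm{d}u \;-\; \frac{1}{N}\sum_{i=1}^{N} f(x_i) \right|
\;\le\; V_{\mathrm{HK}}(f)\, D_N^*(x_1,\dots,x_N),
\]

where \(V_{\mathrm{HK}}(f)\) is the variation of \(f\) in the sense of Hardy and Krause and \(D_N^*\) is the star discrepancy of the point set \(x_1,\dots,x_N\). Deterministic generalization bounds of the kind described above follow this pattern: low-discrepancy sampling points drive the discrepancy factor, and with it the integration (generalization) error, to zero as \(N\) grows.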