Deterministic design for neural network learning: an approach based on discrepancy
IEEE Transactions on Neural Networks
In this brief, the use of lattice point sets (LPSs) is investigated in the context of general learning problems (including function estimation and dynamic optimization) under the classical empirical risk minimization (ERM) principle, in the case where there is freedom to choose the sampling points in the input space. It is proved that convergence of the ERM principle is guaranteed when LPSs are employed as training sets for the learning procedure, with a convergence rate that can be superlinear under suitable regularity hypotheses on the functions involved. Preliminary simulation results are also provided.
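As a concrete illustration of the sampling scheme described in the abstract, the minimal sketch below builds a rank-1 (Korobov-type) lattice point set and uses it as a deterministic training set for a least-squares fit. The generator `a`, the point count `n`, the toy target, and the feature basis are illustrative assumptions, not the construction or the experiments of the brief; in practice, good generating vectors are obtained by dedicated searches such as component-by-component construction.

```python
import numpy as np

def rank1_lattice(n, d, a):
    """n-point rank-1 (Korobov) lattice in [0, 1)^d: x_i = frac(i * z / n),
    with generating vector z = (1, a, a^2, ...) mod n."""
    z = np.array([pow(a, j, n) for j in range(d)], dtype=np.int64)
    i = np.arange(n, dtype=np.int64).reshape(-1, 1)
    return (i * z % n) / n

# Hypothetical parameters: n prime, 'a' chosen arbitrarily for illustration.
n, d, a = 1021, 2, 76
X = rank1_lattice(n, d, a)                                     # deterministic training inputs
y = np.sin(2 * np.pi * X[:, 0]) * np.cos(2 * np.pi * X[:, 1])  # toy target

# Minimal ERM step: least-squares fit of a linear-in-features model,
# standing in for the learning machines considered in the brief.
Phi = np.column_stack([np.ones(n),
                       np.sin(2 * np.pi * X),
                       np.cos(2 * np.pi * X)])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
empirical_risk = np.mean((Phi @ w - y) ** 2)
print(f"empirical risk on the lattice training set: {empirical_risk:.3e}")
```

Replacing `rank1_lattice` with i.i.d. uniform draws gives the random-sampling baseline against which such deterministic designs are typically compared.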