Convergence analysis of convex incremental neural networks
Annals of Mathematics and Artificial Intelligence
Geometric rates of approximation by neural networks
SOFSEM'08 Proceedings of the 34th conference on Current trends in theory and practice of computer science
A neural network of smooth hinge functions
IEEE Transactions on Neural Networks
We give upper bounds on the rates of approximation of a set of functions from a real Hilbert space by convex greedy iterations. The approximation method was originally proposed and analyzed by Jones (1992). Barron (1993) applied the method to the set of functions computable by single-hidden-layer feedforward neural networks and showed that such networks achieve an integrated squared error of order O(1/n), where n is the number of iterations or, equivalently, the number of nodes in the network. Assuming that the functions to be approximated satisfy the so-called δ-angular condition, we show that a rate of approximation of order O(q^n) is achievable, where 0 ≤ q < 1. For the set of functions considered, the reported geometric rate of approximation is therefore an improvement on the Maurey-Jones-Barron upper bound. In the case of orthonormal convex greedy approximations, the δ-angular condition is shown to be equivalent to geometrically decaying expansion coefficients. In finite dimensions, the δ-angular condition is proven to hold for a wide class of functions.
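The convex greedy iteration described above (f_k = (1 - a) f_{k-1} + a g, with the atom g and step a chosen to reduce the residual) can be sketched in a few lines. The following is a minimal illustration, not the paper's method: it assumes a finite-dimensional Hilbert space (R^d with the Euclidean inner product), a hypothetical random dictionary G whose convex hull contains the target f, and atom selection by maximal inner product with the residual followed by an exact line search on [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: target f lies in the convex hull of dictionary atoms G.
d, m = 20, 200
G = rng.normal(size=(m, d))          # dictionary atoms g_1, ..., g_m
w = rng.random(m)
w /= w.sum()                         # convex weights
f = w @ G                            # target in conv(G)

def greedy_convex_approx(f, G, n_iter):
    """Jones-style convex greedy iteration:
    f_k = (1 - a) f_{k-1} + a g, with g the atom most aligned with the
    residual and a the exact minimizer of the squared error on [0, 1]."""
    fk = np.zeros_like(f)
    errs = []
    for _ in range(n_iter):
        r = f - fk                   # current residual
        g = G[np.argmax(G @ r)]      # atom with largest inner product with r
        diff = g - fk
        # exact line search: minimize ||f - (fk + a * diff)||^2 over a in [0, 1]
        a = np.clip(diff @ r / (diff @ diff + 1e-12), 0.0, 1.0)
        fk = fk + a * diff
        errs.append(float(np.sum((f - fk) ** 2)))
    return errs

errs = greedy_convex_approx(f, G, 50)
print(f"squared error after 50 iterations: {errs[-1]:.6f}")
```

Because a = 0 recovers the previous iterate, the exact line search makes the squared error non-increasing; under Jones' and Barron's assumptions the error decays like O(1/n), and the paper's point is that the δ-angular condition upgrades this to a geometric rate O(q^n).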