Approximation by neural networks and learning theory
Journal of Complexity, Special Issue: Algorithms and Complexity for Continuous Problems (Schloss Dagstuhl, Germany, September 2004)
The problem of approximating functions by neural networks using incremental algorithms is studied. For functions in a rather general class, characterized by certain smoothness properties with respect to the L2 norm, we derive upper bounds on the approximation error, where the error is measured in the Lq norm, 1 ≤ q ≤ ∞. These results extend previous work applicable to the case q = 2, and provide an explicit algorithm that achieves the derived approximation error rate. In the range q ≤ 2, near-optimal rates of convergence are demonstrated. A gap remains, however, with respect to a recently established lower bound in the case q > 2, although the rates achieved are provably better than those obtained by optimal linear approximation. Extensions of the results from the L2 norm to Lp are also discussed. A further conclusion is that no loss of generality is incurred by restricting attention to networks with positive hidden-to-output weights. Moreover, explicit bounds on the size of the hidden-to-output weights are established, which suffice to guarantee the stated convergence rates.
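To make the incremental setting concrete, the following Python sketch illustrates one common variant of such a scheme (not the paper's algorithm): at each step a single sigmoidal hidden unit is added, with its nonnegative hidden-to-output weight chosen to best reduce the discretized L2 residual. The function names, random candidate-sampling strategy, and parameter values are illustrative assumptions only.

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def incremental_fit(x, y, n_units=20, n_candidates=500, seed=None):
    """Greedily build f_n(x) = sum_k c_k * sigmoid(a_k * x + b_k) with c_k >= 0.

    Illustrative sketch: each new unit is fit to the current residual by
    sampling random inner weights and keeping the best candidate.
    """
    rng = np.random.default_rng(seed)
    residual = y.copy()
    units = []  # list of (a, b, c) triples
    for _ in range(n_units):
        best = None
        for _ in range(n_candidates):
            a = rng.normal(scale=5.0)  # random candidate inner weight
            b = rng.normal(scale=5.0)  # random candidate bias
            g = sigmoid(a * x + b)
            # Least-squares output weight for this unit, clipped at zero to
            # respect the positive hidden-to-output weight constraint.
            c = max(0.0, float(g @ residual) / float(g @ g))
            err = float(np.mean((residual - c * g) ** 2))
            if best is None or err < best[0]:
                best = (err, a, b, c, g)
        _, a, b, c, g = best
        units.append((a, b, c))
        residual = residual - c * g  # update residual after adding the unit
    return units

def predict(units, x):
    return sum(c * sigmoid(a * x + b) for a, b, c in units)

if __name__ == "__main__":
    x = np.linspace(-1.0, 1.0, 400)
    y = np.abs(x)  # example target function
    units = incremental_fit(x, y, n_units=30, seed=0)
    print("discretized L2 error:", np.sqrt(np.mean((predict(units, x) - y) ** 2)))

This pure-greedy variant simply fits each new unit to the residual; relaxed or convex-combination updates, as analyzed in the incremental-approximation literature, differ in how the existing weights are rescaled when a unit is added.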