We consider the problem of learning neural networks from samples. We determine a sample size sufficient for almost-optimal stochastic approximation of function classes. In terms of the accuracy-confidence function, we show that the least-squares estimator is almost optimal for this problem. These results can be used to solve Smale's network problem.
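To illustrate the kind of estimator the abstract refers to, the sketch below fits a hypothesis to samples by minimizing the empirical squared error. This is a minimal, hypothetical example over an affine hypothesis class h(x) = a·x + b, not the paper's actual construction for neural-network classes; the function names and the closed-form solution are standard but chosen here only for illustration.

```python
# Hypothetical sketch (not the paper's construction): an empirical
# least-squares estimator over a simple affine hypothesis class.
# Given samples (x_i, y_i) of an unknown target function, we choose
# h(x) = a*x + b minimizing the empirical risk
# (1/m) * sum_i (h(x_i) - y_i)^2, which has a closed-form solution.

def least_squares_fit(xs, ys):
    """Return (a, b) minimizing sum((a*x + b - y)^2) over the sample."""
    m = len(xs)
    mx = sum(xs) / m
    my = sum(ys) / m
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def empirical_risk(a, b, xs, ys):
    """Mean squared error of h(x) = a*x + b on the sample."""
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

if __name__ == "__main__":
    # Noise-free samples of the target f(x) = 2x + 1: the estimator
    # recovers the target exactly and the empirical risk is zero.
    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [2 * x + 1 for x in xs]
    a, b = least_squares_fit(xs, ys)
    print(a, b, empirical_risk(a, b, xs, ys))
```

In the learning-theoretic setting of the abstract, one would additionally ask how large the sample must be so that, with high confidence, the empirical minimizer's risk is close to the best achievable over the class; that accuracy-confidence trade-off is what the paper quantifies.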