Approximation and radial-basis-function networks
Neural Computation
Rate of approximation results motivated by robust neural network learning
COLT '93 Proceedings of the sixth annual conference on Computational learning theory
The nature of statistical learning theory
Approximation and learning of convex superpositions
Journal of Computer and System Sciences - Special issue: 26th annual ACM symposium on the theory of computing (STOC'94), May 23–25, 1994, and second annual European conference on computational learning theory (EuroCOLT'95), March 13–15, 1995
Complexity of Gaussian-radial-basis networks approximating smooth functions
Journal of Complexity
Accuracy of suboptimal solutions to kernel principal component analysis
Computational Optimization and Applications
Approximate Minimization of the Regularized Expected Error over Kernel Models
Mathematics of Operations Research
Journal of Computational and Applied Mathematics
On the exponential convergence of matching pursuits in quasi-incoherent dictionaries
IEEE Transactions on Information Theory
Geometric Upper Bounds on Rates of Variable-Basis Approximation
IEEE Transactions on Information Theory
A neural network approach for solving Fredholm integral equations of the second kind
Neural Computing and Applications
Approximation of solutions of integral equations by networks with kernel units is investigated theoretically. Upper bounds are derived on the rate at which approximation errors decrease as the number of kernel units grows, for networks approximating solutions of Fredholm integral equations. The estimates are obtained for Gaussian and degenerate kernels.
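For the degenerate-kernel case mentioned in the abstract, a Fredholm equation of the second kind reduces to a finite linear system, which gives a concrete sense of why such solutions are easy for kernel units to approximate. The following is a minimal sketch, not the paper's method; the kernel K(x, y) = x·y, the right-hand side g, and the function name are illustrative choices only.

```python
import numpy as np

def solve_degenerate(g, n=1000):
    # Sketch: solve f(x) = g(x) + \int_0^1 K(x, y) f(y) dy for the
    # degenerate kernel K(x, y) = x * y (illustrative choice).
    # With this kernel the solution has the form f(x) = g(x) + c * x,
    # where c = \int_0^1 y f(y) dy satisfies c = \int y g(y) dy + c/3.
    y = (np.arange(n) + 0.5) / n        # midpoint grid on [0, 1]
    b = np.mean(y * g(y))               # midpoint rule for \int y g(y) dy
    c = b / (1.0 - 1.0 / 3.0)           # solve the resulting 1x1 system
    return lambda x: g(x) + c * x

# For g(x) = x the exact solution is f(x) = 1.5 * x.
f = solve_degenerate(lambda x: x)
print(abs(f(0.7) - 1.05))               # small discretization error
```

Because the kernel is a single product term a(x)·b(y), the integral equation collapses to one scalar unknown; a rank-m degenerate kernel would instead yield an m×m linear system.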