Neural networks provide a more flexible means of approximating functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of a fixed set of functions, such as orthogonal polynomials or Hermite functions, whereas in neural networks one may also adjust the inner parameters of the functions being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of the best-approximation operator) do not hold for neural networks, and the optimization of their parameters is more difficult than in linear regression. Experimental results suggest that these drawbacks are offset by substantially lower model complexity, which allows accurate approximation even in high-dimensional cases. We give theoretical results comparing model-complexity requirements for two types of approximators: traditional linear ones and so-called variable-basis ones, which include neural, radial-basis, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation theory and integral representations tailored to computational units, we describe cases in which neural networks outperform every linear approximator.
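The structural contrast drawn above can be made concrete in a minimal numpy sketch (an illustration, not from the paper; the target function, unit counts, and training schedule are arbitrary choices). A fixed-basis linear approximator adjusts only the outer coefficients of prescribed basis functions, solved in closed form by least squares; a variable-basis model such as a one-hidden-layer sigmoid network additionally adjusts the inner parameters of its units, here by plain gradient descent on the mean squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3 * np.pi * x)  # example target function (arbitrary choice)

# --- Linear (fixed-basis) approximation: the basis 1, x, x^2, ... is fixed;
# only the outer coefficients are fitted, in closed form by least squares.
n_units = 8
Phi = np.vander(x, n_units, increasing=True)       # fixed polynomial basis
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
linear_err = np.max(np.abs(Phi @ coef - y))        # worst-case error on the grid

# --- Variable-basis approximation: a one-hidden-layer sigmoid network,
# where the inner parameters (w, b) are adjusted along with the outer c.
w = rng.normal(size=n_units)
b = rng.normal(size=n_units)
c = 0.1 * rng.normal(size=n_units)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for _ in range(20000):
    z = np.outer(x, w) + b            # hidden pre-activations, shape (200, n_units)
    h = sigmoid(z)
    pred = h @ c
    r = pred - y                      # residual
    # gradients of the (halved) mean squared error w.r.t. c, w, b
    gc = h.T @ r / len(x)
    gh = np.outer(r, c) * h * (1.0 - h)
    gw = (gh * x[:, None]).sum(axis=0) / len(x)
    gb = gh.sum(axis=0) / len(x)
    c -= lr * gc
    w -= lr * gw
    b -= lr * gb

net_err = np.max(np.abs(pred - y))    # worst-case error of the network on the grid
```

Both models use the same number of units, so the comparison isolates exactly the extra freedom of the variable basis: the linear fit is a convex problem solved exactly, while the network trades that convexity (and the uniqueness and continuity properties noted above) for adjustable inner parameters.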