Recently, multiple-input, single-output, single hidden-layer feedforward neural networks have been shown to be capable of approximating a nonlinear map and its partial derivatives. Specifically, neural nets have been shown to be dense in various Sobolev spaces. Building on this result, we show that a net can be trained so that the map and its derivatives are learned. Specifically, we use a result of Gallant to show that least squares and similar estimates are strongly consistent in Sobolev norm, provided the number of hidden units and the size of the training set increase together. We illustrate these results with an application to the inverse problem of chaotic dynamics: recovery of a nonlinear map from a time series of its iterates. These results extend automatically to nets that embed the single hidden-layer feedforward network as a special case.
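The training idea described above can be illustrated with a small sketch. The example below is not the paper's estimator: it is a minimal, assumed setup in which a single hidden-layer tanh network is fitted by gradient descent on a least-squares criterion that penalizes errors in both the function values and the derivative values (a Sobolev-norm-style loss). The target map `sin` and all hyperparameters (hidden width, learning rate, weight `lam`) are illustrative choices, not taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target map and its derivative (not from the paper).
f, df = np.sin, np.cos
x = np.linspace(-np.pi, np.pi, 64)
y, dy = f(x), df(x)

H = 20                       # hidden units (assumed)
w = rng.normal(size=H)       # input-to-hidden weights
b = rng.normal(size=H)       # hidden biases
v = rng.normal(size=H) / H   # hidden-to-output weights
lam, lr = 1.0, 0.01          # derivative-term weight, step size (assumed)

def forward(x):
    """Net output N(x) and its analytic derivative N'(x)."""
    z = np.outer(x, w) + b   # (n, H) pre-activations
    t = np.tanh(z)
    s = 1.0 - t**2           # sech^2, the tanh derivative
    return t @ v, s @ (v * w), t, s

def sobolev_loss():
    n0, n1, _, _ = forward(x)
    return np.mean((n0 - y)**2) + lam * np.mean((n1 - dy)**2)

loss0 = sobolev_loss()
n = len(x)
for _ in range(3000):
    n0, n1, t, s = forward(x)
    r0 = (n0 - y)[:, None]   # residuals in values
    r1 = (n1 - dy)[:, None]  # residuals in derivatives
    xs = x[:, None]
    # Gradient of the value term w.r.t. v, w, b.
    gv = (2 / n) * np.sum(r0 * t, axis=0)
    gw = (2 / n) * np.sum(r0 * (v * s) * xs, axis=0)
    gb = (2 / n) * np.sum(r0 * (v * s), axis=0)
    # Gradient of the derivative term (chain rule through N').
    gv += lam * (2 / n) * np.sum(r1 * (w * s), axis=0)
    gw += lam * (2 / n) * np.sum(r1 * (v * s) * (1 - 2 * w * t * xs), axis=0)
    gb += lam * (2 / n) * np.sum(r1 * (-2 * v * w * t * s), axis=0)
    v -= lr * gv
    w -= lr * gw
    b -= lr * gb

loss1 = sobolev_loss()
print(loss0, loss1)  # combined value + derivative error, before and after
```

Because the network's derivative is available in closed form, the derivative residuals enter the loss just like the value residuals; reducing this combined criterion is the sense in which the map and its derivatives are learned simultaneously.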