Multilayer feedforward networks are universal approximators. Neural Networks.
Learning internal representations by error propagation. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1.
Neural Networks: A Comprehensive Foundation.
A new approximating model for the time invariant nonlinear operators with fading memory. In: CIMMACS'09: Proceedings of the 8th WSEAS International Conference on Computational Intelligence, Man-Machine Systems and Cybernetics.
Historical consistent complex valued recurrent neural network. In: ICANN'11: Proceedings of the 21st International Conference on Artificial Neural Networks, Part I.
Recurrent kernel machines: Computing with infinite echo state networks. Neural Computation.
A technical trading indicator based on dynamical consistent neural networks. In: ICANN'06: Proceedings of the 16th International Conference on Artificial Neural Networks, Part II.
Learning long term dependencies with recurrent neural networks. In: ICANN'06: Proceedings of the 16th International Conference on Artificial Neural Networks, Part I.
Neural networks represent a class of functions for the efficient identification and forecasting of dynamical systems. It has been shown that feedforward networks are able to approximate any (Borel-)measurable function on a compact domain [1,2,3]. Recurrent neural networks (RNNs) have been developed for a better understanding and analysis of open dynamical systems. Compared to feedforward networks, they offer several advantages, which have been discussed extensively in the literature, e.g. [4]. Still, the question often arises whether RNNs are able to model every open dynamical system, which would be desirable for a broad spectrum of applications. In this paper we prove the universal approximation ability of RNNs in state space model form. The proof builds on the result of Hornik, Stinchcombe, and White on feedforward neural networks [1].
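For orientation, the state space model form referred to above is commonly written as follows; this is a generic sketch, and the exact symbols (transition matrices A and B, output matrix C, bias theta, activation f) are assumptions of this note rather than the paper's own notation:

\[
  s_{t+1} = f(A s_t + B x_t - \theta), \qquad y_t = C s_t ,
\]

where \(x_t\) denotes the external input, \(s_t\) the internal (hidden) state, \(y_t\) the network output, and \(f\) a sigmoidal activation applied componentwise. The universal approximation question then asks how closely the input-output behaviour of such a system can match that of an arbitrary open dynamical system.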