Recently, fully connected recurrent neural networks have been proven to be computationally rich: at least as powerful as Turing machines. This work focuses on another class of networks, popular in control applications, which has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models) and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have limited feedback, which comes only from the output neuron rather than from hidden states. They are formalized by

y(t) = Ψ(u(t-nu), ..., u(t-1), u(t), y(t-ny), ..., y(t-1)),

where u(t) and y(t) denote the input and output of the network at time t, nu and ny are the input and output orders, and Ψ is the mapping performed by a Multilayer Perceptron. We constructively prove that NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks, and thus as Turing machines. We conclude that, in theory, one can use NARX models rather than conventional recurrent networks without any computational loss, even though their feedback is limited. Furthermore, these results raise the questions of how much feedback or recurrence is necessary for a network to be Turing equivalent, and what restrictions on feedback limit computational power.
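The NARX recursion above can be sketched in a few lines of Python. This is a minimal illustration only, not the paper's construction: the one-hidden-layer MLP standing in for Ψ uses random, untrained weights, and the names (`narx_step`, `psi`, the orders `nu`, `ny`) are illustrative assumptions.

```python
import numpy as np

def narx_step(psi, u_hist, y_hist):
    """One NARX update: y(t) = Psi(u(t-nu), ..., u(t), y(t-ny), ..., y(t-1)).

    u_hist: the nu+1 most recent inputs  [u(t-nu), ..., u(t)]
    y_hist: the ny most recent outputs   [y(t-ny), ..., y(t-1)]
    psi:    the MLP mapping, here any callable on the concatenated window
    """
    return psi(np.concatenate([u_hist, y_hist]))

# A toy one-hidden-layer MLP standing in for Psi (random weights, untrained).
rng = np.random.default_rng(0)
nu, ny, hidden = 2, 2, 8
W1 = rng.standard_normal((hidden, nu + 1 + ny))
b1 = rng.standard_normal(hidden)
W2 = rng.standard_normal(hidden)

def psi(x):
    return float(W2 @ np.tanh(W1 @ x + b1))

# Run the NARX recursion over an input sequence: the only feedback into the
# network is the tapped delay line of its own past outputs, y[-ny:].
u = np.sin(np.linspace(0, 2 * np.pi, 20))
y = list(np.zeros(ny))  # initial values for the output taps
for t in range(nu, len(u)):
    y.append(narx_step(psi, u[t - nu:t + 1], np.array(y[-ny:])))
```

Note that, consistent with the limited-feedback structure described above, the loop carries no hidden state between time steps; everything the network remembers is held in the delayed output values.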