System identification: theory for the user
The local minima-free condition of feedforward neural networks for outer-supervised learning
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
A hybrid linear/nonlinear training algorithm for feedforward neural networks
IEEE Transactions on Neural Networks
Identification and control of dynamical systems using neural networks
IEEE Transactions on Neural Networks
Comparison of four neural net learning methods for dynamic system identification
IEEE Transactions on Neural Networks
Training recurrent neural networks: why and how? An illustration in dynamical process modeling
IEEE Transactions on Neural Networks
Training feedforward networks with the Marquardt algorithm
IEEE Transactions on Neural Networks
Modeling of underwater vehicle's movement dynamics using neural networks
ACMOS'09 Proceedings of the 11th WSEAS international conference on Automatic control, modelling and simulation
Previous papers have noted the difficulty of obtaining neural models that remain stable under simulation when trained with prediction-error-based methods. Here the differences between series-parallel and parallel identification structures for training neural models are investigated. The effect of the error-surface shape on training convergence and simulation performance is analysed using a standard algorithm operating in both training modes. A combined series-parallel/parallel training scheme is proposed to provide a more effective means of obtaining accurate neural simulation models. Simulation examples show that the combined scheme is advantageous when the solution space is known or suspected to be complex.
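The distinction between the two identification structures can be sketched as follows. In the series-parallel (one-step-ahead, NARX-type) mode the regressor uses the *measured* past output, while in the parallel (free-run, NOE-type) mode the model feeds back its *own* past prediction, so errors can accumulate. This is a minimal sketch with a hypothetical first-order plant and a deliberately mismatched model, not the paper's actual experiments:

```python
import numpy as np

def series_parallel_predict(model, u, y_meas):
    # Series-parallel (one-step-ahead) prediction:
    # the regressor uses the measured past output y_meas[k-1].
    y_hat = np.zeros_like(y_meas)
    for k in range(1, len(u)):
        y_hat[k] = model(y_meas[k - 1], u[k - 1])
    return y_hat

def parallel_predict(model, u, y0):
    # Parallel (free-run) simulation:
    # the regressor feeds back the model's own past prediction.
    y_hat = np.zeros(len(u))
    y_hat[0] = y0
    for k in range(1, len(u)):
        y_hat[k] = model(y_hat[k - 1], u[k - 1])
    return y_hat

# Hypothetical first-order plant y[k] = 0.9 y[k-1] + 0.5 u[k-1],
# and a model with a slightly biased pole (0.85 instead of 0.9).
model = lambda y, u: 0.85 * y + 0.5 * u

rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1]

e_sp = np.mean((series_parallel_predict(model, u, y) - y) ** 2)
e_p = np.mean((parallel_predict(model, u, y[0]) - y) ** 2)
```

For a mismatched model the free-run error `e_p` typically exceeds the one-step-ahead error `e_sp`, which is why a model trained purely in series-parallel mode can look accurate in training yet simulate poorly, motivating the combined scheme.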