In this paper, nonlinear functions generated by randomly initialized multilayer perceptrons (MLPs) and simultaneous recurrent neural networks (SRNs) are learned by MLPs and SRNs. Because training SRNs is a challenging task, a new learning algorithm, DEPSO, is introduced. DEPSO is a standard particle swarm optimization (PSO) algorithm augmented with a differential evolution (DE) step that aids swarm convergence. Results from DEPSO are compared with those of the standard backpropagation (BP) and PSO algorithms. The experiments confirm that functions generated by SRNs are harder to learn than those generated by MLPs, and that DEPSO learns both classes of functions more effectively than BP or PSO. All three algorithms are also trained on several benchmark functions to corroborate these results.
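The abstract does not specify exactly how the differential evolution step is interleaved with the PSO update, so the following is only a minimal sketch of one plausible PSO/DE hybrid of the kind described: a standard PSO velocity-and-position update, followed by a DE/rand/1/bin mutation-and-crossover step applied to each particle's personal best. All parameter names and defaults (`w`, `c1`, `c2`, `de_F`, `de_CR`, swarm size, iteration count) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def depso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
          de_F=0.5, de_CR=0.9, bounds=(-5.0, 5.0), seed=0):
    """Minimize f over a box. Hypothetical PSO + DE hybrid sketch; the
    actual DEPSO update order/parameters may differ from the paper's."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros((n_particles, dim))                  # velocities
    pbest = x.copy()                                  # personal bests
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                # global best

    for _ in range(iters):
        # --- standard PSO velocity and position update ---
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]

        # --- DE/rand/1/bin step on personal bests to aid convergence ---
        for i in range(n_particles):
            a, b, c = rng.choice(n_particles, 3, replace=False)
            mutant = pbest[a] + de_F * (pbest[b] - pbest[c])
            cross = rng.random(dim) < de_CR
            trial = np.clip(np.where(cross, mutant, pbest[i]), lo, hi)
            ft = f(trial)
            if ft < pbest_f[i]:               # greedy DE selection
                pbest[i], pbest_f[i] = trial, ft

        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())
```

In a training setting, `f` would be the network's error over the training set as a function of its flattened weight vector; here a simple benchmark objective (e.g. the sphere function) suffices to exercise the optimizer.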