Accelerating the Convergence of EM-Based Training Algorithms for RBF Networks
IWANN '01 Proceedings of the 6th International Work-Conference on Artificial and Natural Neural Networks: Connectionist Models of Neurons, Learning Processes and Artificial Intelligence-Part I
In this work, a probabilistic model is established for recurrent networks. The expectation-maximization (EM) algorithm, combined with a mean-field approximation, is then applied to derive a new fast training algorithm for recurrent networks. The new algorithm converts the training of a complicated recurrent network into the training of an array of individual feedforward neurons, each of which is then trained via a linear weighted regression algorithm. On benchmark problems, training time improved by a factor of five to fifteen.
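The per-neuron subproblem mentioned above reduces to a weighted least-squares fit. The following is a minimal sketch of that step only, not the paper's full EM procedure; the function name, the toy data, and the interpretation of the weights as per-sample responsibilities are illustrative assumptions.

```python
import numpy as np

def weighted_linear_regression(X, y, w):
    """Solve min_beta sum_i w_i * (y_i - x_i . beta)^2
    via the weighted normal equations (X^T W X) beta = X^T W y.
    X: (n, d) inputs, y: (n,) targets, w: (n,) nonnegative weights."""
    WX = X * w[:, None]                      # scale each row of X by its weight
    return np.linalg.solve(X.T @ WX, X.T @ (w * y))

# Toy check: with noiseless data the exact coefficients are recovered
# regardless of the (positive) weighting.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
true_beta = np.array([2.0, -1.0])
y = X @ true_beta
w = rng.uniform(0.5, 1.5, size=50)           # hypothetical per-sample weights
beta = weighted_linear_regression(X, y, w)
```

In the EM setting, such a fit would be repeated for each neuron in the array, with the weights supplied by the E-step.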