In the classical deterministic Elman model, the parameter estimates must be very accurate; otherwise, system performance degrades sharply. To improve performance, a Kalman filtering algorithm can be used to guide the operation of a trained recurrent neural network (RNN). In this case, during training we must estimate both the hidden-layer state and the weights of the RNN. This paper discusses how to use dual extended Kalman filtering (DEKF) for this dual estimation, and how to use our proposed DEKF to remove unimportant weights from a trained RNN. In our approach, one Kalman algorithm estimates the state of the hidden layer, and one recursive least squares (RLS) algorithm estimates the weights. After training, we use the error covariance matrix of the RLS algorithm to remove unimportant weights. Simulations show that our approach is an effective joint learning-pruning method for RNNs under online operation.
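The core idea of weight estimation by RLS followed by covariance-based pruning can be illustrated on a toy linear model. The sketch below is illustrative only and is not the paper's DEKF algorithm: it omits the hidden-state EKF branch, uses a hypothetical linear regression target `w_true`, and ranks weights by the saliency-style score w_i^2 / P_ii, where P is the RLS error covariance matrix (an uncertain weight with a small magnitude is a good pruning candidate).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: recover the weights of y = x @ w_true + noise
# with recursive least squares (RLS), then prune via the covariance matrix P.
n = 6
w_true = np.array([1.5, 0.0, -2.0, 0.0, 0.7, 0.0])  # assumed sparse target

w = np.zeros(n)       # running weight estimate
P = 1e3 * np.eye(n)   # RLS error covariance, initialized large (uninformed)
lam = 0.99            # forgetting factor for online (non-stationary) operation

for _ in range(500):
    x = rng.standard_normal(n)
    y = x @ w_true + 0.01 * rng.standard_normal()
    # Standard RLS recursion: gain, prediction-error correction, covariance update
    k = P @ x / (lam + x @ P @ x)
    w = w + k * (y - x @ w)
    P = (P - np.outer(k, x @ P)) / lam

# Pruning step: importance of weight i is approximated by w_i^2 / P_ii.
# Large P_ii means the data constrains that weight weakly, so a small,
# uncertain weight contributes little and can be removed.
saliency = w**2 / np.diag(P)
prune_mask = saliency < np.median(saliency)   # drop the least important half
w_pruned = np.where(prune_mask, 0.0, w)
```

In this example the three weights whose true values are zero receive the lowest saliency scores and are the ones set to zero, while the informative weights survive pruning essentially unchanged.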