The extended Kalman filter (EKF) algorithm has been shown to be advantageous for neural network training. However, unlike backpropagation (BP), the EKF algorithm requires many matrix operations, which greatly increase its computational complexity. This paper presents a method for performing EKF training on a SIMD parallel machine. We use a multistream decoupled extended Kalman filter (DEKF) training algorithm, which makes efficient use of the parallel resources and yields improved trained network weights. Based on the overall design of the DEKF algorithm and the goal of maximizing use of the parallel resources, the multistream DEKF training is implemented on a MasPar SIMD parallel machine. The performance of the parallel DEKF training algorithm is studied, and comparisons are made between pattern-mode and batch-mode training for both the EKF and BP training algorithms.
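To make the update concrete, below is a minimal single-step sketch of a decoupled EKF weight update in the standard multistream DEKF formulation (after Puskorius and Feldkamp), not the paper's MasPar implementation; the function name dekf_update and the parameters eta (a learning-rate analog) and q (artificial process noise) are illustrative assumptions. The network weights are partitioned into groups, each with its own covariance matrix, and the groups are coupled only through a small global scaling matrix; in the multistream variant, the Jacobians and error vectors of several training streams are stacked along the output dimension before the update is applied.

    import numpy as np

    def dekf_update(weights, covs, jacobians, errors, eta=1.0, q=1e-4):
        # weights:   list of per-group weight vectors w_i, shape (n_i,)
        # covs:      list of per-group covariances P_i, shape (n_i, n_i)
        # jacobians: list of H_i = d(outputs)/d(w_i), shape (n_i, n_out)
        # errors:    xi = targets - outputs, shape (n_out,); for multistream
        #            training, stack the errors of all streams into xi.
        n_out = errors.shape[0]
        # Global scaling matrix: the only computation coupling the groups.
        a = np.eye(n_out) / eta
        for P, H in zip(covs, jacobians):
            a += H.T @ P @ H
        a_inv = np.linalg.inv(a)
        new_weights, new_covs = [], []
        for w, P, H in zip(weights, covs, jacobians):
            K = P @ H @ a_inv                   # group-i Kalman gain
            new_weights.append(w + K @ errors)  # weight update
            # Covariance update with artificial process noise q to keep
            # P_i from collapsing as training proceeds.
            new_covs.append(P - K @ H.T @ P + q * np.eye(len(w)))
        return new_weights, new_covs

Each group's gain and covariance update can be computed independently once the shared scaling matrix is formed, which is the structure that maps naturally onto a SIMD processor array.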