The relationship between backpropagation and extended Kalman filtering for training multilayer perceptrons is examined. The two techniques are compared theoretically and empirically using sensor imagery. Backpropagation is a technique from neural networks for assigning weights in a multilayer perceptron; an extended Kalman filter can also be used for this purpose. After a brief review of the multilayer perceptron and the two training methods, it is shown that backpropagation is a degenerate form of the extended Kalman filter. The training rules are then compared on two examples: an image classification problem using laser radar Doppler imagery and a target detection problem using absolute range images. In both examples, the backpropagation training algorithm is shown to be three orders of magnitude less costly than the extended Kalman filter algorithm in terms of the number of floating-point operations.
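The claim that backpropagation is a degenerate form of the extended Kalman filter can be illustrated with a minimal sketch. The code below is illustrative only (the function names and the scalar-output setup are assumptions, not the paper's implementation): an EKF weight update for a scalar-output network reduces to a plain gradient-descent step when the weight covariance P is frozen at a multiple of the identity and the innovation variance is neglected.

```python
import numpy as np

def ekf_step(w, P, x, y, f, jac, R=1.0):
    """One extended-Kalman-filter update of the weight vector w.

    f(w, x)   -> scalar network output
    jac(w, x) -> gradient of the output w.r.t. w (the linearization row H)
    """
    H = jac(w, x)                      # linearize the output about current weights
    S = float(H @ P @ H) + R           # innovation variance (scalar output)
    K = P @ H / S                      # Kalman gain
    w_new = w + K * (y - f(w, x))      # correct weights toward the target
    P_new = P - np.outer(K, H @ P)     # shrink the weight covariance
    return w_new, P_new

def backprop_step(w, x, y, f, jac, eta=0.1):
    """Gradient descent on squared error: the degenerate EKF obtained by
    fixing P = eta * I and dropping the H P H' term from S."""
    H = jac(w, x)
    return w + eta * H * (y - f(w, x))
```

For a linear "network" `f(w, x) = w @ x` with `jac(w, x) = x`, repeated `ekf_step` calls reproduce recursive least squares, while `backprop_step` converges geometrically to the same weights; the EKF pays for its faster convergence with the O(n^2) covariance update that the abstract's flop counts reflect.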