IEEE Transactions on Neural Networks
Training neural networks with the global extended Kalman filter (GEKF) generally yields excellent performance, but at the expense of a large increase in computational cost, which can become prohibitive even for networks of moderate size. This drawback was previously addressed by heuristically decoupling some of the network weights; inevitably, such ad hoc decoupling degrades the accuracy of the resulting networks. In this paper, we present an algorithm that emulates the accuracy of GEKF while avoiding construction of the state covariance matrix, the source of the computational bottleneck in GEKF. In the proposed algorithm, all synaptic weights remain coupled, yet the amount of computer memory required is similar to (or smaller than) that of the decoupling schemes. We also point out that the new method extends to derivative-free nonlinear Kalman filters, such as the unscented Kalman filter and ensemble Kalman filters.
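For context on the bottleneck the abstract refers to: GEKF treats all network weights as one state vector and updates the full weight covariance matrix at every training step, which costs O(n^2) memory and time in the number of weights n. The following is a minimal sketch of this baseline, not the paper's proposed algorithm; the toy 1-3-1 tanh network, the finite-difference Jacobian, and the noise parameters R and Q are illustrative assumptions.

```python
import numpy as np

NH = 3  # hidden units in the toy 1-NH-1 network (assumption for this sketch)

def net_forward(w, x):
    """Tiny one-hidden-layer tanh network; w packs [W1, b1, W2, b2]."""
    W1, b1 = w[:NH], w[NH:2*NH]
    W2, b2 = w[2*NH:3*NH], w[3*NH]
    h = np.tanh(W1 * x + b1)
    return float(W2 @ h + b2)

def jacobian_fd(w, x, eps=1e-6):
    """Finite-difference measurement Jacobian dy/dw, shape (1, n)."""
    y0 = net_forward(w, x)
    J = np.zeros((1, w.size))
    for k in range(w.size):
        wp = w.copy()
        wp[k] += eps
        J[0, k] = (net_forward(wp, x) - y0) / eps
    return J

def gekf_step(w, P, x, y, R=0.05, Q=1e-6):
    """One GEKF update treating all weights as a single coupled state."""
    H = jacobian_fd(w, x)                   # (1, n) linearized measurement
    S = float(H @ P @ H.T) + R              # scalar innovation variance
    K = (P @ H.T) / S                       # (n, 1) Kalman gain
    w = w + K.ravel() * (y - net_forward(w, x))
    P = P - K @ H @ P + Q * np.eye(w.size)  # full covariance update: O(n^2)
    return w, P

# usage: fit y = sin(x) with a few passes over the training data
rng = np.random.default_rng(0)
xs = np.linspace(-2.0, 2.0, 20)
ys = np.sin(xs)
w = 0.5 * rng.standard_normal(3 * NH + 1)
P = 100.0 * np.eye(w.size)  # large initial covariance: weights are uncertain

mse_before = np.mean([(net_forward(w, x) - y) ** 2 for x, y in zip(xs, ys)])
for _ in range(5):
    for x, y in zip(xs, ys):
        w, P = gekf_step(w, P, x, y)
mse_after = np.mean([(net_forward(w, x) - y) ** 2 for x, y in zip(xs, ys)])
```

The O(n^2) cost of storing and updating P is exactly what the decoupled variants avoid by zeroing cross-weight covariances, and what the paper's algorithm avoids without decoupling.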