Nonlinear Bayesian Filters for Training Recurrent Neural Networks

  • Authors:
  • Ienkaran Arasaratnam; Simon Haykin

  • Affiliations:
  • Cognitive Systems Laboratory, Department of Electrical & Computer Engineering, McMaster University, Hamilton, L8S 4K1 (both authors)

  • Venue:
  • MICAI '08 Proceedings of the 7th Mexican International Conference on Artificial Intelligence: Advances in Artificial Intelligence
  • Year:
  • 2008

Abstract

In this paper, we present nonlinear Bayesian filters for training recurrent neural networks, with special emphasis on a novel, more accurate, derivative-free member of the approximate Bayesian filter family called the cubature Kalman filter. We discuss the theory of Bayesian filters, which is rooted in the state-space modeling of the dynamic system in question and the linear estimation principle. For improved numerical stability and optimal performance during training, several techniques for tuning Bayesian filters are suggested. We compare the predictive performance of various Bayesian filter-trained recurrent neural networks on a chaotic time series. From the empirical results, we conclude that performance may be greatly improved by the new square-root cubature Kalman filter.
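To illustrate what "derivative-free" means for the cubature Kalman filter mentioned in the abstract, the sketch below implements the standard third-degree spherical-radial cubature rule and a CKF-style time update. This is a generic illustration of the cubature rule, not the authors' training algorithm; the function names (`cubature_points`, `ckf_predict`) and the simple linear test model are assumptions introduced here for demonstration.

```python
import numpy as np

def cubature_points(mean, cov):
    """Generate the 2n cubature points mean + sqrt(n)*S*(+/- e_i), where S is a
    Cholesky factor of cov. No Jacobians are needed, which is what makes the
    cubature Kalman filter derivative-free."""
    n = mean.size
    S = np.linalg.cholesky(cov)
    unit = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # shape (n, 2n)
    return mean[:, None] + S @ unit                          # shape (n, 2n)

def ckf_predict(mean, cov, f, Q):
    """CKF time update: propagate each cubature point through the (possibly
    nonlinear) transition f, then recover the predicted mean and covariance
    as equally weighted sample statistics, plus process noise Q."""
    pts = cubature_points(mean, cov)
    prop = np.apply_along_axis(f, 0, pts)   # propagate each column through f
    m_pred = prop.mean(axis=1)
    diff = prop - m_pred[:, None]
    P_pred = diff @ diff.T / pts.shape[1] + Q
    return m_pred, P_pred
```

For a linear transition f(x) = A x the rule is exact, so the predicted moments reduce to the familiar Kalman forms A m and A P Aᵀ + Q, which provides a quick sanity check of the implementation.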