Nonlinear prediction by reinforcement learning

  • Authors:
  • Takashi Kuremoto; Masanao Obayashi; Kunikazu Kobayashi

  • Affiliations:
  • Dept. of Computer Science and Systems Eng., Eng. Fac., Yamaguchi Univ., Ube, Yamaguchi, Japan (all authors)

  • Venue:
  • ICIC'05 Proceedings of the 2005 international conference on Advances in Intelligent Computing - Volume Part I
  • Year:
  • 2005

Abstract

Artificial neural networks have demonstrated their power and efficiency in nonlinear control, chaotic time series prediction, and many other fields. Reinforcement learning, a learning paradigm that rewards the learner for correct actions and punishes wrong ones, has, however, rarely been applied to nonlinear prediction. In this paper, we construct a multi-layer neural network and use reinforcement learning, in particular a learning algorithm called Stochastic Gradient Ascent (SGA), to predict nonlinear time series. The proposed system consists of four layers: an input layer, a hidden layer, a stochastic parameter layer, and an output layer. Using a stochastic policy, the system optimizes its connection weights and output values to acquire the ability to predict nonlinear dynamics. In simulations, we used the Lorenz system and compared the short-term prediction accuracy of the proposed method with that of a classical learning method.
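
As an illustration of the idea sketched in the abstract (and not the authors' implementation), the following Python sketch predicts the Lorenz x-component one step ahead with a small network whose output layer parameterizes a Gaussian policy, trained by a REINFORCE-style stochastic gradient-ascent update on a reward equal to the negative prediction error. The network sizes, embedding dimension, reward definition, learning rate, and the restriction to updating only the output-layer weights are all assumptions made for brevity.

    # A minimal sketch, not the authors' code: a stochastic predictor trained by a
    # REINFORCE-style gradient-ascent update (used here as a stand-in for SGA).
    import numpy as np

    def lorenz_series(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # x-component of the Lorenz system, integrated with a simple Euler step
        x, y, z = 1.0, 1.0, 1.0
        xs = []
        for _ in range(n):
            dx = sigma * (y - x)
            dy = x * (rho - z) - y
            dz = x * y - beta * z
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
            xs.append(x)
        return np.array(xs)

    class StochasticPredictor:
        # input -> hidden -> stochastic parameters (mu, sigma) -> sampled output
        def __init__(self, n_in, n_hidden, lr=1e-3, seed=0):
            self.rng = np.random.default_rng(seed)
            self.W1 = self.rng.normal(0.0, 0.1, (n_hidden, n_in))
            self.w_mu = self.rng.normal(0.0, 0.1, n_hidden)
            self.w_sig = self.rng.normal(0.0, 0.1, n_hidden)
            self.lr = lr

        def forward(self, x):
            h = np.tanh(self.W1 @ x)                           # hidden layer
            mu = self.w_mu @ h                                 # mean of the Gaussian policy
            sig = np.exp(np.clip(self.w_sig @ h, -4.0, 2.0))   # positive std. deviation
            return h, mu, sig

        def step(self, x, target):
            h, mu, sig = self.forward(x)
            y = self.rng.normal(mu, sig)                       # stochastic prediction (the "action")
            reward = -abs(y - target)                          # reward = negative prediction error
            # gradients of log N(y | mu, sig) with respect to mu and log(sig)
            d_mu = (y - mu) / sig ** 2
            d_logsig = (y - mu) ** 2 / sig ** 2 - 1.0
            # gradient ascent on the reward-weighted log-likelihood (output weights only)
            self.w_mu += self.lr * reward * d_mu * h
            self.w_sig += self.lr * reward * d_logsig * h
            return y, reward

    # usage: one-step-ahead prediction from a window of 3 past values (an assumed embedding)
    series = lorenz_series(3000)
    series = (series - series.mean()) / series.std()
    model = StochasticPredictor(n_in=3, n_hidden=8)
    for t in range(3, len(series)):
        model.step(series[t - 3:t], series[t])

The SGA algorithm used in the paper additionally maintains eligibility traces for its parameter updates; the simpler one-step update above is used only to keep the sketch short.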