Reinforcement learning for dynamic environment: a classification of dynamic environments and a detection method of environmental changes

  • Authors:
  • Masato Nagayoshi, Hajime Murao, H. Tamaki

  • Affiliations:
  • Niigata College of Nursing, Joetsu, Japan 943-0147
  • Faculty of Cross-Cultural Studies, Kobe University, Kobe, Japan 657-8501
  • Graduate School of Engineering, Kobe University, Kobe, Japan 657-8501

  • Venue:
  • Artificial Life and Robotics
  • Year:
  • 2013

Abstract

Engineers and researchers are paying increasing attention to reinforcement learning (RL) as a key technique for realizing computational intelligence, such as adaptive and autonomous decentralized systems. In general, however, it is not easy to put RL into practical use. Our prior research dealt mainly with the problem of designing state and action spaces, for which we proposed an adaptive co-construction method of state and action spaces. Designing state and action spaces is even more difficult in dynamic environments than in static ones, so an adaptive co-construction method is all the more effective there. In this paper, we deal mainly with the problem of adaptation to dynamic environments. First, we classify the tasks posed by dynamic environments and propose a method for detecting environmental changes so that an agent can adapt to them. Next, we conducted computational experiments on a so-called "path planning problem" with a slowly changing environment, where the aging of the system is assumed. The performances of a conventional RL method and of the proposed detection method were confirmed.
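The abstract does not spell out how environmental changes are detected. As a generic illustration only, not the paper's actual method, one common approach is to monitor a reward statistic: flag a change when the recent average reward falls well below a slowly tracked long-run baseline, so that the gradual drift of an aging system alone does not trigger detection. The window size, threshold, and decay rate below are all assumptions chosen for the sketch.

```python
from collections import deque


class ChangeDetector:
    """Flag a possible environmental change when the recent average
    reward drops well below a slowly tracked baseline.

    Illustrative sketch only; the monitored statistic and all
    parameters are assumptions, not taken from the paper.
    """

    def __init__(self, window=50, threshold=0.5):
        self.window = window                # size of the recent-reward window
        self.threshold = threshold          # allowed drop below the baseline
        self.recent = deque(maxlen=window)  # sliding window of rewards
        self.baseline = None                # long-run average reward

    def update(self, reward):
        """Record one reward; return True if a change is suspected."""
        self.recent.append(reward)
        if len(self.recent) < self.window:
            return False                    # not enough data yet
        avg = sum(self.recent) / len(self.recent)
        if self.baseline is None:
            self.baseline = avg             # initialize from the first full window
            return False
        # Track the baseline slowly so gradual drift alone does not trigger.
        self.baseline = 0.99 * self.baseline + 0.01 * avg
        return avg < self.baseline - self.threshold
```

On detection, the agent could respond by, for example, raising its exploration rate or re-running the co-construction of its state and action spaces.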