Reinforcement learning-based dynamic adaptation planning method for architecture-based self-managed software

  • Authors:
  • Dongsun Kim; Sooyong Park

  • Affiliations:
  • Department of Computer Science and Engineering, Sogang University, Shinsoo-dong, Mapo-Gu, Seoul, Korea (both authors)

  • Venue:
  • SEAMS '09 Proceedings of the 2009 ICSE Workshop on Software Engineering for Adaptive and Self-Managing Systems
  • Year:
  • 2009

Abstract

Software systems increasingly face dynamically changing environments, and their users introduce new requirements at run-time. Self-management has emerged as a way to deal with these problems. A key issue in achieving self-management is planning: selecting an appropriate structure or behavior for the self-managed software system. There are two types of planning in self-management: off-line and on-line planning. Recent work has focused on off-line planning, which provides static relationships between environmental changes and software configurations. In on-line planning, a software system autonomously derives mappings between environmental changes and software configurations by learning its dynamic environment and using its prior experience. In this paper, we propose a reinforcement learning-based approach to on-line planning in architecture-based self-management. The approach enables a software system to improve its behavior by learning from the results of its actions and by dynamically revising its plans in the presence of environmental changes. The paper presents a case study to illustrate the approach; its results show that reinforcement learning-based on-line planning is effective for architecture-based self-management.
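
To make the idea of on-line planning concrete, the sketch below shows one plausible reinforcement learning formulation: environmental conditions as states, architectural configurations as actions, and a tabular Q-learning update driven by an observed utility signal. This is a minimal illustration under assumed names (STATES, ACTIONS, observe_env, apply_config, measure_utility are all hypothetical), not the paper's actual algorithm or implementation.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch of on-line adaptation planning.
# States (environmental conditions) and actions (architectural
# configurations) are hypothetical placeholders, not from the paper.

STATES = ["low_load", "high_load", "degraded_network"]
ACTIONS = ["single_server", "replicated", "cache_enabled"]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

q_table = defaultdict(float)            # Q[(state, action)] -> expected utility

def choose_configuration(state):
    """Epsilon-greedy selection of the next architectural configuration."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update_plan(state, action, reward, next_state):
    """Q-learning update: learn from the observed result of a reconfiguration."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (
        reward + GAMMA * best_next - q_table[(state, action)]
    )

def adaptation_step(observe_env, apply_config, measure_utility):
    """One adaptation cycle: observe, reconfigure, observe outcome, learn."""
    state = observe_env()
    action = choose_configuration(state)
    apply_config(action)
    next_state = observe_env()
    reward = measure_utility()          # e.g. a response-time or availability score
    update_plan(state, action, reward, next_state)
```

In this reading, the learned Q-table plays the role of the dynamically derived mapping between environmental changes and software configurations that the abstract contrasts with statically defined off-line plans.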