Case-based reinforcement learning for dynamic inventory control in a multi-agent supply-chain system

  • Authors:
  • Chengzhi Jiang; Zhaohan Sheng

  • Affiliations:
  • Department of Management and Engineering, Nanjing University, 22 Hankou Road, Nanjing 210093, PR China (both authors)

  • Venue:
  • Expert Systems with Applications: An International Journal
  • Year:
  • 2009

Abstract

Reinforcement learning (RL) has attracted many researchers in recent years because of its generality. It is an approach to machine intelligence that learns to achieve a given goal through trial-and-error interactions with its environment. This paper proposes a case-based reinforcement learning algorithm (CRL) for dynamic inventory control in a multi-agent supply-chain system. Traditional time-triggered and event-triggered ordering policies remain popular because they are easy to implement, but in a dynamic environment their results may become inaccurate, causing excessive inventory cost or shortages. Under nonstationary customer demand, the proposed algorithm learns the S value of the (T, S) and (Q, S) inventory review methods, respectively, so as to satisfy a target service level. A multi-agent simulation of a simplified two-echelon supply chain, in which the proposed algorithm is implemented, is run several times. The results show the effectiveness of CRL under both review methods. We also consider a framework for a general learning method based on the proposed one, which may be helpful in all aspects of supply-chain management (SCM). Hence, it is suggested that well-designed "connections" need to be built between CRL, multi-agent systems (MAS), and SCM.
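
To make the learning task concrete, the sketch below illustrates the general idea of adapting the order-up-to level S of a (T, S) periodic-review policy toward a target service level under nonstationary demand. It is not the authors' case-based RL algorithm; the demand model, the simple trial-and-error update rule, and all parameter values (T, ALPHA, TARGET_SERVICE) are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's CRL algorithm): a single-stage
# (T, S) periodic-review policy whose order-up-to level S is adjusted online to
# approach a target fill rate while customer demand drifts over time.
import random

T = 5                  # review period in time steps (assumed)
S = 50.0               # initial order-up-to level, adjusted online
TARGET_SERVICE = 0.95  # target fill rate (assumed)
ALPHA = 2.0            # step size of the adjustment rule (assumed)

inventory = S
served = demanded = 0.0

random.seed(0)
for t in range(1, 2001):
    # Nonstationary demand: the mean drifts upward every 500 steps.
    mean_demand = 10.0 + 5.0 * (t // 500)
    demand = max(0.0, random.gauss(mean_demand, 3.0))

    # Serve as much demand as on-hand inventory allows.
    filled = min(inventory, demand)
    inventory -= filled
    served += filled
    demanded += demand

    if t % T == 0:
        # Periodic review: replenish up to S (zero lead time for simplicity).
        inventory = S

        # Trial-and-error update: raise S when the observed fill rate falls
        # short of the target, lower it when the target is exceeded.
        service = served / demanded if demanded > 0 else 1.0
        S += ALPHA * (TARGET_SERVICE - service) * mean_demand
        served = demanded = 0.0

print(f"learned order-up-to level S = {S:.1f}")
```

Running the sketch shows S tracking the drifting demand mean, which is the qualitative behaviour the abstract attributes to CRL; the paper's algorithm additionally reuses stored cases and operates within a multi-agent two-echelon simulation.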