Minimizing mean weighted tardiness in unrelated parallel machine scheduling with reinforcement learning

  • Authors:
  • Zhicong Zhang; Li Zheng; Na Li; Weiping Wang; Shouyan Zhong; Kaishun Hu

  • Affiliations:
  • Zhicong Zhang: Department of Industrial Engineering, School of Mechanical Engineering, Dongguan University of Technology, Songshan Lake District, Dongguan 523808, Guangdong Province, China
  • Li Zheng: Department of Industrial Engineering, Tsinghua University, Beijing 100084, China
  • Na Li: Department of Industrial Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
  • Weiping Wang: School of Mechanical Engineering, Dongguan University of Technology, China
  • Shouyan Zhong: School of Mechanical Engineering, Dongguan University of Technology, China
  • Kaishun Hu: Department of Industrial Engineering, School of Mechanical Engineering, Dongguan University of Technology, Songshan Lake District, Dongguan 523808, Guangdong Province, China

  • Venue:
  • Computers & Operations Research
  • Year:
  • 2012

Abstract

We address an unrelated parallel machine scheduling problem using R-learning, an average-reward reinforcement learning (RL) method. Jobs of different types arrive dynamically according to independent Poisson processes, so the arrival time and due date of each job are stochastic. We convert the scheduling problem into an RL problem by constructing state features, actions, and a reward function, with the state features and actions defined using prior domain knowledge. Minimizing the average reward per decision time step is equivalent to minimizing the scheduling objective, i.e., mean weighted tardiness. We apply an online R-learning algorithm with function approximation to solve the resulting RL problem. Computational experiments demonstrate that R-learning learns an optimal or near-optimal policy from experience in a dynamic environment and outperforms four effective heuristic priority rules (WSPT, WMDD, ATC, and WCOVERT) in all test problems.
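
The abstract does not give the update equations, but standard R-learning (Schwartz, 1993) with linear function approximation can be sketched as follows. The feature extraction, action set, step sizes, and sign convention (rewards as negative weighted tardiness, so that maximizing the average reward minimizes mean weighted tardiness) are illustrative assumptions here, not the authors' exact design.

```python
import numpy as np

class RLearningScheduler:
    """Minimal sketch of R-learning with linear function approximation.

    Q(s, a) is approximated as w[a] . phi(s); rho tracks the average
    reward per decision step. Hyperparameters are placeholders.
    """

    def __init__(self, n_features, n_actions, alpha=0.1, beta=0.01):
        self.w = np.zeros((n_actions, n_features))  # per-action weights
        self.rho = 0.0      # estimate of the average reward per step
        self.alpha = alpha  # step size for the Q-weights
        self.beta = beta    # step size for the average-reward estimate

    def q_values(self, phi):
        return self.w @ phi  # Q(s, a) for every action a

    def greedy_action(self, phi):
        return int(np.argmax(self.q_values(phi)))

    def update(self, phi, a, reward, phi_next):
        q_s = self.q_values(phi)
        greedy = np.isclose(q_s[a], np.max(q_s))  # was the action greedy?
        # R-learning temporal-difference error (Schwartz, 1993):
        #   delta = r - rho + max_a' Q(s', a') - Q(s, a)
        delta = reward - self.rho + np.max(self.q_values(phi_next)) - q_s[a]
        self.w[a] += self.alpha * delta * phi
        # Update the average-reward estimate only after greedy actions,
        # as in standard R-learning.
        if greedy:
            self.rho += self.beta * delta
```

In this sketch, `reward` would be the negative weighted tardiness accrued since the previous decision epoch, so maximizing the average reward corresponds to minimizing mean weighted tardiness.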
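
For reference, the four benchmark rules each rank waiting jobs by a priority index. A minimal sketch of these indices as commonly defined in the scheduling literature follows; variable names and the look-ahead parameter k are generic, and on unrelated machines p would be the job's processing time on the candidate machine.

```python
import math

def wspt(w, p):
    """WSPT: serve the job with the largest weight/processing-time ratio."""
    return w / p

def wmdd(w, p, d, t):
    """WMDD: serve the job with the *smallest* weighted modified due date."""
    return max(p, d - t) / w

def atc(w, p, d, t, k, p_bar):
    """ATC: WSPT ratio discounted exponentially by the job's slack.

    p_bar is the mean processing time of the waiting jobs.
    """
    slack = max(d - p - t, 0.0)
    return (w / p) * math.exp(-slack / (k * p_bar))

def wcovert(w, p, d, t, k):
    """WCOVERT: weighted cost over time with a linearly decaying window."""
    slack = max(d - p - t, 0.0)
    return (w / p) * max(0.0, 1.0 - slack / (k * p))
```

WSPT, ATC, and WCOVERT dispatch the job with the largest index at each decision epoch, while WMDD dispatches the job with the smallest.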