Reinforcement learning control with adaptive gain for a Saccharomyces cerevisiae fermentation process

  • Authors:
  • Dazi Li; Li Qian; Qibing Jin; Tianwei Tan

  • Affiliations:
  • Department of Automation, College of Information Science & Technology, Beijing University of Chemical Technology, Beijing 100029, PR China
  • Beijing Key Laboratory of Bioprocess, College of Life Science and Technology, Beijing University of Chemical Technology, Beijing 100029, PR China

  • Venue:
  • Applied Soft Computing
  • Year:
  • 2011

Abstract

A tight and robust controller for yeast fermentation is usually difficult to achieve because of the inherently uncertain, nonlinear, and time-varying dynamics of the fermentation process. This paper presents an alternative method for yeast fermentation process control that hybridizes a reinforcement learning algorithm with fuzzy logic. The fuzzy logic adaptively adjusts the weighting gain applied to the control action produced by reinforcement learning, which yields faster tracking and reduces controller overshoot. An improved multi-step action Q-learning control algorithm is developed and demonstrated on ethanol concentration control of the yeast fermentation process. Experimental results show that the improved multi-step action Q-learning controller achieves much lower overshoot, faster tracking, a shorter transition time, and a smoother control signal than an advanced PID controller.
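
As a rough illustration of the idea summarized in the abstract (not the authors' actual implementation), the Python sketch below combines a tabular one-step Q-learning update with a toy fuzzy rule base that scales the control action: large tracking errors receive a higher gain for fast tracking, small errors a lower gain to suppress overshoot. All function names, discretization bounds, membership functions, and parameter values are illustrative assumptions; the paper itself develops a multi-step action Q-learning variant.

```python
import numpy as np

# Minimal sketch (assumed, not from the paper): tabular Q-learning for
# ethanol-concentration tracking, with a fuzzy gain scaling the control action.

N_STATES = 21                                     # discretized error bins (assumption)
ACTIONS = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # candidate feed-rate increments (assumption)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1                # learning rate, discount, exploration

Q = np.zeros((N_STATES, len(ACTIONS)))

def discretize_error(error, e_max=2.0):
    """Map the tracking error (g/L) onto a state index."""
    e = np.clip(error, -e_max, e_max)
    return int(round((e + e_max) / (2 * e_max) * (N_STATES - 1)))

def fuzzy_gain(error, d_error):
    """Toy fuzzy gain schedule: large |error| -> larger gain for fast tracking,
    small |error| -> smaller gain to alleviate overshoot."""
    big = min(1.0, abs(error) / 1.0)          # membership of |error| in "big"
    small = 1.0 - big                         # membership of |error| in "small"
    damping = min(1.0, abs(d_error) / 0.5)    # de-weight when the error changes fast
    # Centroid of singleton gain levels, damped by the error-rate term
    return (big * 1.0 + small * 0.3) * (1.0 - 0.5 * damping)

def select_action(state):
    """Epsilon-greedy selection over the discrete action set."""
    if np.random.rand() < EPS:
        return np.random.randint(len(ACTIONS))
    return int(np.argmax(Q[state]))

def q_update(state, a_idx, reward, next_state):
    """One-step Q-learning update (the paper uses a multi-step action variant)."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, a_idx] += ALPHA * (td_target - Q[state, a_idx])

def control_step(error, prev_error):
    """Return the gain-weighted control increment plus the indices needed for learning."""
    state = discretize_error(error)
    a_idx = select_action(state)
    u = fuzzy_gain(error, error - prev_error) * ACTIONS[a_idx]
    return u, state, a_idx

# Example: one control step at a tracking error of 1.2 g/L (illustrative numbers)
u, s, a = control_step(error=1.2, prev_error=1.5)
```

The design point mirrored here is that the reinforcement learner chooses the direction and rough magnitude of the action, while the fuzzy block only rescales it; how the memberships and gain levels are chosen in the paper is not reproduced in this sketch.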