Neurocontrollers trained with rules extracted by a genetic assisted reinforcement learning system

  • Authors:
  • R. A. Zitar; M. H. Hassoun

  • Affiliations:
  • Dept. of Math. & Comput. Sci., United Arab Emirates Univ., Al-Ain

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1995

Abstract

This paper proposes a novel system for rule extraction for temporal control problems and presents a new way of designing neurocontrollers. The system employs a hybrid genetic search and reinforcement learning strategy for extracting the rules. The learning strategy requires no supervision and no reference model. The extracted rules are weighted micro rules that operate on small neighborhoods of the admissible control space. A further refinement of the extracted rules is achieved by applying additional genetic search and reinforcement to reduce the number of extracted micro rules. This process results in a smaller set of macro rules, which can be used to train a feedforward multilayer perceptron neurocontroller. The micro rules or the macro rules may also be utilized directly in a table look-up controller. As examples of the macro-rule-based neurocontroller, we chose four benchmarks. In the first application we verify the capability of our system to learn optimal linear control strategies. The other three applications involve engine idle speed control, bioreactor control, and stabilizing two poles on a moving cart. These problems are highly nonlinear and unstable, and may include noise and delays in the plant dynamics. In terms of retrievals, the neurocontrollers generally outperform the controllers based on table look-up. Both controllers, though, show robustness against noise disturbances and plant parameter variations.
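
The abstract does not specify the rule representation, so the following is only a minimal sketch of the idea it describes: weighted micro rules covering small neighborhoods of the state space, a scalar reinforcement signal (no teacher, no reference model) that adjusts their weights, a genetic recombination operator, and a table look-up controller that fires the strongest covering rule. All names here (MicroRule, lookup_action, reinforce, recombine) are illustrative assumptions, not the paper's implementation.

    from dataclasses import dataclass
    from typing import List, Sequence

    @dataclass
    class MicroRule:
        low: Sequence[float]   # lower corner of the state neighborhood this rule covers
        high: Sequence[float]  # upper corner of the state neighborhood
        action: float          # control action proposed inside that neighborhood
        weight: float = 0.0    # strength accumulated from the reinforcement signal

        def covers(self, state: Sequence[float]) -> bool:
            return all(l <= s <= h for l, s, h in zip(self.low, state, self.high))

    def lookup_action(rules: List[MicroRule], state: Sequence[float],
                      default: float = 0.0) -> float:
        # Table look-up control: fire the highest-weighted rule covering the state.
        matching = [r for r in rules if r.covers(state)]
        return max(matching, key=lambda r: r.weight).action if matching else default

    def reinforce(rules: List[MicroRule], state: Sequence[float], reward: float) -> None:
        # Unsupervised credit assignment: every rule that covered the visited state
        # is strengthened (or weakened) by the scalar reinforcement signal.
        for r in rules:
            if r.covers(state):
                r.weight += reward

    def recombine(a: MicroRule, b: MicroRule) -> MicroRule:
        # One possible genetic operator: splice the two neighborhoods and blend the actions.
        return MicroRule(low=list(a.low), high=list(b.high),
                         action=0.5 * (a.action + b.action))

Under this reading, training data for the feedforward neurocontroller could be generated by sampling states and recording lookup_action(rules, state) as the target output, which is one plausible way the refined macro rules would supervise the multilayer perceptron.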