On Reinforcement Learning in Genetic Regulatory Networks

  • Authors:
  • Babak Faryabi; Aniruddha Datta; Edward R. Dougherty

  • Affiliations:
  • Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843. bfariabi@ece.tamu.edu; Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843. datta@ece.tamu.edu; Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843 / Computational Biology Division, Translational Genomics Research Institute, Phoenix, AZ

  • Venue:
  • SSP '07 Proceedings of the 2007 IEEE/SP 14th Workshop on Statistical Signal Processing
  • Year:
  • 2007


Abstract

The control of probabilistic Boolean networks, used as a model of genetic regulatory networks, has been formulated as an optimal stochastic control problem and solved using dynamic programming; however, the proposed methods fail once the number of genes grows beyond a small number. Their complexity increases exponentially with the number of genes, owing both to the estimation of model-dependent probability distributions and to the curse of dimensionality associated with the dynamic programming algorithm. We propose a model-free approximate stochastic control method based on reinforcement learning that mitigates these twin curses and provides polynomial time complexity. By relying on a simulator, the proposed method eliminates the cost of estimating the probability distributions, so it can be applied to networks for which dynamic programming is computationally infeasible. Experimental results demonstrate that the performance of the method is close to that of optimal stochastic control.
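
The abstract does not spell out the learning algorithm, so the following is a minimal illustrative sketch: tabular Q-learning driven by a simulator of a toy two-gene probabilistic Boolean network. The network rules, the `pbn_step`/`cost`/`q_learn` names, and all parameter values are hypothetical stand-ins, not the networks or settings studied in the paper; the point is only how learning from a simulator replaces explicit estimation of transition probabilities.

```python
import random
from collections import defaultdict

# Hypothetical two-gene probabilistic Boolean network. States are tuples of
# gene values; transitions are sampled, standing in for a simulator of a
# real network (none of this comes from the paper itself).
STATES = [(a, b) for a in (0, 1) for b in (0, 1)]
ACTIONS = [0, 1]  # 0: no intervention, 1: flip the control gene

def pbn_step(state, action, rng):
    """Sample one transition from the simulator. Learning from sampled
    transitions is what makes the method model-free: the transition
    probabilities are never estimated explicitly."""
    a, b = state
    if action == 1:
        a ^= 1  # external intervention flips the first gene
    # Made-up regulatory rules, each overridden with perturbation prob. 0.1
    next_b = a if rng.random() > 0.1 else 1 - a
    next_a = (a ^ b) if rng.random() > 0.1 else a
    return (next_a, next_b)

def cost(state, action):
    """Per-step cost: penalize the undesirable expression of the second
    gene and charge a small price for intervening."""
    return (2.0 if state[1] == 1 else 0.0) + (0.5 if action == 1 else 0.0)

def q_learn(episodes=5000, horizon=20, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning for discounted cost minimization."""
    rng = random.Random(seed)
    Q = defaultdict(float)  # Q[(state, action)] -> estimated cost-to-go
    for _ in range(episodes):
        s = rng.choice(STATES)
        for _ in range(horizon):
            # epsilon-greedy exploration over the two actions
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = min(ACTIONS, key=lambda u: Q[(s, u)])
            s2 = pbn_step(s, a, rng)
            # TD target: immediate cost plus discounted best cost-to-go
            target = cost(s, a) + gamma * min(Q[(s2, u)] for u in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    # Greedy policy: intervene only where it lowers the estimated cost
    return {s: min(ACTIONS, key=lambda u: Q[(s, u)]) for s in STATES}

if __name__ == "__main__":
    for s, a in sorted(q_learn().items()):
        print(f"state {s}: {'intervene' if a else 'no action'}")
```

Note that the exact table above still has one entry per (state, action) pair, which is exponential in the number of genes; an approximate method of the kind the abstract describes would pair the same sample-based update with approximation to keep the complexity polynomial.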