Bridging the gap between reinforcement learning and knowledge representation: a logical off- and on-policy framework

  • Authors: Emad Saad
  • Affiliations: Department of Computer Science, Gulf University for Science and Technology, Kuwait
  • Venue: ECSQARU'11 Proceedings of the 11th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty
  • Year: 2011


Abstract

Knowledge representation is an important issue in reinforcement learning. In this paper, we bridge the gap between reinforcement learning and knowledge representation by providing a rich knowledge representation framework, based on normal logic programs with answer set semantics, that is capable of solving model-free reinforcement learning problems in more complex domains and of exploiting domain-specific knowledge. We prove the correctness of our approach. We show that the complexity of finding an offline and online policy for a model-free reinforcement learning problem in our approach is NP-complete. Moreover, we show that any model-free reinforcement learning problem in an MDP environment can be encoded as a SAT problem. Consequently, model-free reinforcement learning problems can now be solved as SAT problems.
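
To give a rough intuition for the kind of reduction the abstract refers to (this is a minimal, hypothetical sketch, not the paper's actual logic-program or SAT encoding), the question "does a policy exist that reaches the goal within a bounded horizon?" on a toy deterministic MDP can be viewed as satisfiability over Boolean policy-choice variables, one action choice per state. The sketch below enumerates candidate assignments by brute force instead of calling a real SAT solver; all names (states, actions, the transition table) are illustrative assumptions.

```python
# Minimal, hypothetical sketch: policy existence as a satisfiability-style
# question over "which action does the policy pick in each state?" choices,
# checked here by brute-force enumeration rather than a SAT solver.
from itertools import product

# Toy deterministic MDP (illustrative, not from the paper).
states = ["s0", "s1", "s2"]
actions = ["left", "right"]
goal = "s2"
transition = {
    ("s0", "left"): "s0", ("s0", "right"): "s1",
    ("s1", "left"): "s0", ("s1", "right"): "s2",
    ("s2", "left"): "s2", ("s2", "right"): "s2",
}

HORIZON = len(states)  # bound on trajectory length for the reachability check

def reaches_goal(policy, start="s0"):
    """Follow `policy` from `start`; report whether the goal is reached within the horizon."""
    state = start
    for _ in range(HORIZON):
        if state == goal:
            return True
        state = transition[(state, policy[state])]
    return state == goal

# Each candidate assignment picks exactly one action per state, mimicking the
# "exactly one policy atom per state" constraint a SAT encoding would impose.
satisfying_policies = [
    policy
    for choice in product(actions, repeat=len(states))
    if reaches_goal(policy := dict(zip(states, choice)))
]

print("Policy exists:", bool(satisfying_policies))
print("Example policy:", satisfying_policies[0] if satisfying_policies else None)
```

On this toy instance the check succeeds, e.g. with the policy that selects "right" in s0 and s1; a genuine SAT encoding would express the same per-state choice and bounded-reachability constraints as propositional clauses and hand them to a solver.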