Guiding inference through relational reinforcement learning

  • Authors:
  • Nima Asgharbeygi, Negin Nejati, Pat Langley, Sachiyo Arai

  • Affiliations:
  • Computational Learning Laboratory, Center for the Study of Language and Information, Stanford University, Stanford, CA (all authors)

  • Venue:
  • ILP'05 Proceedings of the 15th international conference on Inductive Logic Programming
  • Year:
  • 2005

Abstract

Reasoning plays a central role in intelligent systems that operate in complex, time-constrained situations. In this paper, we present the Adaptive Logic Interpreter, a reasoning system that acquires a controlled inference strategy adapted to the scenario at hand, using a variation on relational reinforcement learning. Employing this inference mechanism in a reactive agent architecture lets the agent focus its reasoning on the most rewarding parts of its knowledge base and hence perform better under time and computational resource constraints. We present experiments that demonstrate the benefits of this approach to reasoning in reactive agents, then discuss related work and directions for future research.
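The core idea of the abstract — learning which parts of the knowledge base are most rewarding to reason over — can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it is a hypothetical simplification in which each inference rule carries a learned value estimate, the interpreter selects rules epsilon-greedily, and observed rewards update the estimates so that high-payoff rules are fired first under resource limits. The class name `AdaptiveInterpreter` and the rule interface are illustrative assumptions.

```python
import random


class AdaptiveInterpreter:
    """Hypothetical sketch of reward-guided inference control,
    loosely inspired by (not reproducing) the Adaptive Logic
    Interpreter described in the abstract."""

    def __init__(self, rules, epsilon=0.1, alpha=0.5):
        # rules: name -> function(facts) -> (new_facts, reward)
        self.rules = rules
        # learned value estimate per inference rule
        self.q = {name: 0.0 for name in rules}
        self.epsilon = epsilon  # exploration rate
        self.alpha = alpha      # learning rate

    def select_rule(self):
        """Epsilon-greedy choice among inference rules."""
        if random.random() < self.epsilon:
            return random.choice(list(self.rules))
        return max(self.q, key=self.q.get)

    def step(self, facts):
        """Fire one rule, observe its reward, and update its value."""
        name = self.select_rule()
        new_facts, reward = self.rules[name](facts)
        # incremental update toward the observed reward
        self.q[name] += self.alpha * (reward - self.q[name])
        return new_facts
```

Under this scheme, a rule that repeatedly yields useful conclusions accumulates a higher value estimate and dominates rule selection, so a time-limited agent spends its inference budget on the most rewarding derivations first.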