An algorithm for probabilistic planning. Artificial Intelligence - Special volume on planning and scheduling.
Proceedings of the 1999 international conference on Logic programming.
Complexity of finite-horizon Markov decision process problems. Journal of the ACM (JACM).
Introduction to Reinforcement Learning.
Declarative problem-solving using the DLV system. Logic-based artificial intelligence.
Reasoning about actions in a probabilistic setting. Eighteenth national conference on Artificial intelligence.
Contingent planning under uncertainty via stochastic satisfiability. Artificial Intelligence - special issue on planning with uncertainty and incomplete information.
ASSAT: computing answer sets of a logic program by SAT solvers. Artificial Intelligence - Special issue on nonmonotonic reasoning.
Domain-dependent knowledge in answer set planning. ACM Transactions on Computational Logic (TOCL).
A new approach to hybrid probabilistic logic programs. Annals of Mathematics and Artificial Intelligence.
Probabilistic Planning in Hybrid Probabilistic Logic Programs. SUM '07 Proceedings of the 1st international conference on Scalable Uncertainty Management.
A Logical Approach to Qualitative and Quantitative Reasoning. ECSQARU '07 Proceedings of the 9th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty.
Reinforcement learning: a survey. Journal of Artificial Intelligence Research.
Symbolic dynamic programming for first-order MDPs. IJCAI'01 Proceedings of the 17th international joint conference on Artificial intelligence - Volume 1.
Planning and acting in partially observable stochastic domains. Artificial Intelligence.
Towards the computation of stable probabilistic model semantics. KI'06 Proceedings of the 29th annual German conference on Artificial intelligence.
Pushing the envelope: planning, propositional logic, and stochastic search. AAAI'96 Proceedings of the thirteenth national conference on Artificial intelligence - Volume 2.
Incomplete knowledge in hybrid probabilistic logic programs. JELIA'06 Proceedings of the 10th European conference on Logics in Artificial Intelligence.
Probabilistic reasoning about actions in nonmonotonic causal theories. UAI'03 Proceedings of the Nineteenth conference on Uncertainty in Artificial Intelligence.
Towards a more practical hybrid probabilistic logic programming framework. PADL'05 Proceedings of the 7th international conference on Practical Aspects of Declarative Languages.
Probabilistic Reasoning by SAT Solvers. ECSQARU '09 Proceedings of the 10th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty.
Extended Fuzzy Logic Programs with Fuzzy Answer Set Semantics. SUM '09 Proceedings of the 3rd International Conference on Scalable Uncertainty Management.
ECSQARU'11 Proceedings of the 11th European conference on Symbolic and quantitative approaches to reasoning with uncertainty.
SUM'11 Proceedings of the 5th international conference on Scalable uncertainty management.
Knowledge representation is an important issue in reinforcement learning. Although logic programming with answer set semantics is a standard knowledge representation formalism, it has not been exploited in reinforcement learning to resolve its knowledge representation issues. In this paper, we present a logic programming framework for reinforcement learning that integrates reinforcement learning in MDP environments with normal hybrid probabilistic logic programs under the probabilistic answer set semantics [29], which is capable of representing domain-specific knowledge. We show that any reinforcement learning problem, MT, can be translated into a normal hybrid probabilistic logic program whose probabilistic answer sets correspond to trajectories in MT, and we formally prove the correctness of this translation. Moreover, we show that the complexity of finding a policy for a reinforcement learning problem in our approach is NP-complete. We also show that any reinforcement learning problem, MT, can be encoded as a classical logic program with answer set semantics whose answer sets correspond to valid trajectories in MT, and that a reinforcement learning problem can be encoded as a SAT problem. Finally, we present a new high-level action description language that allows a factored representation of MDPs.
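To make the abstract's setting concrete, the following is a minimal illustrative sketch (not code from the paper): a toy finite-horizon MDP in which deterministic policies are found by exhaustive enumeration, mirroring the search over candidate answer sets/trajectories; the exponential enumeration is consistent with the NP-completeness result mentioned above. The state space, transition probabilities, and rewards here are hypothetical.

```python
import itertools

# A toy finite-horizon MDP (hypothetical example, not from the paper):
# two states, two actions; P[s][a] lists (next_state, probability)
# pairs and R[s][a] gives the immediate reward.
STATES = ["s0", "s1"]
ACTIONS = ["a", "b"]
P = {
    "s0": {"a": [("s0", 0.2), ("s1", 0.8)], "b": [("s0", 1.0)]},
    "s1": {"a": [("s1", 1.0)], "b": [("s0", 0.5), ("s1", 0.5)]},
}
R = {
    "s0": {"a": 0.0, "b": 1.0},
    "s1": {"a": 2.0, "b": 0.0},
}
HORIZON = 2

def expected_return(policy, state, t):
    """Expected total reward over the remaining horizon under a
    stationary deterministic policy (a dict: state -> action)."""
    if t == HORIZON:
        return 0.0
    act = policy[state]
    future = sum(p * expected_return(policy, s2, t + 1)
                 for s2, p in P[state][act])
    return R[state][act] + future

def best_policy(start):
    """Enumerate all deterministic stationary policies and keep the
    one with the highest expected return from `start` -- exponential
    in the number of states, in line with the hardness result."""
    best = None
    for acts in itertools.product(ACTIONS, repeat=len(STATES)):
        pol = dict(zip(STATES, acts))
        val = expected_return(pol, start, 0)
        if best is None or val > best[1]:
            best = (pol, val)
    return best

pol, val = best_policy("s0")  # pol["s0"] == "b", val == 2.0
```

In the paper's approach, the same search is delegated to an answer set or SAT solver: each valid trajectory of MT corresponds to a (probabilistic) answer set, so solver enumeration replaces the explicit loop above.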