Complexity Analysis of Real-Time Reinforcement Learning Applied to Finding Shortest Paths in Deterministic Domains

  • Authors:
  • Sven Koenig; Reid G. Simmons

  • Venue:
  • Complexity Analysis of Real-Time Reinforcement Learning Applied to Finding Shortest Paths in Deterministic Domains
  • Year:
  • 1992

Abstract

This report analyzes the complexity of on-line reinforcement learning algorithms, namely asynchronous real-time versions of Q-learning and value-iteration, applied to the problems of reaching any goal state from the given start state and finding shortest paths from all states to a goal state. Previous work had concluded that, in many cases, initially uninformed (i.e., tabula rasa) reinforcement learning was exponential for such problems, or that it was tractable (i.e., of polynomial time-complexity) only if the learning algorithm was augmented. We prove that, to the contrary, the algorithms are tractable with only a simple change in the task representation ("penalizing the agent for action executions") or initialization ("initializing high"). We provide tight bounds on their worst-case complexity, and show how the complexity is even smaller if the state space has certain special properties. We compare these reinforcement learning algorithms to other uninformed on-line search methods and to informed off-line search methods, and investigate how initial knowledge of the topology of the state space can decrease their complexity. We also present two novel algorithms, the bi-directional Q-learning algorithm and the bi-directional value-iteration algorithm, for finding shortest paths from all states to a goal state, and show that they are no more complex than their counterparts for reaching a goal state from a given start state. The worst-case analysis of the reinforcement learning algorithms is complemented by an empirical study of their average-case complexity in three domains.
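
To make the setup the abstract refers to concrete, here is a minimal illustrative sketch (not the authors' implementation) of on-line, 1-step real-time Q-learning in a deterministic domain under the "action-penalty" task representation: every action execution is rewarded -1, the goal is absorbing with value 0, and the Q-table is initialized to zero, which is "high" (optimistic) relative to the true, negative Q-values. The domain, function names, and step budget are invented for illustration.

```python
# Illustrative sketch only: greedy on-line 1-step Q-learning in a
# deterministic domain with the action-penalty representation
# (reward -1 per action, 0 at the goal) and optimistic zero initialization.

def real_time_q_learning(succ, actions, start, goal, max_steps=10_000):
    """Run one greedy on-line trial from `start`; return the learned Q-table.

    succ(s, a)  -> deterministic successor state of executing a in s
    actions(s)  -> iterable of actions applicable in s
    """
    q = {}  # (state, action) -> value; unseen pairs default to 0 (optimistic)

    def q_val(s, a):
        return q.get((s, a), 0.0)

    s = start
    for _ in range(max_steps):
        if s == goal:
            return q
        # Act greedily with respect to the current Q-values.
        a = max(actions(s), key=lambda act: q_val(s, act))
        s_next = succ(s, a)
        # 1-step backup: immediate reward -1 plus the value of the successor
        # (0 if the successor is the goal state).
        next_value = 0.0 if s_next == goal else max(
            q_val(s_next, b) for b in actions(s_next)
        )
        q[(s, a)] = -1.0 + next_value
        s = s_next
    raise RuntimeError("goal not reached within the step budget")


if __name__ == "__main__":
    # Toy deterministic chain 0 -> 1 -> 2 -> 3, where state 3 is the goal.
    actions = lambda s: ("advance", "stay")
    succ = lambda s, a: min(s + 1, 3) if a == "advance" else s

    q = real_time_q_learning(succ, actions, start=0, goal=3)
    print(sorted(q.items()))  # Q-values updated during this single trial
```

With this representation the zero-initialized Q-values never overestimate the (negative) distance to the goal, which is the kind of guidance the tractability results in the report rely on; under the alternative goal-reward representation with zero initialization, the agent receives no such guidance, matching the exponential worst-case behavior that earlier work reported.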