Hidden state and reinforcement learning with instance-based state identification

  • Authors:
  • R. A. McCallum

  • Affiliations:
  • Dept. of Comput. Sci., Rochester Univ., NY

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics

  • Year:
  • 1996

Abstract

Real robots with real sensors are not omniscient. When a robot's next course of action depends on information that is hidden from the sensors because of problems such as occlusion, restricted range, bounded field of view, and limited attention, we say the robot suffers from the hidden state problem. State identification techniques use history information to uncover hidden state. Some previous approaches to encoding history include finite state machines, recurrent neural networks, and genetic programming with indexed memory. A chief disadvantage of all these techniques is their long training time. This paper presents instance-based state identification, a new approach to reinforcement learning with state identification that learns with far fewer training steps. Noting that learning with history and learning in continuous spaces both share the property that they begin without knowing the granularity of the state space, the approach applies instance-based (or “memory-based”) learning to history sequences: instead of recording instances in a continuous geometrical space, we record instances in action-percept-reward sequence space. The first implementation of this approach, called Nearest Sequence Memory, learns with an order of magnitude fewer steps than several previous approaches.
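
The abstract describes the mechanism only at a high level. As a rough illustration of the nearest-sequence idea, the sketch below applies instance-based matching over an action-percept-reward chain. It is a hypothetical simplification, not the paper's exact algorithm: the class name, the k-neighbor averaging, the parameter values, and the one-step TD-style backup are all assumptions made for this example.

```python
class NearestSequenceMemory:
    """Rough sketch of instance-based state identification.

    The agent's entire experience is one chain of (action, percept,
    reward) triples; a stored time step is "near" the present in
    proportion to how long a suffix of its preceding history matches
    the current history. Simplified, hypothetical rendering of the
    paper's Nearest Sequence Memory idea.
    """

    def __init__(self, actions, k=4, alpha=0.2, gamma=0.9):
        self.actions = actions  # discrete action set
        self.k = k              # number of nearest neighbors (assumed)
        self.alpha = alpha      # learning rate (assumed)
        self.gamma = gamma      # discount factor (assumed)
        self.chain = []         # the action-percept-reward sequence
        self.q = []             # one Q estimate per stored time step

    def _match_length(self, t):
        # Count how many consecutive triples, walking backward from
        # step t-1 and from the present, are identical.
        n, i, j = 0, t - 1, len(self.chain) - 1
        while i >= 0 and j >= 0 and self.chain[i] == self.chain[j]:
            n, i, j = n + 1, i - 1, j - 1
        return n

    def _neighbors(self, action):
        # The k stored steps that took `action` whose preceding
        # histories best match the current history suffix.
        steps = [t for t, (a, _, _) in enumerate(self.chain) if a == action]
        steps.sort(key=self._match_length, reverse=True)
        return steps[: self.k]

    def q_value(self, action):
        # Average the Q estimates of the nearest sequence neighbors.
        nbrs = self._neighbors(action)
        return sum(self.q[t] for t in nbrs) / len(nbrs) if nbrs else 0.0

    def record(self, action, percept, reward):
        # Back up neighbor Q values toward the one-step return
        # (a simplified TD-style update), then extend the chain.
        target = reward + self.gamma * max(self.q_value(a) for a in self.actions)
        for t in self._neighbors(action):
            self.q[t] += self.alpha * (target - self.q[t])
        self.chain.append((action, percept, reward))
        self.q.append(target)
```

A greedy agent built on this sketch would pick `max(nsm.actions, key=nsm.q_value)` at each step and call `record` with the observed outcome. Because match lengths, rather than a fixed history window, determine the neighborhood, the effective granularity of the history "state" grows only as the recorded instances demand, which is the property the abstract credits for the reduced training time.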