Biological arm motion through reinforcement learning

  • Authors:
  • Jun Izawa; Toshiyuki Kondo; Koji Ito

  • Affiliations:
  • Human and Information Science Laboratory, Sensory and Motor Research Group, NTT Communication Science Laboratories, 3-1 Morinosato-Wakamiya, Atsugi-shi, 243-01, Japan; Department of Computational Intelligence and Systems Science, Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Yokohama, Japan; Department of Computational Intelligence and Systems Science, Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Yokohama, Japan

  • Venue:
  • Biological Cybernetics
  • Year:
  • 2004

Abstract

This paper discusses an optimal learning control method using reinforcement learning for biological systems with redundant actuators. Reinforcement learning is difficult to apply to biological control systems because of the redundancy of the muscle activation space. We solve this problem as follows. First, the control input space is divided into two subspaces according to a priority order of learning, and the search noise for reinforcement learning is restricted to the first-priority subspace. The constraint is then relaxed as learning progresses, so that the search space extends into the second-priority subspace. The first-priority subspace is designed so that the arm's impedance is kept high. A smooth reaching motion is obtained through reinforcement learning without any prior knowledge of the arm's dynamics.
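
The abstract describes a staged exploration scheme: search noise is first confined to a high-impedance (first-priority) subspace of the muscle activation space and is gradually allowed into the second-priority subspace as learning proceeds. The sketch below illustrates that idea only; the function name, the orthonormal subspace bases, and the linear `progress` schedule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def exploration_noise(n_dim, basis_high, basis_low, progress, sigma=0.1, rng=None):
    """Sample exploration noise restricted to a priority-ordered subspace.

    basis_high : (n_dim, k1) orthonormal basis of the first-priority subspace
                 (chosen so that the arm's impedance stays high).
    basis_low  : (n_dim, k2) orthonormal basis of the second-priority subspace.
    progress   : learning progress in [0, 1]; 0 = search only the
                 first-priority subspace, 1 = search the full space.
    """
    rng = np.random.default_rng() if rng is None else rng
    raw = rng.normal(0.0, sigma, size=n_dim)
    # Project the raw noise onto each subspace.
    high_part = basis_high @ (basis_high.T @ raw)
    low_part = basis_low @ (basis_low.T @ raw)
    # Relax the constraint as learning progresses: the second-priority
    # component is blended in gradually.
    return high_part + progress * low_part

# Example usage with illustrative dimensions: 6 muscle activations,
# a 3-dimensional first-priority subspace, and 40% learning progress.
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(6, 6)))
noise = exploration_noise(6, Q[:, :3], Q[:, 3:], progress=0.4)
```

In this sketch the policy's action would be perturbed by `noise` during learning; early on the perturbation stays in the high-impedance subspace, and later it spans the whole muscle activation space.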