Learning to generate articulated behavior through the bottom-up and the top-down interaction processes

  • Authors: Jun Tani
  • Affiliation: Brain Science Institute, RIKEN, 2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan
  • Venue: Neural Networks
  • Year: 2003

Abstract

A novel hierarchical neural network architecture for sensory-motor learning and behavior generation is proposed. Two levels of forward-model neural networks operate on different time scales, with parametric interactions allowed between the two levels in both the bottom-up and top-down directions. The model is examined through experiments on behavior learning and generation using a real robot arm equipped with a vision system. The learning experiments showed that behavioral patterns were acquired by self-organizing behavioral primitives in the lower level and combining those primitives sequentially in the higher level. The results contrast with prior work by Pawelzik et al. [Neural Comput. 8 (1996) 340], Tani and Nolfi [From animals to animats, 1998], and Wolpert and Kawato [Neural Networks 11 (1998) 1317]: in the present scheme the primitives are represented in a distributed manner across the network, whereas in the prior work they were localized in specific modules. Further experiments on on-line planning showed that behavior could be generated robustly against real-world noise while behavior plans were modified flexibly in response to changes in the environment. It is concluded that the interaction between the bottom-up process of recalling the past and the top-down process of predicting the future enables situated behavior that is both robust and flexible.
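
To make the two-timescale, bidirectional interaction concrete, below is a minimal NumPy sketch of one plausible reading of the architecture: a fast lower-level forward model predicts the next sensory-motor state, a slow higher level parametrically biases it top-down, and the lower level's state feeds back bottom-up. The class-free structure, the leaky-integrator update rule, the random weights, and the time constants are all illustrative assumptions, not the paper's actual equations; training is omitted.

```python
# Illustrative sketch (assumed update rules, not the paper's equations):
# two leaky-integrator RNN levels on different time scales, coupled by
# top-down parametric bias and bottom-up state feedback.

import numpy as np

def step_leaky_rnn(x, h, W_in, W_rec, b, tau):
    """One Euler step of a leaky-integrator RNN; larger tau = slower dynamics."""
    pre = W_in @ x + W_rec @ np.tanh(h) + b
    return h + (1.0 / tau) * (-h + pre)

rng = np.random.default_rng(0)
n_low, n_high, n_sens = 20, 10, 4

# Random weights stand in for trained ones (training not shown here).
W_in_lo  = rng.normal(scale=0.3, size=(n_low, n_sens + n_high))
W_rec_lo = rng.normal(scale=0.3, size=(n_low, n_low))
W_in_hi  = rng.normal(scale=0.3, size=(n_high, n_low))
W_rec_hi = rng.normal(scale=0.3, size=(n_high, n_high))
W_out    = rng.normal(scale=0.3, size=(n_sens, n_low))
b_lo, b_hi = np.zeros(n_low), np.zeros(n_high)

h_lo, h_hi = np.zeros(n_low), np.zeros(n_high)
sensation = np.zeros(n_sens)

for t in range(100):
    # Top-down: the slow level's state parametrically biases the fast level.
    top_down = np.tanh(h_hi)
    h_lo = step_leaky_rnn(np.concatenate([sensation, top_down]),
                          h_lo, W_in_lo, W_rec_lo, b_lo, tau=2.0)    # fast
    # Bottom-up: the fast level's state drives the slow level.
    h_hi = step_leaky_rnn(np.tanh(h_lo), h_hi, W_in_hi, W_rec_hi,
                          b_hi, tau=20.0)                            # slow
    # Forward model: predict the next sensory-motor state.
    sensation = np.tanh(W_out @ np.tanh(h_lo))
```

The key design point the sketch tries to convey is that, unlike module-localized schemes, nothing here assigns one primitive to one module: primitives would emerge as distributed activation patterns in the fast level, sequenced by the slowly varying top-down bias.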