Combining expert neural networks using reinforcement feedback for learning primitive grasping behavior

  • Authors:
  • M. A. Moussa

  • Affiliations:
  • Sch. of Eng., Univ. of Guelph, Ont., Canada

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2004


Abstract

This paper presents an architecture for combining a mixture of experts. The architecture has two unique features: 1) it assumes no prior knowledge of the size or structure of the mixture and allows the number of experts to expand dynamically during training, and 2) reinforcement feedback is used to guide the combining/expansion operation. The architecture is particularly suitable for applications that require approximating a many-to-many mapping. An example of such a problem is the task of training a robot to grasp arbitrarily shaped objects. This task requires the approximation of a many-to-many mapping, since various configurations can be used to grasp an object, and several objects can share the same grasping configuration. Experiments in a simulated environment using a 28-object database showed how the algorithm dynamically combined and expanded a mixture of neural networks to achieve the learning task. The paper also presents a comparison with two other nonlearning approaches.
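
The sketch below illustrates the general idea described in the abstract, not the paper's actual algorithm: a pool of simple experts maps object features to grasp configurations, a binary reinforcement signal (grasp success or failure) credits the expert whose output was attempted, and the mixture grows when the current experts perform poorly. All names, the perturbation-based update rule, the scoring scheme, and the spawning criterion are illustrative assumptions.

```python
# Minimal sketch, assuming a perturbation-based reinforcement scheme and a
# simple score-driven expansion rule; these details are assumptions, not the
# paper's method.
import numpy as np

rng = np.random.default_rng(0)

class LinearExpert:
    """Toy expert: a linear map from object features to a grasp configuration."""
    def __init__(self, in_dim, out_dim, init_score=0.0):
        self.W = rng.normal(scale=0.1, size=(out_dim, in_dim))
        self.score = init_score  # running average of reinforcement received

    def predict(self, x):
        return self.W @ x

    def reinforce(self, x, attempted, reward, lr=0.1):
        # Reward-weighted update: pull the output toward the attempted grasp
        # only when the attempt succeeded (reward = 1.0).
        self.W += lr * reward * np.outer(attempted - self.predict(x), x)
        self.score = 0.9 * self.score + 0.1 * reward

class GrowingMixture:
    """Mixture whose size expands when no expert earns enough reinforcement."""
    def __init__(self, in_dim, out_dim, spawn_threshold=0.2):
        self.in_dim, self.out_dim = in_dim, out_dim
        self.spawn_threshold = spawn_threshold
        self.experts = [LinearExpert(in_dim, out_dim)]

    def step(self, x, environment, noise=0.3):
        expert = max(self.experts, key=lambda e: e.score)  # best expert so far
        attempted = expert.predict(x) + rng.normal(scale=noise, size=self.out_dim)
        reward = environment(x, attempted)  # 1.0 on successful grasp, else 0.0
        expert.reinforce(x, attempted, reward)
        # Expansion: if even the best expert fails and has a poor history,
        # add a fresh expert with an optimistic score so it gets tried.
        if reward == 0.0 and expert.score < self.spawn_threshold:
            self.experts.append(
                LinearExpert(self.in_dim, self.out_dim,
                             init_score=self.spawn_threshold))
        return reward

# Toy many-to-many environment: a grasp succeeds when the attempted
# configuration lands near either of two valid configurations, so distinct
# objects (feature vectors) can share a grasp and one object admits several.
def toy_env(x, y):
    targets = [np.full_like(y, 0.5), np.full_like(y, -0.5)]
    return float(any(np.linalg.norm(y - t) < 0.6 for t in targets))

mixture = GrowingMixture(in_dim=4, out_dim=3)
successes = sum(mixture.step(rng.normal(size=4), toy_env) for _ in range(500))
print(f"experts: {len(mixture.experts)}, success rate: {successes / 500:.2f}")
```

Because the reinforcement signal is scalar and binary, the mixture never needs the "correct" grasp for an object; it only needs to discover some configuration that succeeds, which is what makes this style of feedback a fit for many-to-many mappings.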