A Constrained Optimization Approach to Preserving Prior Knowledge During Incremental Training

  • Authors:
  • S. Ferrari; M. Jensenius

  • Affiliations:
  • Dept. of Mech. Eng. & Mater. Sci., Duke Univ., Durham, NC

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2008

Abstract

In this paper, a supervised neural network training technique based on constrained optimization is developed for preserving prior knowledge of an input-output mapping during repeated incremental training sessions. The prior knowledge, referred to as long-term memory (LTM), is expressed in the form of equality constraints obtained by means of an algebraic training technique. Incremental training, which may be used to learn new short-term memories (STMs) online, is then formulated as an error minimization problem subject to equality constraints. The solution of this problem is simplified by implementing an adjoined error gradient that circumvents direct substitution and exploits classical backpropagation. A target application is neural network function approximation in adaptive critic designs. For illustrative purposes, constrained training is implemented to update an adaptive critic flight controller, while preserving prior knowledge of an established performance baseline that consists of classical gain-scheduled controllers. It is shown both analytically and numerically that the LTM is accurately preserved while the controller is repeatedly trained over time to assimilate new STMs.
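
The mechanics the abstract describes (LTM held as equality constraints, STM learned by error minimization subject to those constraints) can be pictured with a toy sketch. The code below is an assumption-laden stand-in, not the paper's algorithm: the hidden-layer weights are frozen so that the LTM constraints become linear in the output weights w (in the spirit of the algebraic training the abstract cites), and a standard gradient-projection step substitutes for the paper's adjoined error gradient, whose details the abstract does not give. The sin(pi x) mapping, layer sizes, and learning rate are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy net y = w . tanh(V x + d) with V, d frozen, so the output is
# linear in w and "remember these LTM pairs exactly" becomes the
# linear equality constraint A w = b (row i of A is tanh(V x_i + d)).
n_in, n_hid = 1, 20
V = rng.normal(size=(n_hid, n_in))
d = rng.normal(size=n_hid)

def phi(x):
    return np.tanh(x @ V.T + d)        # hidden-layer features

# Long-term memory: samples of the prior mapping on [-1, 0].
x_ltm = np.linspace(-1.0, 0.0, 5)[:, None]
b = np.sin(np.pi * x_ltm).ravel()
A = phi(x_ltm)

# Start from weights that already satisfy A w = b (minimum-norm fit),
# and build the projector onto the nullspace of A: stepping along P @ g
# leaves A w unchanged, so the LTM constraints stay exact.
w = np.linalg.lstsq(A, b, rcond=None)[0]
P = np.eye(n_hid) - A.T @ np.linalg.solve(A @ A.T, A)

# Short-term memory: new data on [0, 1], assimilated incrementally.
x_stm = np.linspace(0.0, 1.0, 30)[:, None]
y_stm = np.sin(np.pi * x_stm).ravel()
H = phi(x_stm)

lr = 0.02
for _ in range(5000):
    e = H @ w - y_stm                  # STM output error
    g = H.T @ e / len(e)               # ordinary (unconstrained) gradient
    w -= lr * (P @ g)                  # projected step: A w = b preserved

print("max LTM residual:", np.max(np.abs(A @ w - b)))   # stays near 1e-14
print("STM RMS error   :", np.sqrt(np.mean((H @ w - y_stm) ** 2)))
```

Because P annihilates the row space of A, every update moves w only within the affine set {w : A w = b}, so the LTM residual remains at machine precision regardless of how many STM sessions are run. This mirrors, in miniature, the behavior the abstract reports for the repeatedly retrained adaptive critic controller.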