BackPOLE: Back Propagation Based on Objective Learning Errors

  • Authors:
  • W. L. Tung; C. Quek

  • Venue:
  • PRICAI '02 Proceedings of the 7th Pacific Rim International Conference on Artificial Intelligence: Trends in Artificial Intelligence
  • Year:
  • 2002

Abstract

A new variant of the back propagation (BP) [1, 2] algorithm is proposed in this paper. The BP algorithm is widely used to tune the parameters of multi-layered neural networks and hybrid neural fuzzy systems [3]. Most applications of the BP algorithm are based on the negative gradient descent approach (NGD-BP) [2]. However, NGD-BP may be unsuitable for neural fuzzy systems because its error signals are poorly interpretable and its derivation relies heavily on prior knowledge of the node functions (i.e. the aggregation and activation functions of the individual nodes). In neural fuzzy systems such as POPFNN [4], the node functions are determined by the inference scheme adopted, so the NGD-BP algorithm becomes tightly coupled to the gradients induced by the inference engine. That is, a set of customized learning equations is required for each inference scheme, and changing the inference scheme requires re-deriving the back propagation learning equations. This makes both the neural fuzzy structure and the learning process highly dependent on the type of fuzzy inference engine employed. In contrast, the proposed BackPOLE algorithm generates intuitive error signals from a set of pre-defined objectives and is resilient to changes in the inference scheme. This is highly desirable, since the parameter-learning phase [2] of neural fuzzy systems can then be generalized and made independent of the fuzzy inference scheme. The BackPOLE algorithm has been implemented in a new neural fuzzy architecture named GenSoFNN [5] to demonstrate its effectiveness.
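The abstract does not give BackPOLE's update equations, but the coupling it criticizes in NGD-BP can be illustrated with a minimal sketch of the standard negative-gradient-descent update for a single sigmoid node. The function name and the toy data below are illustrative assumptions, not from the paper; note how the derivative of the node's activation function appears directly in the error signal, which is why a different inference scheme (i.e. different node functions) forces a re-derivation of the learning equations.

```python
import math

def ngd_bp_step(w, x, target, lr=0.5):
    """One NGD-BP update for a single sigmoid node (illustrative sketch).

    Applies the rule w_i <- w_i - lr * dE/dw_i for the squared error
    E = 0.5 * (y - target)^2, where y = sigmoid(w . x).
    """
    # Forward pass: weighted sum, then sigmoid activation.
    z = sum(wi * xi for wi, xi in zip(w, x))
    y = 1.0 / (1.0 + math.exp(-z))
    # Error signal: dE/dz = (y - target) * y * (1 - y).
    # The y * (1 - y) factor is the sigmoid's derivative -- the update
    # is tied to this particular node function, which is the dependence
    # the abstract argues against for neural fuzzy systems.
    delta = (y - target) * y * (1.0 - y)
    # Gradient descent on each weight: dE/dw_i = delta * x_i.
    return [wi - lr * delta * xi for wi, xi in zip(w, x)]

# Repeated updates drive the node's output toward the target.
w = [0.5, -0.3]
x = [1.0, 2.0]
for _ in range(500):
    w = ngd_bp_step(w, x, target=0.9)
```

Replacing the sigmoid with another activation (or with the fuzzy aggregation functions of an inference engine) changes the `y * (1 - y)` term, so the whole update rule must be re-derived; BackPOLE's objective-based error signals are proposed precisely to avoid that re-derivation.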