Improving the Generalization Capability of the Binary CMAC

  • Authors:
  • Gábor Horváth

  • Affiliations:
  • -

  • Venue:
  • IJCNN '00: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN'00), Volume 3
  • Year:
  • 2000

Abstract

This paper deals with some important questions concerning binary CMAC neural networks. The CMAC, which belongs to the family of feed-forward networks with a single linear trainable layer, has some attractive features. The most important ones are its extremely fast learning capability and a special architecture that makes efficient digital hardware implementation possible. Although the CMAC architecture was proposed in the mid-seventies, many questions about it remain open even today. Among them, the most important concern its modeling and generalization capabilities. While some essential questions of its modeling capability have been addressed in the literature, no detailed analysis of its generalization properties can be found. This paper shows that the CMAC may have significant generalization error, even in the one-dimensional case, where the network can learn any training data set exactly. The paper shows that this generalization error is caused mainly by the training rule of the network. It derives a general expression for the generalization error and proposes a modified training algorithm that reduces this error significantly.
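
The abstract gives no implementation details, but a minimal sketch of a 1-D binary CMAC trained with the standard LMS rule may help make the setting concrete. The class name BinaryCMAC1D, the parameters resolution, c, and lr, and the layer-offset addressing scheme below are illustrative assumptions, not the paper's notation; the paper's modified training algorithm is not reproduced here.

```python
import numpy as np

class BinaryCMAC1D:
    """Minimal sketch of a 1-D binary CMAC.

    The input range [0, 1) is quantized into `resolution` steps.
    Each quantized input activates exactly `c` binary basis functions,
    one per offset layer, and the output is the sum of the weights
    attached to those active cells.
    """

    def __init__(self, resolution=64, c=8, lr=0.5):
        self.resolution = resolution
        self.c = c                    # generalization parameter (overlap width)
        self.lr = lr                  # LMS learning rate
        # One weight per cell in each of the c shifted layers.
        self.w = np.zeros((c, resolution // c + 2))

    def _active_cells(self, x):
        q = int(x * self.resolution)  # quantized input position
        # Layer k is shifted by k quantization steps, so each layer
        # partitions the input axis into cells of width c.
        return [(k, (q + k) // self.c) for k in range(self.c)]

    def predict(self, x):
        return sum(self.w[k, i] for k, i in self._active_cells(x))

    def train_step(self, x, target):
        # Standard LMS rule: the output error is spread equally over
        # the c active weights.  The paper identifies this training
        # rule as the main source of the generalization error.
        err = target - self.predict(x)
        for k, i in self._active_cells(x):
            self.w[k, i] += self.lr * err / self.c
        return err


# Usage: train on sparse samples of a sine, then query between them.
if __name__ == "__main__":
    cmac = BinaryCMAC1D()
    xs = np.linspace(0.0, 0.99, 8)   # sparse training points
    ys = np.sin(2 * np.pi * xs)
    for _ in range(100):
        for x, y in zip(xs, ys):
            cmac.train_step(x, y)
    # Points between the training samples expose the generalization error.
    for x in (0.05, 0.3, 0.62):
        print(f"x={x:.2f}  cmac={cmac.predict(x):+.3f}  "
              f"true={np.sin(2 * np.pi * x):+.3f}")
```

Training points are reproduced exactly after enough epochs, consistent with the abstract's claim that a 1-D binary CMAC can learn any training set; the queries between training points are where the equal error-spreading of the LMS update shows up as generalization error.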