Comments on “Learning convergence in the cerebellar model articulation controller”

  • Authors:
  • M. Brown; C. J. Harris

  • Affiliations:
  • Dept. of Aeronaut. & Astronaut., Southampton Univ.

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1995


Abstract

These comments refer to the paper by Wong and Sideris (ibid., vol. 3, pp. 115-121, 1992), which claimed that the original Albus CMAC (the binary CMAC) can learn an arbitrary multivariate lookup table, that the associated linear optimization problem is strictly positive definite, and that the basis functions are linearly independent, given sufficient training data. Recent work by Brown et al. (1994), however, proved that the multivariate binary CMAC is unable to learn certain multivariate lookup tables, and that the number of such orthogonal functions increases exponentially as the generalization parameter is increased. A simple two-dimensional orthogonal function is presented as a counterexample to the original theory. It is also demonstrated that the basis functions are always linearly dependent, in both the univariate and the multivariate case; hence the linear optimization problem is only positive semi-definite, and there always exists an infinite number of optimal weight vectors.
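The linear-dependence claim in the univariate case can be illustrated numerically. The sketch below is not taken from the paper: it assumes a binary CMAC over integer inputs 0..N-1 with generalization parameter rho, where layer j (j = 0..rho-1) tiles the input axis into cells of width rho offset by j, so each input activates exactly one cell per layer. Because every layer's cells tile the whole axis, each layer's columns of the activation matrix sum to the all-ones vector, which yields a non-trivial null vector and a rank-deficient (hence only positive semi-definite) Gram matrix.

```python
import numpy as np

# Hypothetical univariate binary CMAC: integer inputs 0..N-1,
# generalization parameter rho, rho overlapping offset layers.
N, rho = 9, 3

# Enumerate the (layer, cell) basis functions; in layer j, input x
# activates cell (x + j) // rho, so layer j needs (N-1+j)//rho + 1 cells.
cells = [(j, c) for j in range(rho)
         for c in range((N - 1 + j) // rho + 1)]
col = {jc: i for i, jc in enumerate(cells)}

# Binary activation matrix A: one row per input, one column per basis
# function; each row contains exactly rho ones (one per layer).
A = np.zeros((N, len(cells)))
for x in range(N):
    for j in range(rho):
        A[x, col[(j, (x + j) // rho)]] = 1.0

# Layer 0's columns and layer 1's columns both sum to the all-ones
# vector, so their difference is a non-zero null vector of A:
w = np.array([1.0 if j == 0 else (-1.0 if j == 1 else 0.0)
              for j, _ in cells])

dependent = np.allclose(A @ w, 0)                 # basis functions are linearly dependent
rank_deficient = np.linalg.matrix_rank(A) < len(cells)
gram_min_eig = np.linalg.eigvalsh(A.T @ A).min()  # ~0: Gram matrix only semi-definite
```

Any multiple of `w` can be added to an optimal weight vector without changing the CMAC's output, which is why infinitely many optimal weight vectors exist.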