Using a Mahalanobis-like distance to train radial basis neural networks

  • Authors:
  • J. M. Valls; R. Aler; O. Fernández

  • Affiliations:
  • Computer Science Department, Carlos III University, Leganés (Madrid), Spain (all authors)

  • Venue:
  • IWANN'05: Proceedings of the 8th International Conference on Artificial Neural Networks: Computational Intelligence and Bioinspired Systems
  • Year:
  • 2005

Abstract

Radial Basis Neural Networks (RBNN) can approximate any regular function and have a faster training phase than comparable neural networks. However, the activation of each neuron depends on the Euclidean distance between a pattern and the neuron center. The activation function is therefore radially symmetric, and all attributes are treated as equally relevant. This can be addressed by altering the metric used in the activation function (i.e., using non-symmetric metrics). The Mahalanobis distance is one such metric: it takes into account the variability of the attributes and their correlations. However, this distance is computed directly from the variance-covariance matrix and does not take the accuracy of the learning algorithm into account. In this paper, we propose to use a generalized Euclidean metric with the structure of the Mahalanobis distance, but evolved by a Genetic Algorithm (GA). The GA searches for the distance matrix that minimizes the error produced by a fixed RBNN. Our approach has been tested on two domains, with positive results in both cases.
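The core idea above can be illustrated with a short sketch. The code below (an illustrative reconstruction, not the authors' implementation) computes a Mahalanobis-like distance d(x, c) = sqrt((x − c)ᵀ M (x − c)) and uses it inside a Gaussian RBF activation. Here M is taken as the inverse variance-covariance matrix, i.e. the classical Mahalanobis case; in the paper, M would instead be evolved by a genetic algorithm to minimize the error of a fixed RBNN, which is not reproduced here. The variable names and the `width` parameter are assumptions for illustration.

```python
import numpy as np

def mahalanobis_distance(x, center, M):
    """Generalized Euclidean distance between pattern x and a neuron center.

    M is a symmetric positive-definite matrix; M = identity recovers the
    ordinary Euclidean distance used in standard RBNN activations.
    """
    diff = x - center
    return np.sqrt(diff @ M @ diff)

def rbf_activation(x, center, M, width=1.0):
    """Gaussian RBF neuron activation using the generalized distance."""
    d = mahalanobis_distance(x, center, M)
    return np.exp(-d**2 / (2.0 * width**2))

# Example: attributes with very different scales. With the Euclidean metric
# the second (noisier) attribute would dominate the distance; the
# Mahalanobis matrix rescales each attribute by its variability.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) * np.array([1.0, 10.0])  # 2nd attribute 10x spread
M = np.linalg.inv(np.cov(X, rowvar=False))             # classical Mahalanobis matrix

x = np.array([1.0, 10.0])   # both attributes ~1 standard deviation from origin
c = np.zeros(2)
print(mahalanobis_distance(x, c, M))   # close to sqrt(2), not dominated by attr 2
print(rbf_activation(x, c, M))
```

The paper's GA would treat the entries of M (or of a factor of M, to keep it positive definite) as the chromosome and use the fixed RBNN's validation error as the fitness function.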