Unsupervised and supervised learning in radial-basis-function networks

  • Authors:
  • Friedhelm Schwenker; Hans A. Kestler; Günther Palm

  • Affiliations:
  • Univ. of Ulm, Germany; Univ. of Ulm, Germany; Univ. of Ulm, Germany

  • Venue:
  • Self-Organizing Neural Networks
  • Year:
  • 2001

Abstract

Learning in radial basis function (RBF) networks is the topic of this chapter. Whereas multilayer perceptrons (MLP) are typically trained with backpropagation algorithms, starting the training procedure with a random initialization of the MLP's parameters, an RBF network may be trained in different ways. We distinguish one-, two-, and three-phase learning. A very common learning scheme for RBF networks is two-phase learning. Here, the two layers of an RBF network are trained separately. First the RBF layer is calculated, including the RBF centers and scaling parameters, and then the weights of the output layer are adapted. The RBF centers may be trained through unsupervised or supervised learning procedures utilizing clustering, vector quantization, or classification tree algorithms. The output layer of the network is adapted by supervised learning. Numerical experiments with RBF classifiers trained by two-phase learning are presented for the classification of 3D visual objects and the recognition of hand-written digits. It can be observed that the performance of RBF classifiers trained with two-phase learning can be improved through a third, backpropagation-like learning phase of the RBF network, adapting the whole set of parameters (RBF centers, scaling parameters, and output layer weights) simultaneously. This we call three-phase learning in RBF networks. A practical advantage of two- and three-phase learning in RBF networks is the possibility to use unlabeled training data for the first training phase. Support vector (SV) learning in RBF networks is a special type of one-phase learning, where only the output layer weights of the RBF network are calculated, and the RBF centers are restricted to be a subset of the training data. Numerical experiments with several classifier schemes, including nearest neighbor classifiers, learning vector quantization networks, and RBF classifiers trained through two-phase, three-phase, and support vector learning, are given. The performance of the RBF classifiers trained through SV learning and three-phase learning is superior to the results of two-phase learning.
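To illustrate the two-phase scheme described in the abstract, the sketch below (not the authors' implementation) fits the RBF centers in an unsupervised first phase using k-means clustering and then solves the linear output-layer weights by regularized least squares in a supervised second phase. The function names, the fixed Gaussian width, and the ridge parameter are illustrative assumptions, not part of the original chapter.

```python
# Minimal two-phase RBF training sketch (assumed details: k-means for centers,
# a shared Gaussian width, and ridge-regularized least squares for the output layer).
import numpy as np


def kmeans(X, n_centers, n_iter=50, rng=None):
    """Phase 1: unsupervised placement of RBF centers via Lloyd's algorithm."""
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    for _ in range(n_iter):
        # assign each sample to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned samples
        for k in range(n_centers):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return centers


def rbf_design_matrix(X, centers, width):
    """Gaussian RBF activations for every sample/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))


def train_two_phase(X, Y, n_centers=10, width=1.0, ridge=1e-6, rng=0):
    """Phase 1: centers by clustering; phase 2: output weights by least squares."""
    centers = kmeans(X, n_centers, rng=rng)
    Phi = rbf_design_matrix(X, centers, width)
    W = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_centers), Phi.T @ Y)
    return centers, W


def predict(X, centers, W, width=1.0):
    return rbf_design_matrix(X, centers, width) @ W


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy two-class problem; Y is one-hot so argmax gives the predicted class
    X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
    Y = np.vstack([np.tile([1, 0], (50, 1)), np.tile([0, 1], (50, 1))])
    centers, W = train_two_phase(X, Y, n_centers=6, width=1.5)
    acc = (predict(X, centers, W, width=1.5).argmax(1) == Y.argmax(1)).mean()
    print(f"training accuracy: {acc:.2f}")
```

A third phase, as described in the abstract, would fine-tune centers, widths, and output weights jointly with gradient descent on the classification error; that step is omitted here.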