Learning in radial basis function (RBF) networks is the topic of this chapter. Whereas multilayer perceptrons (MLP) are typically trained with backpropagation algorithms, starting the training procedure with a random initialization of the MLP's parameters, an RBF network may be trained in different ways. We distinguish one-, two-, and three-phase learning. A very common learning scheme for RBF networks is two-phase learning. Here, the two layers of an RBF network are trained separately: first the RBF layer is determined, including the RBF centers and scaling parameters, and then the weights of the output layer are adapted. The RBF centers may be trained through unsupervised or supervised learning procedures utilizing clustering, vector quantization, or classification tree algorithms. The output layer of the network is adapted by supervised learning. Numerical experiments with RBF classifiers trained by two-phase learning are presented for the classification of 3D visual objects and the recognition of hand-written digits. It can be observed that the performance of RBF classifiers trained with two-phase learning can be improved through a third, backpropagation-like learning phase that adapts the whole set of parameters (RBF centers, scaling parameters, and output layer weights) simultaneously. This we call three-phase learning in RBF networks. A practical advantage of two- and three-phase learning in RBF networks is the possibility of using unlabeled training data for the first training phase. Support vector (SV) learning in RBF networks is a special type of one-phase learning, where only the output layer weights of the RBF network are calculated, and the RBF centers are restricted to be a subset of the training data. Numerical experiments with several classifier schemes are given, including nearest neighbor classifiers, learning vector quantization networks, and RBF classifiers trained through two-phase, three-phase, and support vector learning.
The performance of the RBF classifiers trained through SV learning and three-phase learning is superior to the results of two-phase learning.
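The two-phase scheme described above can be sketched in code. The following is a minimal illustration, not the chapter's actual implementation: the first (unsupervised) phase places the RBF centers with a plain k-means loop and sets a single scaling parameter from a mean-distance heuristic, and the second (supervised) phase fits the output-layer weights by linear least squares. All function and parameter names here are invented for the example.

```python
import numpy as np

def train_rbf_two_phase(X, y, n_centers=4, n_iters=20, seed=0):
    """Two-phase RBF training sketch.

    Phase 1 (unsupervised): choose centers by k-means clustering.
    Phase 2 (supervised): fit the output layer by least squares.
    """
    rng = np.random.default_rng(seed)
    # Phase 1: initialize centers on random training points, then run k-means.
    centers = X[rng.choice(len(X), n_centers, replace=False)].astype(float)
    for _ in range(n_iters):
        # Distance of every sample to every center, shape (n_samples, n_centers).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for k in range(n_centers):
            if np.any(assign == k):
                centers[k] = X[assign == k].mean(axis=0)
    # Scaling parameter: one shared width from the mean inter-center distance
    # (a simple heuristic; the chapter also considers per-center scalings).
    width = np.mean(np.linalg.norm(centers[:, None] - centers[None, :], axis=2)) + 1e-8
    # Phase 2: Gaussian RBF activations plus a bias unit, solved by least squares.
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** 2
                 / (2 * width ** 2))
    Phi = np.hstack([Phi, np.ones((len(X), 1))])
    W, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, width, W

def rbf_predict(X, centers, width, W):
    """Forward pass of the trained RBF network."""
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** 2
                 / (2 * width ** 2))
    Phi = np.hstack([Phi, np.ones((len(X), 1))])
    return Phi @ W
```

A third phase, as discussed above, would then fine-tune centers, widths, and weights jointly by gradient descent on the training error, starting from this two-phase solution rather than from a random initialization.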