Learning methods for radial basis function networks
Future Generation Computer Systems
We discuss two kernel-based learning methods, namely Regularization Networks (RNs) and Radial Basis Function (RBF) networks. RNs are derived from regularization theory, have been studied thoroughly from a function-approximation point of view, and possess a sound theoretical background. RBF networks represent a model of artificial neural networks with both neuro-physiological and mathematical motivation; in addition, they may be treated as a generalized form of Regularization Networks. We demonstrate the performance of both approaches on experiments, including both benchmark and real-life learning tasks. We claim that RNs and RBF networks are comparable in terms of generalization error, but that they differ with respect to model complexity. The RN approach usually leads to solutions with a higher number of basis units; thus, RBF networks can be used as a 'cheaper' alternative. This makes RBF networks suitable for modeling tasks with large amounts of data, such as time series prediction or semantic web classification.
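As a minimal illustrative sketch (not the experimental setup used in the paper): whereas a Regularization Network places one basis unit at every training point, an RBF network uses a smaller set of centers, and with fixed Gaussian centers and widths, training reduces to a regularized linear least-squares problem for the output weights. The center placement, width, and regularization strength below are illustrative assumptions.

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    """Gaussian basis: phi[i, j] = exp(-||x_i - c_j||^2 / width^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / width ** 2)

def fit_rbf(X, y, centers, width, reg=1e-6):
    """Solve the regularized normal equations for the output weights."""
    Phi = rbf_design_matrix(X, centers, width)
    A = Phi.T @ Phi + reg * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

def predict_rbf(X, centers, width, w):
    return rbf_design_matrix(X, centers, width) @ w

# Toy usage: approximate sin(x) on [0, 2*pi] with 10 centers
# instead of one unit per training point (50 here).
X = np.linspace(0.0, 2.0 * np.pi, 50)[:, None]
y = np.sin(X[:, 0])
centers = np.linspace(0.0, 2.0 * np.pi, 10)[:, None]
w = fit_rbf(X, y, centers, width=1.0)
y_hat = predict_rbf(X, centers, 1.0, w)
```

The 'cheaper' character of the RBF model shows up here as the size of the linear system: 10 weights rather than 50, at the cost of choosing the centers in advance (e.g. by clustering the inputs).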