Kernel based learning methods: regularization networks and RBF networks

  • Authors:
  • Petra Kudová; Roman Neruda

  • Affiliations:
  • Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, Czech Republic (both authors)

  • Venue:
  • Proceedings of the First International Conference on Deterministic and Statistical Methods in Machine Learning
  • Year:
  • 2004


Abstract

We discuss two kernel-based learning methods, namely Regularization Networks (RN) and Radial Basis Function (RBF) networks. RNs are derived from regularization theory, have been studied thoroughly from a function-approximation point of view, and possess a sound theoretical background. RBF networks represent a model of artificial neural networks with both neuro-physiological and mathematical motivation; in addition, they may be treated as a generalized form of Regularization Networks. We demonstrate the performance of both approaches in experiments on benchmark and real-life learning tasks. We claim that RNs and RBF networks are comparable in terms of generalization error but differ in model complexity: the RN approach usually leads to solutions with a higher number of basis units, so RBF networks can serve as a 'cheaper' alternative. This makes RBF networks suitable for modeling tasks with large amounts of data, such as time series prediction or semantic web classification.
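
To make the contrast between the two models concrete, here is a minimal NumPy sketch (not from the paper) of both approaches on a toy 1-D regression task. It assumes a Gaussian kernel throughout; the regularization parameter lambda_, the kernel width, and n_centers are illustrative choices, and the RBF centers are picked as a random subset of the data rather than by k-means or gradient training as a full implementation would use.

```python
import numpy as np

def gaussian_kernel(A, B, width):
    # Pairwise Gaussian kernel matrix: K[i, j] = exp(-||A_i - B_j||^2 / width^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / width ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)

# Regularization Network: one basis unit per training point; the
# coefficients solve the linear system (K + lambda * I) c = y.
lambda_ = 1e-2
K = gaussian_kernel(X, X, 1.0)
c = np.linalg.solve(K + lambda_ * np.eye(len(X)), y)
rn_predict = lambda Xt: gaussian_kernel(Xt, X, 1.0) @ c

# RBF network: far fewer units. Centers here are a random subset of the
# data (a stand-in for k-means); output weights come from least squares.
n_centers = 10
C = X[rng.choice(len(X), n_centers, replace=False)]
Phi = gaussian_kernel(X, C, 1.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
rbf_predict = lambda Xt: gaussian_kernel(Xt, C, 1.0) @ w

# Compare model size and predictions on a test grid.
Xt = np.linspace(-3, 3, 200)[:, None]
print("RN units:", len(X), " RBF units:", n_centers)
print("max |RN - RBF| on grid:",
      np.abs(rn_predict(Xt) - rbf_predict(Xt)).max())
```

The sketch illustrates the trade-off the abstract describes: the RN uses as many units as training points and only solves a linear system, while the RBF network reaches a comparable fit with an order of magnitude fewer units at the cost of choosing the centers.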