This paper concerns neural network approaches to function approximation and optimization using linear superpositions of Gaussians, popularly known as radial basis function (RBF) networks. The problem of function approximation is that of estimating an underlying function f given samples of the form (y_i, x_i), i = 1, 2, ..., n, with y_i = f(x_i). When the input dimension is high and the number of samples is small, estimating the function becomes difficult because the samples are sparse in any local region. The authors find that this problem of high dimensionality can be overcome to some extent by applying linear transformations to the input inside the Gaussian kernels. Such transformations induce intrinsic dimension reduction, and can be exploited to identify key factors of the input and to reconstruct the phase space of dynamical systems, without explicitly computing the dimension and delay. The authors present a generalization that uses multiple linear projections onto scalars and successive RBF networks (MLPRBF) that estimate the function from these scalar values. They also derive key properties of RBF networks that provide suitable grounds for implementing efficient search strategies for nonconvex optimization within the same framework.
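The idea described above can be sketched in a few lines of NumPy: fit an RBF network by least squares over Gaussian kernels, with an optional linear projection applied to the inputs inside the kernels so that a high-dimensional x is reduced before the distance is computed. This is a minimal illustration, not the paper's algorithm; the function names, the toy target, and the choice of projection matrix `P` are all assumptions made for the example.

```python
import numpy as np

def rbf_fit(X, y, centers, width, P=None):
    # Fit the output weights of a Gaussian RBF network by least squares.
    # P (optional) is a linear projection applied inside the kernels,
    # illustrating the dimension-reducing transformations the paper discusses.
    Z = X if P is None else X @ P.T
    C = centers if P is None else centers @ P.T
    # Design matrix: phi[i, j] = exp(-||z_i - c_j||^2 / (2 width^2))
    d2 = ((Z[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, width, w, P=None):
    # Evaluate the fitted network: a linear superposition of Gaussians.
    Z = X if P is None else X @ P.T
    C = centers if P is None else centers @ P.T
    d2 = ((Z[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2)) @ w

# Toy example (hypothetical): f depends on x only through one linear
# projection a.x, so a 1-D projection inside the kernels suffices even
# though the ambient input is 5-dimensional.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 5))      # high-dimensional input samples
a = np.array([[1.0, 0.5, 0.0, 0.0, 0.0]])  # the single "key factor" direction
y = np.sin(3 * (X @ a.T)).ravel()          # underlying function f
centers = X[::10]                          # a subset of samples as centers
w = rbf_fit(X, y, centers, width=0.3, P=a)
err = np.abs(rbf_predict(X, centers, 0.3, w, P=a) - y).max()
```

Without the projection, the Gaussians would have to cover the sparse 5-dimensional sample; with it, the kernels act on a 1-dimensional scalar value, which is the effect the paper exploits.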