A recently published generalized growing and pruning (GGAP) training algorithm for radial basis function (RBF) neural networks is studied and modified. GGAP is a resource-allocating network (RAN) algorithm, meaning that a network unit created during training can later be removed if it consistently contributes little to the network's performance. GGAP defines a formula for the significance of a network unit, whose evaluation requires a d-fold numerical integration for an arbitrary probability density function P(X) of the input data X (X ∈ R^d). In this work, P(X) is approximated by a Gaussian mixture model (GMM) and an analytical solution for the resulting approximate unit significance is derived. This makes it possible to apply the modified GGAP to input data with a complex, high-dimensional P(X), which was not feasible with the original GGAP. An extensive experimental study shows that the modified algorithm outperforms the original GGAP, achieving both a lower prediction error and a reduced complexity of the trained network.
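To make the modification concrete, below is a minimal sketch of how such an analytical evaluation could look. It assumes the unit significance is, up to the unit's output weight, the integral of the Gaussian kernel exp(-||x - mu_k||^2 / sigma_k^2) against P(X); the function name `unit_significance`, the scalar output weight, and the argument layout are illustrative, not the authors' code. What is exact here is the standard Gaussian identity: the kernel equals (pi * sigma_k^2)^(d/2) times a normal density with covariance (sigma_k^2 / 2) I, so its integral against a GMM collapses to a weighted sum of Gaussian density evaluations.

```python
import numpy as np
from scipy.stats import multivariate_normal

def unit_significance(alpha_k, mu_k, sigma_k, gmm_weights, gmm_means, gmm_covs):
    """Closed-form significance of one RBF unit under a GMM input density.

    Assumed (illustrative) definition: |alpha_k| * integral of
    exp(-||x - mu_k||^2 / sigma_k^2) against P(X) = sum_j w_j N(x; m_j, C_j).

    Uses the identities
        exp(-||x - mu||^2 / sigma^2) = (pi sigma^2)^(d/2) N(x; mu, (sigma^2/2) I)
        integral N(x; mu, S) N(x; m, C) dx = N(mu; m, S + C),
    so the d-fold integral reduces to J Gaussian density evaluations.
    """
    d = len(mu_k)
    # RBF kernel viewed as an unnormalized Gaussian: covariance and lost scale
    kernel_cov = (sigma_k ** 2 / 2.0) * np.eye(d)
    norm_const = (np.pi * sigma_k ** 2) ** (d / 2.0)
    # One density evaluation per mixture component instead of numerical quadrature
    integral = sum(
        w * multivariate_normal.pdf(mu_k, mean=m, cov=kernel_cov + C)
        for w, m, C in zip(gmm_weights, gmm_means, gmm_covs)
    )
    return abs(alpha_k) * norm_const * integral

# Example: 2-D input density modeled by a two-component GMM (illustrative numbers)
sig = unit_significance(
    alpha_k=0.7,
    mu_k=np.array([0.0, 0.0]),
    sigma_k=1.0,
    gmm_weights=[0.6, 0.4],
    gmm_means=[np.zeros(2), np.array([3.0, 3.0])],
    gmm_covs=[np.eye(2), 0.5 * np.eye(2)],
)
print(sig)
```

With a J-component GMM, each significance evaluation costs J Gaussian density evaluations rather than a d-fold numerical quadrature, which is what makes this kind of approximation tractable for high-dimensional input densities.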