This paper investigates the automatic construction of radial basis function (RBF) neural models for nonlinear dynamic systems. The main objective is to automatically and effectively produce a parsimonious RBF neural model that generalizes well. This is achieved by proposing a locally regularized automatic construction (LRAC) method which combines a recently proposed fast recursive algorithm (FRA) with the leave-one-out (LOO) cross-validation criterion. The new method offers distinctive advantages over existing approaches. Firstly, it uses an error criterion in which the original model parameters are regularized, in contrast to orthogonal least squares (OLS)-based approaches, where transformed model parameters are regularized. This enables the significance of each original candidate center to be determined directly and produces a compact neural model. Further, it can automatically determine the network size by iteratively minimizing the LOO mean square error (MSE), without the need to specify any additional termination criterion. Finally, by defining a proper regression context, the whole network construction process can be concisely formulated and easily implemented with significantly reduced computation. An analysis of computational complexity confirms the efficiency of the proposed method, and simulation results demonstrate its effectiveness, in comparison with alternative approaches, for producing sparse RBF neural models.
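The construction scheme the abstract describes — growing the network one candidate center at a time and stopping when the LOO MSE stops decreasing — can be illustrated with a naive sketch. This is not the paper's fast recursive algorithm (FRA), which updates the required quantities incrementally; the version below simply refits the regularized least-squares model for each candidate and evaluates the closed-form LOO error, e_i / (1 − h_ii), from the regularized hat matrix. The Gaussian kernel, the fixed width, the regularization weight, and the use of training points as candidate centers are all illustrative assumptions.

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    """Gaussian RBF responses: entry (i, j) is exp(-||x_i - c_j||^2 / (2*width^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def loo_mse(Phi, y, lam):
    """Closed-form leave-one-out MSE for a regularized linear-in-parameters model.

    With hat matrix H = Phi (Phi^T Phi + lam I)^{-1} Phi^T, the LOO residual for
    sample i is e_i / (1 - h_ii), so no explicit refitting per left-out sample
    is needed.
    """
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    H = Phi @ np.linalg.solve(A, Phi.T)
    resid = y - H @ y
    return float(np.mean((resid / (1.0 - np.diag(H))) ** 2))

def greedy_rbf_construction(X, y, width=1.0, lam=1e-3):
    """Forward-select RBF centers (candidates: the training points themselves),
    adding at each step the center that most reduces the LOO MSE, and stopping
    automatically when no candidate improves it further."""
    selected, best = [], np.inf
    remaining = list(range(len(X)))
    while remaining:
        scores = [loo_mse(rbf_design_matrix(X, X[selected + [j]], width), y, lam)
                  for j in remaining]
        if min(scores) >= best:
            break  # LOO MSE no longer decreases: network growth stops here
        best = min(scores)
        j_best = remaining[int(np.argmin(scores))]
        selected.append(j_best)
        remaining.remove(j_best)
    return X[selected], best
```

Note that the LOO MSE serves as both the selection score and the termination criterion, which is what removes the need for a separate user-specified stopping rule; the paper's contribution lies in computing these quantities recursively rather than by the O(m³) refit per candidate used here.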