This paper investigates the construction of linear-in-the-parameters (LITP) models for multi-output regression problems. Most existing stepwise forward algorithms select regressor terms one by one, each time maximizing the model error reduction ratio. The drawback is that such procedures cannot guarantee a sparse model, especially under highly noisy learning conditions. The main objective of this paper is to improve the sparsity and generalization capability of a model for multi-output regression problems while reducing the computational complexity. This is achieved by proposing a novel multi-output two-stage locally regularized model construction (MTLRMC) method using the extreme learning machine (ELM). In the new algorithm, the nonlinear parameters in each term, such as the width of a Gaussian function or the power of a polynomial term, are first determined by the ELM. An initial multi-output LITP model is then generated according to the termination criteria in the first stage. In the second stage, the significance of each selected regressor is checked and insignificant ones are replaced. The proposed method produces an optimized compact model through the regularized parameters. Further, to reduce the computational complexity, a proper regression context is used to allow fast implementation of the proposed method. Simulation results confirm the effectiveness of the proposed technique.
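To make the two-stage idea concrete, the following is a minimal sketch, not the authors' MTLRMC implementation: it assumes Gaussian RBF candidate regressors with ELM-style random centres and widths, approximates the local regularization by a single ridge parameter `lam`, and uses greedy forward selection (stage 1) followed by per-term replacement checks (stage 2). The function names `rbf_candidates` and `two_stage_select` and the toy data are hypothetical, introduced only for illustration.

```python
# Hedged sketch of a two-stage multi-output LITP model construction.
# NOT the authors' MTLRMC code: local regularization is approximated
# by one ridge parameter lam, and the fast-implementation details of
# the paper's regression context are omitted (assumptions).
import numpy as np

def rbf_candidates(X, n_candidates, rng):
    """Candidate regressors with ELM-style randomly assigned nonlinear
    parameters: centres drawn from the data, random Gaussian widths."""
    idx = rng.integers(0, X.shape[0], n_candidates)
    centres = X[idx]
    widths = rng.uniform(0.5, 2.0, n_candidates)
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * widths ** 2))  # shape: N x n_candidates

def sse(P, Y, lam=1e-6):
    """Total squared error, summed over all outputs, of the
    ridge-regularized least-squares fit Y ~ P @ Theta."""
    G = P.T @ P + lam * np.eye(P.shape[1])
    Theta = np.linalg.solve(G, P.T @ Y)
    return float(((Y - P @ Theta) ** 2).sum())

def two_stage_select(Phi, Y, n_terms, lam=1e-6):
    """Stage 1: greedy forward selection by multi-output error
    reduction. Stage 2: revisit each selected term and swap it for
    any unselected candidate that lowers the total error."""
    selected, remaining = [], list(range(Phi.shape[1]))
    for _ in range(n_terms):  # stage 1: forward selection
        best = min(remaining,
                   key=lambda j: sse(Phi[:, selected + [j]], Y, lam))
        selected.append(best)
        remaining.remove(best)
    improved = True
    while improved:  # stage 2: check and replace insignificant terms
        improved = False
        for pos in range(len(selected)):
            others = selected[:pos] + selected[pos + 1:]
            current = sse(Phi[:, others + [selected[pos]]], Y, lam)
            for j in remaining:
                if sse(Phi[:, others + [j]], Y, lam) < current:
                    remaining.append(selected[pos])  # return old term
                    selected[pos] = j                # take better term
                    remaining.remove(j)
                    improved = True
                    break
    return selected

# Toy two-output usage example (hypothetical data).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
Y = np.column_stack([np.sin(3 * X[:, 0]), X.prod(1)])
Y += 0.05 * rng.standard_normal(Y.shape)
Phi = rbf_candidates(X, 50, rng)
print("selected regressors:", two_stage_select(Phi, Y, n_terms=8))
```

Each stage-2 swap strictly decreases the regularized error over a finite candidate set, so the replacement loop terminates; this mirrors the abstract's claim that checking and replacing insignificant regressors yields a more compact model than one-pass forward selection alone.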