This paper investigates the design of a linear-in-the-parameters (LITP) regression classifier for two-class problems. Most existing algorithms learn a classifier (model) from the available training data based on some stopping criterion, such as Akaike's final prediction error (FPE). The drawback is that the resulting classifier is not constructed directly on the basis of its generalization capability. The main objective of this paper is to improve the sparsity and generalization capability of a classifier while reducing the computational expense of producing it. This is achieved by proposing an automatic two-stage locally regularized classifier construction (TSLRCC) method using the extreme learning machine (ELM). In this new algorithm, the nonlinear parameters of each term, such as the width of a Gaussian function or the power of a polynomial term, are first determined by the ELM. In the first stage, an initial classifier is then generated by directly evaluating these candidate models according to the leave-one-out (LOO) misclassification rate. In the second stage, the significance of each selected regressor term is checked, and insignificant ones are replaced. To reduce the computational complexity, a proper regression context is defined that allows fast implementation of the proposed method. Simulation results confirm the effectiveness of the proposed technique.
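The two-stage idea described in the abstract can be illustrated with a minimal NumPy sketch. It is not the authors' implementation: the candidate pool, the data, and all parameter values (regularization strength, pool size, term budget) are illustrative assumptions. It uses the standard hat-matrix identity e_loo_i = e_i / (1 - h_ii) to score LITP models by their LOO misclassification rate without refitting n times, then runs a greedy forward-selection stage followed by a term-replacement stage, mirroring the construction the abstract outlines.

```python
import numpy as np

def loo_misclassification_rate(Phi, y, lam=1e-3):
    """LOO misclassification rate of a regularized least-squares classifier
    y ~ Phi @ theta, using the hat-matrix identity e_loo = e / (1 - h_ii)
    so no explicit n-fold refitting is needed. Labels are +/-1."""
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    theta = np.linalg.solve(A, Phi.T @ y)
    resid = y - Phi @ theta
    h = np.einsum('ij,ij->i', Phi @ np.linalg.inv(A), Phi)  # diag of hat matrix
    y_loo = y - resid / (1.0 - h)      # leave-one-out predictions
    return np.mean(y * y_loo <= 0)

def two_stage_select(candidates, y, max_terms=8, lam=1e-3):
    """Stage 1: greedily add the candidate regressor that most reduces the
    LOO rate. Stage 2: try to swap each selected term for an unused one."""
    n, m = candidates.shape
    selected, best = [], np.inf
    while len(selected) < max_terms:            # stage 1: forward selection
        trials = [(loo_misclassification_rate(candidates[:, selected + [j]], y, lam), j)
                  for j in range(m) if j not in selected]
        rate, j = min(trials)
        if rate >= best:
            break
        best, selected = rate, selected + [j]
    improved = True                              # stage 2: term replacement
    while improved:
        improved = False
        for pos in range(len(selected)):
            for j in range(m):
                if j in selected:
                    continue
                trial = selected.copy()
                trial[pos] = j
                rate = loo_misclassification_rate(candidates[:, trial], y, lam)
                if rate < best:
                    best, selected, improved = rate, trial, True
    return selected, best

# Synthetic two-class demo with an ELM-style candidate pool: Gaussian
# regressors whose centres and widths are drawn at random (hypothetical values).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (40, 2)), rng.normal(1, 0.5, (40, 2))])
y = np.r_[-np.ones(40), np.ones(40)]
centres = rng.uniform(-2, 2, (30, 2))
widths = rng.uniform(0.5, 2.0, 30)
Phi = np.exp(-((X[:, None, :] - centres[None]) ** 2).sum(-1) / widths ** 2)
sel, rate = two_stage_select(Phi, y)
```

Randomizing the nonlinear parameters (centres and widths) up front and then selecting among the resulting regressors by the LOO rate is the key efficiency point: each candidate model is scored in closed form, so the search cost is dominated by small linear solves rather than repeated nonlinear optimization.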