A novel two-stage construction algorithm for linear-in-the-parameters classifiers is proposed, aimed at noisy two-class classification problems. The first stage produces a prefiltered signal that serves as the desired output for the second stage, which constructs a sparse linear-in-the-parameters classifier. To generate the prefiltered signal, the first stage uses a two-level algorithm that maximizes the model's generalization capability: at the lower level, an elastic net model identification algorithm based on singular value decomposition is employed, while the two regularization parameters are selected at the upper level by maximizing the Bayesian evidence using a particle swarm optimization algorithm. An analysis demonstrates how "Occam's razor" is embodied in this approach. The second stage constructs the sparse classifier using orthogonal forward regression with the D-optimality criterion. Extensive experimental results demonstrate that the proposed approach is effective and yields competitive results on noisy data sets.
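The upper-level idea of the first stage, choosing regularization parameters by maximizing the Bayesian (marginal-likelihood) evidence with the lower-level solve done through an SVD, can be illustrated with a minimal sketch. This is not the paper's algorithm: it simplifies the elastic net to a single ridge (L2) penalty with prior precision `alpha` and noise precision `beta`, and replaces the particle swarm optimizer with a plain grid search; the MacKay-style evidence formula used here is standard for Gaussian linear models.

```python
import numpy as np

def log_evidence(Phi, y, alpha, beta):
    """Log marginal likelihood of y ~ N(Phi @ w, 1/beta) with a
    Gaussian prior w ~ N(0, 1/alpha) (MacKay's evidence framework),
    computed via the SVD of the design matrix Phi."""
    N, M = Phi.shape
    U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
    # Posterior mean in the SVD basis:
    # w_mp = V diag(beta*s/(alpha + beta*s^2)) U^T y
    Uty = U.T @ y
    w_mp = Vt.T @ ((beta * s / (alpha + beta * s**2)) * Uty)
    resid = y - Phi @ w_mp
    # Regularized error at the posterior mode
    E = 0.5 * beta * resid @ resid + 0.5 * alpha * w_mp @ w_mp
    # log|A| with A = alpha*I + beta*Phi^T Phi, from the singular values
    log_det_A = np.sum(np.log(alpha + beta * s**2)) + (M - len(s)) * np.log(alpha)
    return (0.5 * M * np.log(alpha) + 0.5 * N * np.log(beta)
            - E - 0.5 * log_det_A - 0.5 * N * np.log(2 * np.pi))

def select_hyperparams(Phi, y, grid=np.logspace(-4, 2, 13)):
    """Pick (alpha, beta) maximizing the evidence.
    Grid search stands in for the particle swarm optimizer."""
    best = max((log_evidence(Phi, y, a, b), a, b)
               for a in grid for b in grid)
    return best[1], best[2]
```

Because the SVD is computed once per candidate design matrix, each (alpha, beta) trial is cheap, which is what makes an evidence-driven search over the penalty parameters practical.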