A 1-norm support vector machine stepwise (SVMS) algorithm is proposed for selecting the hidden neurons of wavelet networks (WNs). In the new algorithm, a linear programming support vector machine (LPSVM) first pre-selects the hidden neurons, and a stepwise selection procedure based on ridge regression then chooses the final hidden neurons from this pre-selection. The main advantages of the new algorithm are that it is robust to ill-conditioning of the design matrix and that it scales to problems with a large number of candidate neurons or a large number of samples. Four examples illustrate the efficiency of the new algorithm.
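The two-stage idea can be sketched in code. The following is a minimal illustration, not the paper's exact algorithm: it assumes a matrix `H` whose columns are candidate hidden-neuron outputs and a target vector `y`, uses a 1-norm (linear programming) fit as a stand-in for the LPSVM pre-selection, and then runs greedy stepwise selection with ridge regression, whose regularization term guards against the ill-conditioning mentioned above. The function names and the specific LP formulation are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def lpsvm_preselect(H, y, C=1.0, tol=1e-6):
    """Sparse 1-norm fit over candidate neuron outputs (illustrative LPSVM stand-in).

    Solves  min ||w||_1 + C * ||H w - y||_1  as a linear program by splitting
    w and the residual e into positive and negative parts, then returns the
    indices of neurons whose weights are non-negligible.
    """
    n, m = H.shape
    # Variables: [w+, w-, e+, e-], all constrained to be >= 0.
    c = np.concatenate([np.ones(2 * m), C * np.ones(2 * n)])
    # Equality constraint: H (w+ - w-) + e+ - e- = y.
    A_eq = np.hstack([H, -H, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    w = res.x[:m] - res.x[m:2 * m]
    return np.flatnonzero(np.abs(w) > tol)

def stepwise_ridge(H, y, candidates, lam=1e-3, max_neurons=10):
    """Greedy forward selection with ridge regression on pre-selected neurons."""
    selected, remaining = [], list(candidates)
    best_err = np.inf
    while remaining and len(selected) < max_neurons:
        errs = []
        for j in remaining:
            cols = selected + [j]
            Hs = H[:, cols]
            # The ridge term lam*I keeps Hs^T Hs invertible even when the
            # candidate columns are nearly collinear (ill-conditioned).
            w = np.linalg.solve(Hs.T @ Hs + lam * np.eye(len(cols)), Hs.T @ y)
            errs.append(float(np.sum((Hs @ w - y) ** 2)))
        k = int(np.argmin(errs))
        if errs[k] >= best_err:          # no improvement: stop adding neurons
            break
        best_err = errs[k]
        selected.append(remaining.pop(k))
    return selected
```

On synthetic data where `y` depends on only a few candidate columns, `lpsvm_preselect` prunes the candidate pool to a small sparse subset and `stepwise_ridge` then orders and trims that subset, mirroring the pre-selection/stepwise structure described in the abstract.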