Computational Statistics & Data Analysis
A modified active subset selection method based on quadratic Rényi entropy, together with a fast cross-validation procedure, is proposed for fixed-size least squares support vector machines (LS-SVMs) for classification and regression with an optimized tuning process. The kernel bandwidth of the entropy-based selection criterion is determined optimally by the solve-the-equation plug-in method. In addition, a fast cross-validation method based on a simple updating scheme is developed. The combination of these two techniques makes it feasible to handle large-scale data sets on standard personal computers. Finally, the test-set performance and computational time of this fixed-size method are compared to those of standard support vector machines and ν-support vector machines; the proposed method yields sparser models with lower computational cost and comparable accuracy.
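To illustrate the entropy-based selection step, the following is a minimal sketch of active subset selection driven by the quadratic Rényi entropy. It assumes a Gaussian kernel with a fixed, user-supplied bandwidth `sigma` and a simple random-swap search; the paper's modified criterion and its solve-the-equation plug-in bandwidth selection are not reproduced here, and the function names are hypothetical.

```python
import numpy as np

def renyi_entropy(X, sigma):
    """Quadratic Renyi entropy estimate via a Gaussian kernel density estimate:
    H2 = -log( (1/N^2) * sum_ij exp(-||x_i - x_j||^2 / (2 sigma^2)) )."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)   # pairwise squared distances
    return -np.log(np.mean(np.exp(-d2 / (2.0 * sigma**2))))

def select_subset(X, m, sigma, iters=500, seed=None):
    """Entropy-maximizing active subset selection: start from a random subset
    of size m and accept random single-point swaps that increase the quadratic
    Renyi entropy of the subset (i.e. spread the prototypes over the data)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    idx = rng.choice(n, size=m, replace=False)
    best = renyi_entropy(X[idx], sigma)
    for _ in range(iters):
        i = rng.integers(m)          # position in the subset to replace
        j = rng.integers(n)          # candidate point from the full data set
        if j in idx:
            continue
        trial = idx.copy()
        trial[i] = j
        h = renyi_entropy(X[trial], sigma)
        if h > best:                 # keep the swap only if entropy increases
            idx, best = trial, h
    return idx
```

The selected indices would then define the prototype vectors on which a fixed-size LS-SVM approximates the feature map, keeping the model size at `m` regardless of the number of training points.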