This paper presents how commonly used machine learning classifiers can be analyzed within a common framework of convex optimization. Four classifier models, the Support Vector Machine (SVM), the Least-Squares SVM (LSSVM), the Extreme Learning Machine (ELM), and the Margin Loss ELM (MLELM), are discussed to demonstrate how specific parametrizations of a general problem statement affect classifier design and performance, and how ideas from the four classifiers can be mixed and combined. Furthermore, 21 public-domain benchmark datasets are used to experimentally evaluate five performance metrics for each model and to corroborate the theoretical analysis. A comparison of classification accuracies under nested cross-validation shows that, with one exception, all four models perform similarly on the evaluated datasets. However, the four classifiers require different amounts of computational resources for both training and testing, and these requirements are directly linked to their formulations as different convex optimization problems.
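The abstract evaluates the classifiers under nested cross-validation, where an outer loop estimates generalization accuracy and an inner loop performs hyperparameter selection on the training folds only, so the test folds never influence model selection. The paper does not give its evaluation code; the sketch below is a minimal, hypothetical illustration of that protocol (the names `k_folds`, `nested_cv`, `train_fn`, and the parameter grid are illustrative, not from the paper):

```python
# Hypothetical sketch of nested cross-validation (not the paper's code).
# Outer loop: estimate generalization accuracy on held-out folds.
# Inner loop: select a hyperparameter (e.g., a regularization constant)
# using only the outer-training data.

def k_folds(n, k):
    """Yield (train_indices, test_indices) for k roughly equal contiguous folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx, start = list(range(n)), 0
    for size in fold_sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size

def nested_cv(X, y, train_fn, param_grid, outer_k=5, inner_k=3):
    """Return mean outer-fold accuracy, with inner-fold hyperparameter selection.

    train_fn(X_train, y_train, param) must return a predictor f(x) -> label.
    """
    outer_scores = []
    for tr, te in k_folds(len(X), outer_k):
        # Inner loop: score each candidate parameter on the outer-train split only.
        best_p, best_score = None, -1.0
        for p in param_grid:
            inner_scores = []
            for itr, ite in k_folds(len(tr), inner_k):
                model = train_fn([X[tr[i]] for i in itr],
                                 [y[tr[i]] for i in itr], p)
                acc = sum(model(X[tr[i]]) == y[tr[i]] for i in ite) / len(ite)
                inner_scores.append(acc)
            score = sum(inner_scores) / len(inner_scores)
            if score > best_score:
                best_p, best_score = p, score
        # Retrain on the full outer-train split with the selected parameter,
        # then evaluate once on the untouched outer-test fold.
        model = train_fn([X[i] for i in tr], [y[i] for i in tr], best_p)
        outer_scores.append(sum(model(X[i]) == y[i] for i in te) / len(te))
    return sum(outer_scores) / len(outer_scores)
```

The key design point, matching the abstract's evaluation setup, is that the outer-test fold is touched exactly once per outer iteration, after model selection has finished, so the reported accuracy is an unbiased estimate of performance with tuned hyperparameters.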