References
- Machine Learning
- A decision-theoretic generalization of on-line learning and an application to boosting. EuroCOLT '95: Proceedings of the Second European Conference on Computational Learning Theory.
- Classification of honeybee pollen using a multiscale texture filtering scheme. Machine Vision and Applications.
- Geometrical synthesis of MLP neural networks. Neurocomputing.
- An error-counting network for pattern classification. Neurocomputing.
- SoftDoubleMaxMinOver: perceptron-like training of support vector machines. IEEE Transactions on Neural Networks.
- LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST).
- Modern Applied Statistics with S.
- Common scab detection on potatoes using an infrared hyperspectral imaging system. ICIAP '11: Proceedings of the 16th International Conference on Image Analysis and Processing, Part II.
- Automatic detection and classification of grains of pollen based on shape and texture. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews.
- A comparison of methods for multiclass support vector machines. IEEE Transactions on Neural Networks.
- A hybrid ART-GRNN online learning neural network with an ε-insensitive loss function. IEEE Transactions on Neural Networks.
- Multiconlitron: a general piecewise linear classifier. IEEE Transactions on Neural Networks.
- The Margitron: a generalized perceptron with margin. IEEE Transactions on Neural Networks.
- Dynamic ensemble extreme learning machine based on sample entropy. Soft Computing (Special Issue on Extreme Learning Machines, ELM 2011, Hangzhou, China, December 6–8, 2011).
The Direct Kernel Perceptron (DKP) (Fernandez-Delgado et al., 2010) is a very simple and fast kernel-based classifier, related to the Support Vector Machine (SVM) and to the Extreme Learning Machine (ELM) (Huang, Wang, & Lan, 2011), whose α-coefficients are calculated directly, without any iterative training, using an analytical closed-form expression that involves only the training patterns. The DKP, which is inspired by the Direct Parallel Perceptron (Auer et al., 2008), uses a Gaussian kernel and a linear classifier (perceptron). The weight vector of this classifier in the feature space minimizes an error measure that combines the training error and the hyperplane margin, without any tunable regularization parameter. This weight vector can be translated, by a change of variable, into the α-coefficients, and both are determined without iterative calculations. We calculate solutions using several error functions, achieving the best trade-off between accuracy and efficiency with the linear error function. These solutions for the α-coefficients can be considered alternatives to the ELM with a new physical meaning in terms of error and margin: in fact, the linear and quadratic DKP are special cases of the two-class ELM when the regularization parameter C takes the values C=0 and C=∞, respectively. The linear DKP is extremely efficient and much faster (over a vast collection of 42 benchmark and real-life data sets) than 12 very popular and accurate classifiers, including SVM, Multi-Layer Perceptron, AdaBoost, Random Forest, Bagging of RPART decision trees, Linear Discriminant Analysis, K-Nearest Neighbors, ELM, Probabilistic Neural Networks, Radial Basis Function neural networks, and Generalized ART. Moreover, despite its simplicity and extreme efficiency, the DKP achieves higher accuracies than 7 of the 12 classifiers, exhibiting only small differences with respect to the best ones (SVM, ELM, AdaBoost, and Random Forest), which are much slower.
Thus, the DKP provides an easy and fast way to achieve classification accuracies that are not far from the best achievable on a given problem. The C and Matlab code of the DKP is freely available.
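To make the idea of a non-iterative kernel classifier concrete, the following is a minimal sketch in Python/NumPy. It assumes the simplest possible closed form, setting each α-coefficient directly from its training label (α_i = y_i), which yields a Parzen-window-like decision function f(x) = sign(Σ_i α_i K(x_i, x)) with a Gaussian kernel; the actual DKP expression, which also accounts for the margin term, is given in Fernandez-Delgado et al. (2010). The function names `dkp_train` and `dkp_predict` are illustrative, not from the authors' released code.

```python
import numpy as np

def dkp_train(X, y, sigma=1.0):
    """Illustrative DKP-style training: the alpha coefficients are set
    directly from the training labels (here simply alpha_i = y_i),
    with no iterative optimization. X: (n, d) patterns, y: labels in {-1, +1}.
    NOTE: this is a hedged simplification, not the paper's exact closed form."""
    alpha = np.asarray(y, dtype=float)
    return X, alpha, sigma

def dkp_predict(model, Xq):
    """Classify query patterns Xq with f(x) = sign(sum_i alpha_i K(x_i, x))."""
    X, alpha, sigma = model
    # Squared Euclidean distances between every query and every training pattern
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    # Gaussian (RBF) kernel matrix, shape (n_queries, n_train)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.sign(K @ alpha)

# Usage: two well-separated clusters with labels -1 and +1
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y = np.array([-1, -1, 1, 1])
model = dkp_train(X, y)
pred = dkp_predict(model, np.array([[0.0, 0.5], [5.0, 5.5]]))
```

Because no optimization loop is involved, "training" is just storing the patterns and labels; all the cost is in the kernel evaluations at prediction time, which is consistent with the efficiency claims made for the linear DKP above.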