The nature of statistical learning theory
An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods
Machine Learning
A Tutorial on Support Vector Machines for Pattern Recognition
Data Mining and Knowledge Discovery
Choosing Multiple Parameters for Support Vector Machines
Machine Learning
A Simple Decomposition Method for Support Vector Machines
Machine Learning
Neural Networks: Tricks of the Trade (this book is an outgrowth of a 1996 NIPS workshop)
Ho-Kashyap classifier with generalization control
Pattern Recognition Letters
Lagrangian support vector machines
The Journal of Machine Learning Research
Links between perceptrons, MLPs and SVMs
ICML '04 Proceedings of the twenty-first international conference on Machine learning
Temporal evolution of generalization during learning in linear networks
Neural Computation
This paper focuses on linear classification using a fast and simple algorithm known as the Ho-Kashyap learning rule (HK). To avoid overfitting, early stopping is introduced as a regularization method for HK learning in place of a regularization term in the criterion, yielding HKES (Ho-Kashyap with early stopping). Furthermore, an automatic procedure, based on an estimate of the generalization error, is proposed to tune the stopping time. The method is then tested on well-known benchmarks and compared to other classifiers (including SVM and LSVM) that use either the ℓ1- or ℓ2-norm of the errors. The results show the limits of early stopping as a regularization scheme with respect to generalization error estimation, and the drawbacks of low-level hyperparameters such as the number of iterations.
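The abstract describes early stopping for the Ho-Kashyap rule without giving details. As a minimal illustrative sketch (not the authors' HKES implementation), the classical HK iteration alternates a margin update with a least-squares weight update, and a held-out set can be monitored to pick the stopping point; the step size `eta`, the validation split, and the selection rule below are all assumptions for illustration:

```python
import numpy as np

def ho_kashyap_early_stopping(X, y, X_val, y_val, eta=0.5, max_iter=200):
    """Ho-Kashyap learning with early stopping on a validation set.

    X, y           : training samples and labels in {-1, +1}
    X_val, y_val   : held-out samples used to choose the stopping iteration
    Returns the weight vector (with bias as last component) that achieved
    the lowest validation error, and that error.
    """
    # Normalized data matrix: append a bias column, negate rows of class -1,
    # so that a correct classification corresponds to Y @ a > 0.
    Y = np.hstack([X, np.ones((len(X), 1))]) * y[:, None]
    Y_pinv = np.linalg.pinv(Y)

    b = np.ones(len(Y))            # margin vector, kept strictly positive
    a = Y_pinv @ b                 # least-squares weight vector for Y a = b
    best_a, best_err = a.copy(), np.inf

    A_val = np.hstack([X_val, np.ones((len(X_val), 1))])
    for _ in range(max_iter):
        e = Y @ a - b              # error vector of the linear system
        b = b + eta * (e + np.abs(e))   # grow b only where e > 0
        a = Y_pinv @ b             # re-solve for the weights
        # Early stopping: keep the iterate with the lowest validation error.
        err = np.mean(np.sign(A_val @ a) != y_val)
        if err < best_err:
            best_err, best_a = err, a.copy()
    return best_a, best_err
```

Here the number of iterations acts as the regularization hyperparameter, which is exactly the kind of low-level hyperparameter whose drawbacks the abstract points out.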