This paper describes a novel approach to building a piecewise (non)linear surface that separates individuals from two classes with a prescribed (a priori) classification accuracy. In particular, total classification with a good generalization level can be obtained, provided no individual belongs to both classes. The method is iterative: at each iteration a new piece of the surface is found by solving a Linear Programming model. Theoretically, the larger the number of iterations, the better the classification accuracy on the training set; numerically, we also found that the generalization ability does not deteriorate on the cases tested. Nonetheless, we include a procedure that computes a lower bound on the number of errors that will be generated on any given validation set, and an early-stopping criterion is provided if needed. We also show that each piece of the discriminating surface is equivalent to a neuron of a feed-forward neural network (FFNN); as a byproduct, we obtain a novel training scheme for FFNNs that avoids the minimization of non-convex functions, which in general present many local minima. We compare this algorithm with a new linear SVM that needs no pre-tuning and performs very well on standard and synthetic data. Highly encouraging numerical results are reported on synthetic examples, on the Japanese Bank dataset, and on small and medium-sized datasets from the Irvine repository of machine learning databases.
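The abstract only outlines the iterative scheme (one LP-derived hyperplane piece per iteration, later interpretable as one hidden neuron). The following is a minimal sketch of that loop, assuming a robust-LP-style slack-minimizing formulation for each piece and a simple "peel off already-separated points" rule between iterations; the paper's actual LP model and stopping/peeling rules may differ, and all function names here are illustrative.

```python
import numpy as np
from scipy.optimize import linprog


def fit_separating_hyperplane(A, B):
    """One LP piece: find (w, gamma) minimizing the average slack of
    misclassified points (assumed robust-LP-style formulation; the
    paper's exact LP model is not given in the abstract).

    A: (m, n) points of class +1, B: (k, n) points of class -1.
    """
    m, n = A.shape
    k = B.shape[0]
    # Variable layout: [w (n), gamma (1), y (m), z (k)]
    c = np.concatenate([np.zeros(n + 1), np.full(m, 1.0 / m), np.full(k, 1.0 / k)])

    # A w - gamma + y >= 1   ->  -A w + gamma - y <= -1
    # -B w + gamma + z >= 1  ->   B w - gamma - z <= -1
    G1 = np.hstack([-A, np.ones((m, 1)), -np.eye(m), np.zeros((m, k))])
    G2 = np.hstack([B, -np.ones((k, 1)), np.zeros((k, m)), -np.eye(k)])
    A_ub = np.vstack([G1, G2])
    b_ub = -np.ones(m + k)

    bounds = [(None, None)] * (n + 1) + [(0, None)] * (m + k)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    w, gamma = res.x[:n], res.x[n]
    return w, gamma


def fit_piecewise_surface(A, B, max_pieces=10):
    """Iteratively add hyperplane pieces. After each LP solve, class +1
    points already lying strictly on the positive side are removed and
    the next piece is fitted to the remaining points (one plausible
    peeling strategy, not necessarily the paper's). Each returned
    (w, gamma) pair plays the role of one hidden neuron of the FFNN."""
    pieces = []
    remaining = A.copy()
    for _ in range(max_pieces):
        if remaining.size == 0:
            break
        w, gamma = fit_separating_hyperplane(remaining, B)
        pieces.append((w, gamma))
        scores = remaining @ w - gamma
        still_mixed = scores < 1.0   # points not yet cleanly separated
        if still_mixed.all():        # no progress: stop early
            break
        remaining = remaining[still_mixed]
    return pieces
```

A usage example would call `fit_piecewise_surface(X_pos, X_neg)` on the two class matrices and classify a new point by checking which pieces place it on the positive side; the mapping from pieces to hidden units would then give the FFNN reading of the same surface.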