Support vector machines (SVMs) have demonstrated high predictive accuracy in many applications. However, like any statistical learning algorithm, an SVM's accuracy drops when some of the training points are contaminated by an unknown source of noise. Choosing clean training points is critical for avoiding overfitting, which generally occurs when the model is excessively complex and manifests as high accuracy on the training set but low accuracy on the test set (unseen points). In this paper we present a new multi-level SVM architecture that splits the training set into points labeled 'easily classifiable', which do not increase the model's complexity, and 'non-easily classifiable', which are responsible for increasing it. This method yields, on average, higher accuracy than a traditional soft-margin SVM trained on the same training set. The architecture is evaluated on the well-known US Postal Service handwritten digit recognition problem, the Wisconsin breast cancer dataset, and an agitation detection dataset. The results show an increase in overall accuracy on all three datasets. Throughout this paper, the word confidence denotes confidence in the decision, as commonly used in the literature.
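The abstract does not spell out the splitting rule, so the following is a minimal sketch of one plausible reading in scikit-learn: points that a first-level soft-margin SVM misclassifies, or that fall inside its margin band, are treated as 'non-easily classifiable' and routed to a second-level SVM. The MARGIN threshold, the kernel choices, and the C values are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assumed confidence threshold: first-level decision values with absolute
# value below MARGIN are treated as 'non-easily classifiable'.
MARGIN = 1.0

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Level 1: a traditional soft-margin SVM trained on the full training set.
svm1 = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)

# Split the training set: points inside the margin band or misclassified
# by level 1 are the 'non-easily classifiable' points.
scores = svm1.decision_function(X_tr)
hard = (np.abs(scores) < MARGIN) | (svm1.predict(X_tr) != y_tr)

# Level 2: a second SVM trained only on the hard points (the sketch
# assumes both classes appear among them; a guard would be needed otherwise).
svm2 = SVC(kernel="rbf", C=10.0).fit(X_tr[hard], y_tr[hard])

def predict(X):
    # Trust level 1 where its decision is confident, defer to level 2 elsewhere.
    s = svm1.decision_function(X)
    out = svm1.predict(X)
    unsure = np.abs(s) < MARGIN
    if unsure.any():
        out[unsure] = svm2.predict(X[unsure])
    return out

print("two-level accuracy:", (predict(X_te) == y_te).mean())
```

In this reading, the second-level model sees only the points that drive up the first model's complexity, so each level solves a simpler problem than a single SVM fit to the whole, partially noisy training set.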