It has been shown that the structural information of data may provide useful prior domain knowledge for training a classifier, and how to exploit this information to build a better classifier has recently become an active research topic. The existing structural large margin methods share a common trait: they fold all within-class structural information into a single model. As a result, they do not balance the relationships among the intra-class and inter-class structural information, so this prior information is not exploited sufficiently. In this paper, we propose a new Structural Twin Support Vector Machine (S-TWSVM). Unlike existing methods based on structural information, S-TWSVM uses two hyperplanes to decide the category of new data; each model considers the structural information of only one class and requires its hyperplane to be close to that class while staying far from the other class. This allows S-TWSVM to exploit the prior knowledge fully and thereby directly improve the generalization capacity of the algorithm. Our experiments show that the proposed method outperforms the state-of-the-art algorithms based on structural information of data in both computation time and classification accuracy.
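To make the twin-hyperplane idea concrete, the following is a minimal NumPy sketch, not the paper's exact optimization: each class gets its own hyperplane fitted in a least-squares fashion (close to its own class, pushed away from the other), with that class's covariance matrix added as a structural regularizer; a new point is assigned to the class whose hyperplane lies nearer. All names, the loss form, and the parameters `c`, `lam`, and `eps` are illustrative assumptions.

```python
import numpy as np

def fit_structural_twin_planes(A, B, c=1.0, lam=0.1, eps=1e-6):
    """Illustrative least-squares sketch of a structural twin classifier.

    Plane 1 hugs class A, is pushed away from class B, and is regularized
    by the covariance of class A (the per-class structural term); plane 2
    is the mirror image. This is a simplified stand-in, not S-TWSVM's QP.
    """
    def plane(near, far):
        n = near.shape[1]
        G = np.hstack([near, np.ones((len(near), 1))])   # [x, 1] for the near class
        H = np.hstack([far, np.ones((len(far), 1))])     # [x, 1] for the far class
        S = np.zeros((n + 1, n + 1))
        S[:n, :n] = np.cov(near, rowvar=False)           # structural (covariance) term
        # Minimize ||G u||^2 + c ||H u + e||^2 + lam * w^T S w  (u = [w; b])
        M = G.T @ G + c * H.T @ H + lam * S + eps * np.eye(n + 1)
        u = np.linalg.solve(M, -c * H.T @ np.ones(len(far)))
        return u[:n], u[n]

    w1, b1 = plane(A, B)
    w2, b2 = plane(B, A)
    return w1, b1, w2, b2

def predict(X, w1, b1, w2, b2):
    # Decision rule: assign each point to the class whose hyperplane is nearer.
    d1 = np.abs(X @ w1 + b1) / np.linalg.norm(w1)
    d2 = np.abs(X @ w2 + b2) / np.linalg.norm(w2)
    return np.where(d1 <= d2, 0, 1)
```

On two well-separated clusters, each fitted plane passes through its own class while the other class sits at distance roughly 1/||w|| from it, so the nearest-plane rule recovers the labels.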