The support vector machine (SVM) has been demonstrated to be a very effective classifier in many applications, but its performance is still limited because data distribution information is underutilized in determining the decision hyperplane. Most existing kernels employed in nonlinear SVMs measure the similarity between a pair of pattern images based on the Euclidean inner product or the Euclidean distance of the corresponding input patterns, which ignores the data distribution tendency and makes the SVM essentially a "local" classifier. In this paper, we provide a step toward a paradigm of kernels by incorporating data-specific knowledge into existing kernels. We first find the data structure of each class adaptively in the input space via agglomerative hierarchical clustering (AHC), and then construct weighted Mahalanobis distance (WMD) kernels using the detected data distribution information. In WMD kernels, the similarity between two pattern images is determined not only by the Mahalanobis distance (MD) between their corresponding input patterns but also by the sizes of the clusters they reside in. Although WMD kernels are not guaranteed to be positive definite (pd) or conditionally positive definite (cpd), satisfactory classification results can still be achieved because the regularizers in SVMs with WMD kernels are empirically positive in pseudo-Euclidean (pE) spaces. Experimental results on both synthetic and real-world data sets show the effectiveness of "plugging" data structure into existing kernels.