This study presents two new clustering algorithms for partitioning data samples in support vector machine (SVM) based hierarchical classification. A divisive (top-down) approach is adopted in which a set of classes is automatically split into two smaller groups at each node of the hierarchy. The first algorithm partitions the data samples using a variant of the normalized cuts (NCuts) clustering algorithm in which the weights of the adjacency matrix are modified to exploit class membership. The second algorithm also builds on NCuts clustering, but it operates on the classes themselves rather than on individual data samples, using the minimum distances between the convex hulls of the classes as the distance measure for determining the edge weights of the graph. In both algorithms, the split is determined by the eigenvector corresponding to the second-smallest eigenvalue of a Laplacian matrix (the Fiedler vector), and the resulting clusters are observed to be both well separated and well balanced. Unlike other clustering methods used for this purpose, the proposed methods are found to be particularly suitable when SVMs serve as the base classifiers. As the experiments demonstrate, integrating the proposed clustering algorithms into hierarchical SVM classifiers yields significantly faster testing times with only a negligible loss in classification accuracy compared to traditional multi-class SVMs.
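To make the spectral machinery concrete, the sketch below shows a generic NCuts-style two-way split driven by the eigenvector of the second-smallest eigenvalue of the graph Laplacian (the Fiedler vector). This is a minimal illustration only: the RBF affinity, the bandwidth `sigma`, and the function name `fiedler_bipartition` are assumptions for demonstration, and it omits the paper's actual class-membership weighting and convex-hull distance measures.

```python
import numpy as np

def fiedler_bipartition(X, sigma=1.0):
    """Split samples into two groups using the eigenvector associated with
    the second-smallest eigenvalue (Fiedler vector) of the graph Laplacian,
    in the spirit of NCuts-style spectral clustering.

    Note: plain RBF affinities are used here for illustration; the paper's
    algorithms instead modify the weights via class membership or via
    minimum distances between class convex hulls.
    """
    # Pairwise squared Euclidean distances -> RBF affinity (adjacency) matrix.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    W = np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)

    # Unnormalized graph Laplacian L = D - W.
    D = np.diag(W.sum(axis=1))
    L = D - W

    # eigh returns eigenvalues in ascending order; column 0 is the constant
    # vector (eigenvalue ~0), column 1 is the Fiedler vector.
    _, eigvecs = np.linalg.eigh(L)
    fiedler = eigvecs[:, 1]

    # Sign of the Fiedler vector entries defines the two-way split.
    return fiedler >= 0

# Usage: two well-separated Gaussian blobs; the split recovers them.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(5.0, 0.3, (20, 2))])
mask = fiedler_bipartition(X)
```

In the hierarchical setting described above, such a bipartition would be applied recursively at each internal node, with a binary SVM trained to separate the two resulting groups.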