Recently, the maximum-margin criterion has been employed to learn a discriminative class-hierarchy model, which shows promising performance for rapid multi-class prediction. Specifically, at each node of this hierarchy, a separating hyperplane is learned to split the node's associated classes using all of the corresponding training data, leading to a time-consuming training process in computer vision applications with many classes, such as large-scale object recognition and scene classification. To address this issue, in this paper we propose a new, efficient discriminative class-hierarchy learning approach for many-class prediction. We first present a general objective function that unifies the two state-of-the-art methods for multi-class tasks. When there are many classes, this objective function reveals that some classes are in fact redundant: omitting them does not degrade the prediction performance of the learned class-hierarchy model. Based on this observation, we decompose the original optimization problem into a sequence of much smaller sub-problems by developing an adaptive classifier updating method and an active class selection strategy. Specifically, we iteratively update the separating hyperplane using only the training samples from a limited number of selected classes, as determined by how well the current separating hyperplane separates them. Comprehensive experiments on three large-scale datasets demonstrate that our approach significantly accelerates the training of the two state-of-the-art methods while achieving comparable prediction performance in terms of both classification accuracy and testing speed.
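The active-selection idea described above can be sketched in code. This is a minimal, hypothetical illustration, not the authors' implementation: it assumes a linear SVM at a single hierarchy node, a placeholder subgradient solver in place of whatever solver the paper uses, and one plausible reading of the selection rule, namely that classes whose samples already clear the margin are treated as well separated and dropped from the active set, so each retraining round touches only the remaining classes. The function names (`learn_node_hyperplane`, `train_linear_svm`) and the threshold parameter are invented for this sketch.

```python
import numpy as np

def train_linear_svm(X, y, lr=0.1, reg=1e-3, epochs=200):
    """Placeholder solver: plain subgradient descent on the L2-regularized hinge loss."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1.0  # samples violating the margin
        grad_w = reg * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def learn_node_hyperplane(X_by_class, labels_by_class, n_iter=5, margin_thresh=1.0):
    """Iteratively refit the node's hyperplane on a shrinking active set of classes.

    X_by_class:      dict class_id -> (n_c, d) sample matrix
    labels_by_class: dict class_id -> (n_c,) labels in {+1, -1} (the node's split side)
    """
    active = set(X_by_class)  # start with every class in the sub-problem
    w, b = None, 0.0
    for _ in range(n_iter):
        # Train only on samples of currently active classes -> a much smaller sub-problem.
        X = np.vstack([X_by_class[c] for c in active])
        y = np.concatenate([labels_by_class[c] for c in active])
        w, b = train_linear_svm(X, y)
        # Assumption: a class is "well separated" once all its samples clear the margin;
        # such classes are dropped, and only the rest drive the next update.
        next_active = {c for c in active
                       if (labels_by_class[c] * (X_by_class[c] @ w + b)).min() < margin_thresh}
        if not next_active:
            break  # every class is well separated by the current hyperplane
        active = next_active
    return w, b
```

In this toy form, the cost of each round scales with the number of still-active classes rather than the full class count, which mirrors the sub-problem decomposition the abstract describes.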