The majority of current methods for object classification use the one-against-rest training scheme. We argue that this strategy becomes problematic when applied to a large number of classes: as the number of classes grows, the negative class becomes a very large and complicated collection of images. The resulting classification problem is then extremely unbalanced, and kernel SVM classifiers trained on such sets require long training times and are slow at prediction. To address these problems, we propose to treat the negative class as a background and characterize it by a prior distribution. Further, we propose to construct "hybrid" classifiers, which are trained to separate this distribution from the samples of the positive class. A typical classifier first projects the inputs (by a function which may be non-linear) to a one-dimensional space and then thresholds this projection. Theoretical results and empirical evaluation suggest that, after projection, the background has a relatively simple distribution, which is much easier to parameterize and work with. Our results show that hybrid classifiers offer an advantage over SVM classifiers in both performance and complexity, especially when the negative (background) class is large.
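As a rough illustration of the hybrid-classifier idea described above, the minimal sketch below projects inputs to one dimension, models the projected background scores with a simple Gaussian "prior", and thresholds a log-likelihood ratio against the positive class. This is a sketch under simplifying assumptions, not the authors' actual formulation: the class name HybridClassifier, the crude projection direction, and the Gaussian form of both classes are hypothetical choices made for illustration only.

```python
# Illustrative sketch (assumed, not the paper's method): parameterize the
# background by a 1-D Gaussian over projected scores and threshold a
# log-likelihood ratio against the positive class.
import numpy as np


class HybridClassifier:
    def __init__(self, w):
        self.w = np.asarray(w, dtype=float)  # 1-D projection direction

    def fit(self, X_pos, X_bg):
        # Project positive and background samples onto a single dimension.
        s_pos = X_pos @ self.w
        s_bg = X_bg @ self.w
        # Parameterize each projected set by a 1-D Gaussian (assumed form).
        self.mu_pos, self.sigma_pos = s_pos.mean(), s_pos.std() + 1e-8
        self.mu_bg, self.sigma_bg = s_bg.mean(), s_bg.std() + 1e-8
        return self

    @staticmethod
    def _log_gauss(s, mu, sigma):
        return -0.5 * ((s - mu) / sigma) ** 2 - np.log(sigma)

    def decision_function(self, X):
        s = X @ self.w
        # log p(s | positive) - log p(s | background)
        return (self._log_gauss(s, self.mu_pos, self.sigma_pos)
                - self._log_gauss(s, self.mu_bg, self.sigma_bg))

    def predict(self, X, threshold=0.0):
        return self.decision_function(X) > threshold


# Toy usage with synthetic data.
rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 1.0, size=(100, 5))   # positive class
X_bg = rng.normal(0.0, 1.0, size=(2000, 5))   # large background class
w = X_pos.mean(axis=0) - X_bg.mean(axis=0)    # crude projection direction
clf = HybridClassifier(w).fit(X_pos, X_bg)
print(clf.predict(X_pos[:5]))
```

In this toy setting the large background class influences the classifier only through the two scalars of its projected Gaussian, which loosely mirrors the abstract's point that parameterizing the background after projection can be far cheaper than training a kernel SVM against all negative samples.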