Naive Bayes Nearest Neighbor (NBNN) has been proposed as a powerful, learning-free, non-parametric approach to object classification. Its good performance is mainly due to the avoidance of a vector quantization step and the use of image-to-class comparisons, which yield good generalization. In this paper we study replacing the nearest neighbor part with more elaborate and robust (sparse) representations, as well as trading accuracy for speed for practical purposes. The representations investigated are k-Nearest Neighbors (kNN), Iterative Nearest Neighbors (INN) solving a constrained least squares (LS) problem, Local Linear Embedding (LLE), a Sparse Representation obtained by $l_1$-regularized LS ($SR_{l_1}$), and a Collaborative Representation obtained as the solution of an $l_2$-regularized LS problem ($CR_{l_2}$). In particular, the NIMBLE and K-DES descriptors proved viable alternatives to SIFT, and the NB$SR_{l_1}$ and NBINN classifiers provide significant improvements over NBNN, obtaining competitive results on the Scene-15, Caltech-101, and PASCAL VOC 2007 datasets, while remaining learning-free approaches (i.e., no parameters need to be learned).
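As a minimal sketch of the idea, the $CR_{l_2}$ representation has a closed-form ridge-regression solution, and plugging it into an NBNN-style image-to-class decision amounts to picking the class whose descriptor dictionary best reconstructs the query. The code below is an illustration under assumed conventions (function names, a regularization weight `lam`, and descriptors stored as dictionary columns are all our choices, not the authors'):

```python
import numpy as np

def collaborative_representation(A, y, lam=0.01):
    """CR_l2: solve min_x ||y - A x||^2 + lam ||x||^2 in closed form.

    A: (d, n) dictionary whose columns are class descriptors;
    y: (d,) query descriptor.  Solution: (A^T A + lam I)^{-1} A^T y.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def image_to_class_cr(class_dicts, y, lam=0.01):
    """NBNN-style image-to-class rule: assign y to the class whose
    dictionary yields the smallest reconstruction residual."""
    residuals = {}
    for label, A in class_dicts.items():
        x = collaborative_representation(A, y, lam)
        residuals[label] = np.linalg.norm(y - A @ x)
    return min(residuals, key=residuals.get)
```

Because the $l_2$-regularized problem is solved by a single linear system (rather than the iterative optimization needed for the $l_1$ case), this variant trades some sparsity for speed, which matches the speed-versus-performance trade-off discussed above.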