We address the problem of large-scale image retrieval in a wide-baseline setting, where the database images matching any query image are seen from very different viewpoints. In such settings, traditional bag-of-visual-words approaches are not equipped to handle the significant feature-descriptor transformations that occur under large camera motions. In this paper we present a novel approach that includes an offline feature-matching step, which allows us to observe how local descriptors transform under large camera motions. These observations are encoded as a graph in the quantized feature space. This graph can then be used directly within a soft-assignment feature quantization scheme for image retrieval.
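The idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`build_transition_graph`, `soft_assign`), the symmetric co-occurrence counting, and the mixing weight `alpha` are all assumptions. It assumes descriptors have already been matched offline across wide-baseline image pairs and quantized to visual-word ids; the graph then records how often one word turns into another under viewpoint change, and soft assignment spreads a query descriptor's weight from its nearest word to the words observed as its transformations.

```python
import numpy as np

def build_transition_graph(word_pairs, vocab_size):
    """Offline step (sketch): word_pairs are (word_i, word_j) ids of
    descriptors matched across wide-baseline image pairs. Counts how often
    word i is observed to transform into word j under large camera motion."""
    counts = np.zeros((vocab_size, vocab_size))
    for wi, wj in word_pairs:
        counts[wi, wj] += 1
        counts[wj, wi] += 1  # assumed symmetric for this sketch
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # avoid division by zero for unseen words
    return counts / row_sums       # row-normalised transition weights

def soft_assign(descriptor, vocab, graph, alpha=0.5):
    """Query-time step (sketch): hard-quantize to the nearest visual word,
    then spread weight to words reachable in the transformation graph.
    alpha is a hypothetical mixing parameter."""
    w = int(np.argmin(np.linalg.norm(vocab - descriptor, axis=1)))
    weights = np.zeros(len(vocab))
    weights[w] = 1.0
    weights += alpha * graph[w]    # neighbours observed under viewpoint change
    return weights / weights.sum()
```

A descriptor is thus represented not by a single word but by a small distribution over words, so a database image whose descriptors were quantized to a "transformed" word can still score against the query.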