The right measure of similarity between examples is important in many areas of computer science. In particular, it is a critical component in example-based learning methods. Similarity is commonly defined in terms of a conventional distance function, but such a definition does not necessarily capture the inherent meaning of similarity, which tends to depend on the underlying task. We develop an algorithmic approach to learning similarity from examples of what objects are deemed similar according to the task-specific notion of similarity at hand, as well as optional negative examples. Our learning algorithm constructs, in a greedy fashion, an encoding of the data. This encoding can be seen as an embedding into a space where a weighted Hamming distance is correlated with the unknown similarity. This allows us to predict when two previously unseen examples are similar and, importantly, to efficiently search a very large database for examples similar to a query. This approach is tested on a set of standard machine learning benchmark problems. The model of similarity learned with our algorithm provides an improvement over standard example-based classification and regression. We also apply this framework to problems in computer vision: articulated pose estimation of humans from single images, articulated tracking in video, and matching image regions subject to generic visual similarity. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
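The retrieval side of the approach described above can be illustrated with a minimal sketch: examples are mapped to binary codes, and a database is searched under a weighted Hamming distance. Note that this is an assumption-laden illustration, not the thesis algorithm: the abstract's method learns both the embedding and the bit weights greedily from similar/dissimilar pairs, whereas here the thresholding embedding (`embed`) and the weight vector `w` are arbitrary stand-ins chosen only to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(X, thresholds):
    """Map real-valued features to binary codes by per-dimension
    thresholding -- one simple family of embedding functions.
    (Illustrative stand-in; the learned encoding would be task-specific.)"""
    return (X > thresholds).astype(np.uint8)

def weighted_hamming(code, codes, w):
    """Weighted Hamming distance between one code and each row of `codes`:
    the sum of weights of the bits on which the codes disagree."""
    return ((code != codes) * w).sum(axis=1)

# Toy database: 1000 examples with 16-dimensional features.
X = rng.normal(size=(1000, 16))
thresholds = np.median(X, axis=0)   # illustrative threshold choice
w = rng.uniform(0.5, 1.5, size=16)  # stand-in for learned bit weights

codes = embed(X, thresholds)
query = embed(rng.normal(size=(1, 16)), thresholds)[0]

# Retrieve the 5 database examples nearest to the query
# under the weighted Hamming metric.
d = weighted_hamming(query, codes, w)
nearest = np.argsort(d)[:5]
```

Because the codes are binary, the distance reduces to cheap bitwise comparisons plus a weighted sum, which is what makes searching a very large database efficient in this framework.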