Following the success of hashing methods for multidimensional indexing, a growing number of works embed visual feature spaces in compact hash codes. Such approaches are not an alternative to index structures but a complementary way to reduce both memory usage and distance-computation cost. Several data-dependent hash functions have notably been proposed to fit the data distribution closely and to provide better selectivity than the usual random projections such as LSH. However, the improvement holds only for relatively small hash codes, up to 64 or 128 bits. As discussed in the paper, this is mainly due to the lack of independence between the produced hash functions. We introduce a new hash function family that attempts to solve this issue in any kernel space. Rather than boosting the collision probability of close points, our method focuses on data scattering. By training on purely random splits of the data, regardless of the closeness of the training samples, it is indeed possible to generate consistently more independent hash functions. At the same time, the use of large margin classifiers maintains good generalization performance. Experiments show that our new Random Maximum Margin Hashing scheme (RMMH) outperforms four state-of-the-art hashing methods, notably in kernel spaces.
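The core idea can be sketched in a few lines: each hash bit comes from a max-margin separator trained on a small, purely random balanced labeling of sampled points. The sketch below is an illustration of that idea under stated assumptions, not the authors' implementation — it works in the linear (input) space rather than a general kernel space, uses a simple Pegasos-style subgradient solver in place of a full SVM, and the names `train_rmmh`, `hash_codes`, and the sample size `m` are hypothetical choices for this example.

```python
import numpy as np

def train_rmmh(data, n_bits, m=16, rng=None):
    """RMMH-style sketch: for each bit, draw m points at random, label half
    of them +1 and half -1 arbitrarily (a purely random split), and fit a
    max-margin linear separator to that tiny training set.

    Assumption: a Pegasos-style SGD stands in for a proper SVM solver."""
    rng = np.random.default_rng(rng)
    n, d = data.shape
    hyperplanes = []
    for _ in range(n_bits):
        idx = rng.choice(n, size=m, replace=False)
        X = data[idx]
        # Random balanced labeling, independent of point closeness.
        y = np.array([1.0] * (m // 2) + [-1.0] * (m // 2))
        Xb = np.hstack([X, np.ones((m, 1))])  # append 1 for the bias term
        w = np.zeros(d + 1)
        lam = 0.01
        for t in range(1, 201):  # Pegasos subgradient steps
            i = rng.integers(m)
            eta = 1.0 / (lam * t)
            if (y[i] * Xb[i]) @ w < 1:  # margin violation: hinge-loss step
                w = (1 - eta * lam) * w + eta * y[i] * Xb[i]
            else:                       # only shrink (regularization)
                w = (1 - eta * lam) * w
        hyperplanes.append(w)
    return np.array(hyperplanes)

def hash_codes(data, hyperplanes):
    """Binarize points by the side of each learned hyperplane they fall on."""
    Xb = np.hstack([data, np.ones((len(data), 1))])
    return (Xb @ hyperplanes.T > 0).astype(np.uint8)
```

Because each separator is trained on an independent random split, the resulting bits tend to be far less correlated than those of data-dependent schemes that all chase the same neighborhood structure, while the large-margin objective keeps each individual bit a well-generalizing partition of the space.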