Recent years have witnessed the growing popularity of hash function learning for large-scale data search. Although most existing hashing-based methods achieve high accuracy, they are passive: they assume that all labelled points are provided in advance. In this paper, we consider updating a hashing model as labelled data arrive incrementally, while responding to users quickly; we call this smart hashing update (SHU). To keep the response time short, SHU selects a small set of hash functions to relearn and updates only the corresponding hash bits of all data points. Specifically, we propose two selection methods for performing efficient and effective updates. To reduce the time needed to reach a stable hashing model, we also propose an accelerated method that further reduces the number of interactions between the user and the computer. We evaluate our proposals on two benchmark data sets. The experimental results show that it is not necessary to update all hash bits to adapt the model to new input data, and that we obtain similar or better performance than updating in batch mode.
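The selective-update idea can be sketched in code. The following is a minimal illustration under assumed details, not the authors' actual selection criterion: it scores each hash bit by how inconsistent its induced ±1 agreement matrix is with a label-derived pairwise similarity matrix (a common objective in supervised hashing), and returns the k worst bits as candidates for relearning. The function name and the squared-error scoring rule are illustrative assumptions.

```python
import numpy as np

def select_bits_to_update(codes, S, k):
    """Return indices of the k hash bits least consistent with the labels.

    codes: (n, b) array of hash codes in {-1, +1}.
    S:     (n, n) label-derived similarity in {-1, +1}
           (+1 = same class, -1 = different class).
    Hypothetical scoring rule: for each bit j, compare the bit's
    agreement matrix (outer product of its column with itself, which
    is +1 when two points share the bit value) against S.
    """
    n, b = codes.shape
    errors = np.empty(b)
    for j in range(b):
        agree = np.outer(codes[:, j], codes[:, j])
        errors[j] = np.sum((agree - S) ** 2)
    # The k most inconsistent bits would be relearned; the rest stay fixed.
    return np.argsort(errors)[-k:]
```

Only the selected columns of the code matrix would then be recomputed, which is what keeps the per-round response time low compared with relearning all b hash functions.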