Hash function learning has recently received increasing attention for fast search over large-scale data. However, popular existing learning-based hashing methods are batch-mode models: they incur a heavy computational cost when learning an optimal model on a large volume of labelled data, and they cannot handle data that arrives sequentially. In this paper, we address this problem by developing an online hashing learning algorithm that adapts the hash model to each new pair of data. At the same time, the newly updated hash model is penalized for deviating from the last learned model, so that important information learned in previous rounds is retained. We also derive a tight bound on the cumulative loss of the proposed online learning algorithm. Experimental results demonstrate the superiority of the proposed online hashing model in searching for both metric distance neighbors and semantically similar neighbors.
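To make the update scheme concrete, the following is a minimal sketch of a passive-aggressive-style online update for a linear hash model h(x) = sign(Wx), in the spirit the abstract describes: the model stays unchanged when a new pair incurs no loss, and otherwise takes a small step whose size is capped so the new model stays close to the last learned one. The hinge-style pair loss, the parameter names (`C`, `lr`, `margin`), and the relaxed inner-product similarity are all illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def pair_loss(W, x1, x2, s, margin=1.0):
    """Hinge-style loss on the relaxed code similarity of one pair.

    s = +1 for a similar pair, -1 for a dissimilar one. The sign() in the
    hash function is relaxed to the inner product of real-valued
    projections, normalized by the number of hash bits.
    """
    sim = float(np.dot(W @ x1, W @ x2)) / W.shape[0]
    return max(0.0, margin - s * sim)

def online_update(W, x1, x2, s, C=0.1, lr=0.05, margin=1.0):
    """One online round: passive if the pair incurs no loss, otherwise an
    aggressive step scaled by the loss and capped by C, which keeps the
    updated model close to the previous one."""
    loss = pair_loss(W, x1, x2, s, margin)
    if loss == 0.0:
        return W  # passive: the last model already handles this pair
    # Gradient of the relaxed similarity term with respect to W.
    grad = -s * (np.outer(W @ x1, x2) + np.outer(W @ x2, x1)) / W.shape[0]
    tau = min(C, loss)  # cap the step size (penalty toward the last model)
    return W - lr * tau * grad

# Toy run: feed one similar pair repeatedly and watch its loss shrink.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16)) * 0.1      # 8-bit codes for 16-dim inputs
x = rng.standard_normal(16)
x_sim = x + 0.01 * rng.standard_normal(16)  # near-duplicate: similar pair
loss_before = pair_loss(W, x, x_sim, s=+1)
for _ in range(50):
    W = online_update(W, x, x_sim, s=+1)
loss_after = pair_loss(W, x, x_sim, s=+1)
```

The aggressiveness parameter `C` plays the role of the penalty weight: a small `C` keeps each round's model close to the previous one (retaining earlier information), while a large `C` lets each new pair dominate the update.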