Traditional locality-sensitive hashing (LSH) techniques tackle the curse of explosive data scale by guaranteeing that similar samples are projected into nearby hash buckets. Despite the success of LSH on numerous vision tasks such as image retrieval and object matching, its potential in large-scale optimization has only recently been realized. In this paper we further advance this nascent area. We first identify two common operations, min and max inner product search, that constitute the computational bottleneck of numerous optimization algorithms at large scale. We then propose a hashing scheme for accelerating min/max inner product search that exploits properties of the order statistics of statistically correlated random vectors. Compared with other schemes, our algorithm achieves higher recall at lower computational cost. The effectiveness and efficiency of the proposed method are corroborated by theoretical analysis and several important applications. In particular, we use the proposed hashing scheme to perform approximate ℓ1-regularized least squares with dictionaries containing millions of elements, a scale beyond the capability of currently known exact solvers. We emphasize, however, that the focus of this paper is not a new hashing scheme for the approximate nearest neighbor problem; rather, it explores a new application of hashing techniques and proposes a general framework for accelerating a wide variety of optimization procedures in computer vision.
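For intuition, the sketch below illustrates the general flavor of such a scheme: hash keys derived from the rank order (here, simply the argmax) of projections onto shared Gaussian vectors, so that points with large mutual inner product tend to collide in the same bucket. This is an illustrative reconstruction under our own assumptions, not the paper's exact algorithm; every function name and parameter choice below is hypothetical.

```python
# Illustrative sketch only: LSH-style candidate generation for approximate
# max inner product search, keyed on the argmax of Gaussian projections
# (a simple stand-in for concomitant rank order statistics). All names
# and parameter choices here are assumptions, not the paper's algorithm.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

def build_tables(X, n_tables=8, n_projections=32):
    """X: (n, d) data matrix. Returns Gaussian projection sets and hash tables."""
    projections = rng.standard_normal((n_tables, n_projections, X.shape[1]))
    tables = [defaultdict(list) for _ in range(n_tables)]
    for t in range(n_tables):
        # Hash key of a point = index of its largest projection in table t.
        keys = np.argmax(X @ projections[t].T, axis=1)
        for i, k in enumerate(keys):
            tables[t][int(k)].append(i)
    return projections, tables

def approx_max_inner_product(q, X, projections, tables):
    """Collect colliding candidates, then rank the short list exactly."""
    candidates = set()
    for t, table in enumerate(tables):
        key = int(np.argmax(projections[t] @ q))
        candidates.update(table.get(key, []))
    if not candidates:
        return None
    cand = np.fromiter(candidates, dtype=int)
    return cand[np.argmax(X[cand] @ q)]  # exact check on candidates only

# Usage: approximate the dictionary element maximizing the inner product with q.
X = rng.standard_normal((10000, 64))
X /= np.linalg.norm(X, axis=1, keepdims=True)
q = rng.standard_normal(64)
projections, tables = build_tables(X)
print(approx_max_inner_product(q, X, projections, tables))
```

In this sketch, a min inner product query can reuse the same tables by hashing the negated query -q to generate candidates and then ranking the resulting short list with argmin instead of argmax.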