Content-Based Image Retrieval at the End of the Early Years
IEEE Transactions on Pattern Analysis and Machine Intelligence
Object Recognition from Local Scale-Invariant Features
ICCV '99 Proceedings of the International Conference on Computer Vision - Volume 2
Scale & Affine Invariant Interest Point Detectors
International Journal of Computer Vision
Convex Optimization
Learning the Kernel Matrix with Semidefinite Programming
The Journal of Machine Learning Research
CVPRW '04 Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'04) - Volume 12
Object Categorization by Learned Universal Visual Dictionary
ICCV '05 Proceedings of the Tenth IEEE International Conference on Computer Vision - Volume 2
Learning sparse metrics via linear programming
Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining
Information-theoretic metric learning
Proceedings of the 24th international conference on Machine learning
Low-Rank Kernel Learning with Bregman Matrix Divergences
The Journal of Machine Learning Research
A scalable algorithm for learning a Mahalanobis distance metric
ACCV'09 Proceedings of the 9th Asian conference on Computer Vision - Volume Part III
A comparison of methods for multiclass support vector machines
IEEE Transactions on Neural Networks
Random forests for metric learning with implicit pairwise position dependence
Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining
Perceptual relativity-based local hyperplane classification
Neurocomputing
Training Mahalanobis kernels by linear programming
ICANN'12 Proceedings of the 22nd international conference on Artificial Neural Networks and Machine Learning - Volume Part II
For many machine learning algorithms, such as k-Nearest Neighbor (k-NN) classification and k-means clustering, success often depends heavily on the metric used to compute distances between data points. An effective way to define such a metric is to learn it from a set of labeled training samples. In this work, we propose a fast and scalable algorithm for learning a Mahalanobis distance metric. The Mahalanobis metric can be viewed as the Euclidean metric applied to a linear transformation of the input data. Employing the principle of margin maximization to achieve better generalization, the algorithm formulates metric learning as a convex optimization problem whose unknown variable is a positive semidefinite (p.s.d.) matrix. Based on the fact that any trace-one p.s.d. matrix can be represented as a convex combination of rank-one matrices, our algorithm accommodates any differentiable loss function and solves the resulting optimization problem with a specialized gradient descent procedure. Throughout the optimization, the algorithm maintains the positive semidefiniteness of the matrix variable, which is essential for a Mahalanobis metric. Compared with conventional methods such as standard interior-point algorithms [2] or the special solver used in large margin nearest neighbor (LMNN) [24], our algorithm is much more efficient and scales better. Experiments on benchmark data sets show that our algorithm achieves classification accuracy comparable to state-of-the-art metric learning algorithms at reduced computational cost.
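The key structural idea in the abstract — that a trace-one p.s.d. matrix is a convex combination of rank-one matrices, so gradient-style updates can preserve positive semidefiniteness without an explicit projection — can be illustrated with a Frank-Wolfe-style update. The sketch below is not the paper's specialized solver; the toy data, the hinge-type pairwise loss, and the sampling scheme are all illustrative assumptions. It only demonstrates how updating `M` as a convex combination of rank-one, trace-one matrices keeps it p.s.d. with unit trace at every step.

```python
import numpy as np

# Hypothetical toy data: points with labels; pairs of same-label points
# should end up closer under the learned metric than different-label pairs.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = np.where(X[:, 0] > 0, 1, -1)

def pair_subgradient(M, X, y, margin=1.0, n_pairs=200):
    """Subgradient of a hinge-type loss over randomly sampled pairs:
    similar pairs are penalized when farther than `margin`,
    dissimilar pairs when closer than `margin` (illustrative choice)."""
    G = np.zeros_like(M)
    n = len(X)
    for _ in range(n_pairs):
        i, j = rng.integers(0, n, size=2)
        if i == j:
            continue
        d = X[i] - X[j]
        dist = d @ M @ d                       # squared Mahalanobis distance
        sign = 1.0 if y[i] == y[j] else -1.0
        if sign * (dist - margin) > 0:         # hinge is active
            G += sign * np.outer(d, d)
    return G

dim = X.shape[1]
M = np.eye(dim) / dim                          # trace-one p.s.d. starting point
for t in range(50):
    G = pair_subgradient(M, X, y)
    # Best rank-one, trace-one descent direction: leading eigenvector of -G.
    _, V = np.linalg.eigh(-G)
    v = V[:, -1]
    alpha = 2.0 / (t + 2)                      # standard Frank-Wolfe step size
    # Convex combination of p.s.d. trace-one matrices: p.s.d. and trace-one
    # are preserved exactly, with no projection step needed.
    M = (1 - alpha) * M + alpha * np.outer(v, v)
```

Because every iterate is a convex combination of `v vᵀ` terms and the initial matrix, `np.trace(M)` stays at 1 and all eigenvalues remain nonnegative throughout, which is the property the abstract's theorem exploits.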