Scalable Large-Margin Mahalanobis Distance Metric Learning
IEEE Transactions on Neural Networks
A distance metric that accurately reflects the intrinsic characteristics of the data is critical for visual recognition tasks. An effective way to obtain such a metric is to learn it from a set of training samples. In this work, we propose a fast and scalable algorithm for learning a Mahalanobis distance. By employing the principle of margin maximization to achieve better generalization performance, the algorithm formulates metric learning as a convex optimization problem with a positive semidefinite (PSD) matrix variable. Based on the theorem that a PSD matrix with trace one can always be represented as a convex combination of rank-one matrices, our algorithm employs a differentiable loss function and solves this convex optimization problem with gradient descent methods. The algorithm not only naturally maintains the PSD constraint on the matrix variable, which is essential for metric learning, but also significantly reduces computational overhead, so it scales much better as the dimensionality of the feature vectors grows. Experiments on benchmark data sets show that, compared with existing metric learning algorithms, our algorithm achieves higher classification accuracy at much lower computational cost.
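The trace-one representation mentioned in the abstract suggests a simple iterative scheme. Below is a minimal sketch, not the authors' exact algorithm, of large-margin Mahalanobis metric learning over the trace-one PSD cone: it uses a smooth hinge surrogate as the differentiable loss and a Frank-Wolfe-style rank-one update to stay inside the feasible set. The triplet construction, the specific loss, the step-size rule, and all function and variable names are illustrative assumptions.

```python
# Sketch of large-margin Mahalanobis metric learning on the trace-one
# PSD cone. NOT the paper's exact algorithm: the loss, triplets, and
# Frank-Wolfe-style update are assumptions for illustration. It shows
# the key idea from the abstract: a trace-one PSD matrix is a convex
# combination of rank-one matrices, so each update adds one rank-one
# term and the PSD constraint holds by construction, with no projection.
import numpy as np

def smooth_hinge(z):
    # Differentiable surrogate for the hinge loss (an assumption; the
    # abstract only states that a differentiable loss is used).
    return np.logaddexp(0.0, -z)

def smooth_hinge_grad(z):
    # d/dz log(1 + exp(-z)) = -1 / (1 + exp(z))
    return -1.0 / (1.0 + np.exp(z))

def learn_metric(anchors, positives, negatives, dim, n_iters=100):
    """Learn M (PSD, trace one) so that each triplet (a, p, n) has a
    large margin d_M(a, n) - d_M(a, p), where
    d_M(x, y) = (x - y)^T M (x - y)."""
    M = np.eye(dim) / dim          # feasible start: PSD, trace(M) = 1
    dp = anchors - positives       # differences to same-class points
    dn = anchors - negatives       # differences to different-class points
    for t in range(n_iters):
        # Per-triplet margins under the current metric.
        margins = np.einsum('ij,jk,ik->i', dn, M, dn) \
                - np.einsum('ij,jk,ik->i', dp, M, dp)
        g = smooth_hinge_grad(margins)
        # Gradient of the total loss w.r.t. M (a symmetric matrix):
        # sum_i g_i * (dn_i dn_i^T - dp_i dp_i^T).
        G = (dn * g[:, None]).T @ dn - (dp * g[:, None]).T @ dp
        # Frank-Wolfe step: the linear minimizer of <G, S> over the
        # trace-one PSD cone is v v^T, with v the eigenvector of -G
        # having the largest eigenvalue (eigh sorts ascending).
        _, V = np.linalg.eigh(-G)
        v = V[:, -1]
        step = 2.0 / (t + 2.0)     # standard Frank-Wolfe step size
        M = (1.0 - step) * M + step * np.outer(v, v)
    return M

if __name__ == "__main__":
    # Tiny synthetic demo with hypothetical data.
    rng = np.random.default_rng(0)
    a = rng.normal(size=(200, 10))
    p = a + 0.1 * rng.normal(size=(200, 10))   # nearby "same-class" points
    n = rng.normal(size=(200, 10))             # random "different-class" points
    M = learn_metric(a, p, n, dim=10)
    print("trace:", round(np.trace(M), 3),
          "min eigenvalue:", round(np.linalg.eigvalsh(M).min(), 6))
```

Because each iterate is a convex combination of the previous iterate and a rank-one matrix v v^T, M remains PSD with trace one by construction, which is precisely the property the abstract exploits to avoid expensive projections onto the PSD cone as the feature dimension grows.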