The success of many machine learning and pattern recognition methods relies heavily on the identification of an appropriate distance metric on the input data. It is often beneficial to learn such a metric from the training data rather than using a default one such as the Euclidean distance. In this work, we propose a boosting-based technique, termed BOOSTMETRIC, for learning a quadratic Mahalanobis distance metric. Learning a valid Mahalanobis metric requires enforcing the constraint that its matrix parameter remains positive semidefinite. Semidefinite programming is often used to enforce this constraint, but it scales poorly and is not easy to implement. BOOSTMETRIC is instead based on the observation that any positive semidefinite matrix can be decomposed into a nonnegative linear combination of trace-one, rank-one matrices. BOOSTMETRIC therefore uses rank-one positive semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting methods are easy to implement, efficient, and can accommodate various types of constraints. We extend traditional boosting algorithms in that the weak learner is a positive semidefinite matrix with trace and rank equal to one, rather than a classifier or regressor. Experiments on various data sets demonstrate that the proposed algorithms compare favorably to state-of-the-art methods in terms of classification accuracy and running time.
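The decomposition underlying BOOSTMETRIC can be verified directly: for a positive semidefinite matrix, the eigendecomposition yields nonnegative eigenvalues and orthonormal eigenvectors, and each outer product of a unit eigenvector with itself is a rank-one matrix with unit trace. The sketch below (an illustrative check, not the authors' implementation) demonstrates this with NumPy:

```python
import numpy as np

# Construct an arbitrary positive semidefinite matrix X = A A^T.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = A @ A.T

# Eigendecomposition of a PSD matrix: eigenvalues are nonnegative,
# eigenvectors are orthonormal.
eigvals, eigvecs = np.linalg.eigh(X)

# Each Z_i = u_i u_i^T is rank-one, and trace(Z_i) = ||u_i||^2 = 1.
Zs = [np.outer(eigvecs[:, i], eigvecs[:, i]) for i in range(X.shape[0])]

# X is recovered as a nonnegative combination: X = sum_i lambda_i Z_i.
X_rec = sum(lam * Z for lam, Z in zip(eigvals, Zs))
print(np.allclose(X, X_rec))  # True
```

In BOOSTMETRIC the logic runs in the other direction: rather than decomposing a known matrix, the algorithm builds the metric's matrix parameter incrementally, with each boosting iteration contributing one trace-one rank-one weak learner and a nonnegative weight, so positive semidefiniteness holds by construction.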