The nature of statistical learning theory
Matrix computations (3rd ed.)
Lectures on modern convex optimization: analysis, algorithms, and engineering applications
Convex Optimization
Online and batch learning of pseudo-metrics
ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning
Smooth minimization of non-smooth functions
Mathematical Programming: Series A and B
Distance Metric Learning for Large Margin Nearest Neighbor Classification
The Journal of Machine Learning Research
A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
SIAM Journal on Imaging Sciences
Distance metric learning with eigenvalue optimization
The Journal of Machine Learning Research
Learning a distance metric by empirical loss minimization
IJCAI'11: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Two
Distance metric learning (DML) has become a highly active research field in recent years. Bian and Tao (IEEE Trans. Neural Netw. Learn. Syst. 23(8) (2012) 1194-1205) presented a constrained empirical risk minimization (ERM) framework for DML. In this paper, we use a smooth approximation method to make their algorithm applicable to the non-differentiable hinge loss. We show that the objective function with hinge loss admits an equivalent non-smooth min-max representation, from which an approximate objective is derived. Unlike the original objective, the approximation is differentiable with a Lipschitz-continuous gradient, so Nesterov's optimal first-order method can be applied directly. Finally, the effectiveness of our method is evaluated on various UCI datasets.
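To illustrate the smoothing step, the following is a sketch in the spirit of Nesterov's smooth minimization of non-smooth functions; the paper's exact construction over the full DML objective may differ. The hinge loss has a min-max representation over an auxiliary variable u, and adding a strongly convex prox term in u yields a smooth surrogate with a closed form:

\[
\max(0,\, 1 - z) \;=\; \max_{0 \le u \le 1} u\,(1 - z),
\qquad
h_\mu(z) \;=\; \max_{0 \le u \le 1} \Big\{ u\,(1 - z) - \tfrac{\mu}{2}\, u^2 \Big\}
\;=\;
\begin{cases}
0, & z \ge 1, \\[2pt]
\dfrac{(1 - z)^2}{2\mu}, & 1 - \mu < z < 1, \\[4pt]
1 - z - \dfrac{\mu}{2}, & z \le 1 - \mu,
\end{cases}
\]

where \(\mu > 0\) is the smoothing parameter. The surrogate \(h_\mu\) is convex, differentiable with \((1/\mu)\)-Lipschitz gradient, and approximates the hinge loss uniformly within \(\mu/2\). A minimal runnable sketch of this smoothed loss and its gradient (the function name smoothed_hinge is hypothetical, not from the paper):

```python
import numpy as np

def smoothed_hinge(z, mu=0.1):
    """Nesterov-smoothed hinge loss h_mu(z) and its gradient.

    mu > 0 trades approximation accuracy (uniform error <= mu/2)
    against smoothness (the gradient is 1/mu-Lipschitz).
    """
    z = np.asarray(z, dtype=float)
    # Maximizer of u*(1 - z) - (mu/2)*u^2 over u in [0, 1].
    u = np.clip((1.0 - z) / mu, 0.0, 1.0)
    value = u * (1.0 - z) - 0.5 * mu * u**2
    grad = -u  # dh_mu/dz = -u*(z)
    return value, grad
```

Because the gradient is Lipschitz with a known constant, the surrogate can be plugged into any accelerated first-order scheme with step size \(\mu\) in place of the non-differentiable hinge term.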