Recently, the gradient (subgradient) projection method, especially when combined with Nesterov's acceleration, has attracted increasing attention and achieved great success on constrained optimization problems arising in machine learning, data mining, and signal processing. A critical step in the gradient projection method is efficiently projecting a vector onto a constraint set. In this paper, we propose a unified method called Piecewise Root Finding (PRF) to efficiently compute Euclidean projections onto three typical constraint sets: the ℓ1-ball, the Elastic Net (EN), and the Intersection of a Hyperplane and a Halfspace (IHH). In our PRF method, we first formulate the Euclidean projection problem as a root-finding problem. A piecewise root-finding algorithm is then applied to locate the root, with guaranteed global convergence. Finally, the Euclidean projection is obtained in closed form as a function of the root. Moreover, exploiting the sparsity of the projected vector reduces the computational cost of projection onto the ℓ1-ball and EN. Empirical studies comparing PRF with several state-of-the-art algorithms for Euclidean projections onto these three constraint sets demonstrate its efficiency. In addition, we apply our efficient Euclidean projection algorithm (PRF) within Gradient Projection with Nesterov's Method (GPNM), which efficiently solves the popular logistic regression problem under an ℓ1-ball, EN, or IHH constraint. Experimental results on real-world data sets indicate that GPNM converges quickly.
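To illustrate the root-finding view of Euclidean projection, the sketch below projects a vector onto the ℓ1-ball by solving g(λ) = Σᵢ max(|vᵢ| − λ, 0) − z = 0 and then soft-thresholding at the root. This is a minimal illustration using plain bisection on the piecewise-linear function g, not the paper's exact PRF algorithm; the function name and tolerance are our own choices.

```python
import numpy as np

def project_l1_ball(v, z=1.0, tol=1e-10):
    """Euclidean projection of v onto the l1-ball {x : ||x||_1 <= z}.

    Root-finding view: the KKT conditions reduce the projection to finding
    the root of g(lam) = sum_i max(|v_i| - lam, 0) - z, a piecewise-linear,
    non-increasing function of lam. Here the root is located by bisection
    (an illustrative substitute for the paper's PRF scheme); the projection
    is then soft-thresholding of v at the root.
    """
    u = np.abs(v)
    if u.sum() <= z:
        return v.copy()          # v is already inside the ball
    lo, hi = 0.0, u.max()        # g(0) > 0 and g(max|v_i|) = -z < 0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if np.maximum(u - lam, 0.0).sum() > z:
            lo = lam             # g(lam) > 0: threshold still too small
        else:
            hi = lam             # g(lam) <= 0: threshold large enough
    lam = 0.5 * (lo + hi)
    return np.sign(v) * np.maximum(u - lam, 0.0)

# Example: projecting [3, -4, 1] onto the l1-ball of radius 2
w = project_l1_ball(np.array([3.0, -4.0, 1.0]), z=2.0)
```

For this example the root is λ = 2.5, giving the projection (0.5, −1.5, 0); note the third component is zeroed out, which is the sparsity the abstract exploits to cut the projection cost.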