We consider learning formulations with non-convex objective functions, which arise frequently in practical applications. Two approaches are commonly taken to such problems. The first uses heuristic methods, such as gradient descent, that find only a local minimum; a drawback of this approach is the lack of a theoretical guarantee that the local minimum is a good solution. The second replaces the problem with a convex relaxation, such as L1-regularization, which can be solved exactly under suitable conditions but often yields a sub-optimal solution in practice. This paper attempts to remedy this gap between theory and practice. In particular, we present a multi-stage convex relaxation scheme for solving problems with non-convex objective functions. For learning formulations with sparse regularization, we analyze the behavior of a specific instance of this scheme and show that, under appropriate conditions, the local solution it obtains is superior to the global solution of the standard L1 convex relaxation for learning sparse targets.
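To make the scheme concrete, a common instantiation of multi-stage convex relaxation for sparse regularization is reweighted L1 minimization for a capped-L1 penalty: each stage solves a weighted Lasso in which coordinates whose current estimate already exceeds a threshold are left unpenalized, and stage one reduces to the standard Lasso. The sketch below assumes that instantiation; the function names, the plain coordinate-descent solver, and the hyperparameter values (lam, theta, the number of stages) are illustrative choices, not the paper's reference implementation.

```python
import numpy as np

def weighted_lasso_cd(X, y, penalties, n_iters=200, tol=1e-6):
    """Coordinate descent for min_w (1/2n)||y - Xw||^2 + sum_j penalties[j]*|w_j|."""
    n, d = X.shape
    w = np.zeros(d)
    r = y.copy()                        # residual y - Xw
    col_sq = (X ** 2).sum(axis=0) / n   # per-coordinate curvature
    for _ in range(n_iters):
        max_delta = 0.0
        for j in range(d):
            if col_sq[j] == 0.0:
                continue
            w_old = w[j]
            # correlation of coordinate j with its partial residual
            rho = X[:, j] @ r / n + col_sq[j] * w_old
            # soft-thresholding update for the weighted L1 penalty
            w[j] = np.sign(rho) * max(abs(rho) - penalties[j], 0.0) / col_sq[j]
            if w[j] != w_old:
                r -= X[:, j] * (w[j] - w_old)
                max_delta = max(max_delta, abs(w[j] - w_old))
        if max_delta < tol:
            break
    return w

def multi_stage_capped_l1(X, y, lam=0.1, theta=0.5, n_stages=5):
    """Multi-stage convex relaxation for the capped-L1 penalty
    lam * sum_j min(|w_j|, theta): each stage solves a reweighted Lasso
    in which coordinates already above theta are unpenalized."""
    d = X.shape[1]
    penalties = np.full(d, lam)         # stage 1 is the standard Lasso
    for _ in range(n_stages):
        w = weighted_lasso_cd(X, y, penalties)
        # re-derive the weights: keep the full penalty only on small coordinates
        penalties = np.where(np.abs(w) < theta, lam, 0.0)
    return w

if __name__ == "__main__":
    # synthetic sparse-recovery example (illustrative values)
    rng = np.random.default_rng(0)
    n, d, k = 100, 200, 5
    X = rng.standard_normal((n, d))
    w_true = np.zeros(d)
    w_true[:k] = 1.0
    y = X @ w_true + 0.1 * rng.standard_normal(n)
    w_hat = multi_stage_capped_l1(X, y, lam=0.1, theta=0.5)
```

The design choice behind the reweighting is that removing the penalty from coordinates that have grown large reduces the shrinkage bias the Lasso places on the true support, which is the mechanism by which the multi-stage local solution can improve on the one-stage global L1 solution.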