The concave-convex procedure (CCCP) is an iterative algorithm that solves d.c. (difference of convex functions) programs as a sequence of convex programs. In machine learning, CCCP is extensively used in many learning algorithms, including sparse support vector machines (SVMs), transductive SVMs, and sparse principal component analysis. Although CCCP is widely used in many applications, its convergence behavior has received little dedicated attention. Yuille and Rangarajan analyzed its convergence in their original paper; however, we believe their analysis is incomplete. The convergence of CCCP can be derived from the convergence of the d.c. algorithm (DCA), proposed in the global optimization literature to solve general d.c. programs, whose proof relies on d.c. duality. In this note, we follow a different line of reasoning and show how Zangwill's global convergence theory of iterative algorithms provides a natural framework to prove the convergence of CCCP. This underlines Zangwill's theory as a powerful and general framework for addressing the convergence of iterative algorithms, having previously been used to prove the convergence of algorithms such as expectation-maximization and generalized alternating minimization. We provide a rigorous analysis of the convergence of CCCP by addressing two questions: when does CCCP find a local minimum or a stationary point of the d.c. program under consideration, and when does the sequence generated by CCCP converge? We also present an open problem concerning the local convergence of CCCP.
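For concreteness, the standard CCCP iteration for a differentiable d.c. objective can be sketched as follows; the notation ($u$, $v$, $x^{(k)}$) is ours and is not taken from the abstract. Writing the objective as $f(x) = u(x) - v(x)$ with $u$ and $v$ convex, each CCCP step linearizes the concave part $-v$ at the current iterate and minimizes the resulting convex surrogate:
\[
x^{(k+1)} \;\in\; \operatorname*{arg\,min}_{x}\; u(x) \;-\; \Big( v\big(x^{(k)}\big) + \nabla v\big(x^{(k)}\big)^{\top}\big(x - x^{(k)}\big) \Big).
\]
Since the linearization majorizes $-v$, the objective values $f(x^{(k)})$ are non-increasing along the iterates, which is the monotonicity property underlying the convergence analysis discussed above.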