A proximal trust-region algorithm for column generation stabilization
Computers and Operations Research
We study a class of generalized bundle methods in which the stabilizing term can be any closed convex function satisfying certain properties. This setting covers several algorithms from the literature that have so far been regarded as distinct. Under different hypotheses on the stabilizing term and/or the function to be minimized, we prove finite termination, asymptotic convergence, and finite convergence to an optimal point, with or without limits on the number of serious steps and/or requiring the proximal parameter to go to infinity. The convergence proofs leave a high degree of freedom in the crucial implementation choices of the algorithm, i.e., the management of the bundle of subgradients ($\beta$-strategy) and of the proximal parameter ($t$-strategy). We extensively exploit a dual view of bundle methods, which are shown to be a dual ascent approach to a nonlinear problem in an appropriate dual space, where nonlinear subproblems are approximately solved at each step with an inner linearization approach. This allows us to precisely characterize the changes in the subproblems at serious steps, since the dual problem is not tied to the local concept of the $\varepsilon$-subdifferential. For some of the proofs, a generalization of inf-compactness, called $*$-compactness, is required; this concept is related to that of asymptotically well-behaved functions.
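To make the ingredients of the abstract concrete, the following is a minimal one-dimensional sketch of a proximal bundle method, not the paper's generalized algorithm: the stabilizing term is fixed to the classical quadratic $|y - x|^2/(2t)$, the $\beta$-strategy naively keeps all cuts, and the $t$-strategy keeps $t$ constant. All names and parameter values (`f`, `subgrad`, `m`, `t`) are illustrative choices, not notation from the paper.

```python
def proximal_bundle_1d(f, subgrad, x0, t=1.0, m=0.1, tol=1e-8, max_iter=200):
    """Minimize a convex f: R -> R given a subgradient oracle (sketch)."""
    x = x0                                  # current stability center
    bundle = [(x, f(x), subgrad(x))]        # cuts (x_j, f(x_j), g_j)

    def model(y):                           # cutting-plane model of f
        return max(fj + gj * (y - xj) for xj, fj, gj in bundle)

    def master(y):                          # stabilized master objective
        return model(y) + (y - x) ** 2 / (2.0 * t)

    for _ in range(max_iter):
        # Solve the master problem by ternary search (it is convex in 1D).
        lo, hi = x - 100.0, x + 100.0
        for _ in range(200):
            m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
            if master(m1) < master(m2):
                hi = m2
            else:
                lo = m1
        y = (lo + hi) / 2
        predicted = f(x) - master(y)        # expected decrease of the master
        if predicted <= tol:
            break                           # model matches f at the center
        fy = f(y)
        bundle.append((y, fy, subgrad(y)))  # naive beta-strategy: keep all cuts
        if f(x) - fy >= m * predicted:      # sufficient-descent test
            x = y                           # serious step: move the center
        # otherwise: null step, only the model is enriched
    return x
```

For example, minimizing $f(x) = |x - 3|$ from `x0=0.0` drives the stability center to the optimum $x^* = 3$ through a sequence of serious steps:

```python
x_star = proximal_bundle_1d(lambda x: abs(x - 3),
                            lambda x: 1.0 if x >= 3 else -1.0,
                            x0=0.0)
# x_star is approximately 3.0
```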