Vector optimization problems are a significant extension of multiobjective optimization and have a large number of real-life applications. In vector optimization the preference order is induced by an arbitrary closed convex cone, rather than the nonnegative orthant. We consider extensions of the projected gradient method to vector optimization, which work directly with vector-valued objectives, without resorting to scalarization. We provide a direction that adequately substitutes for the projected gradient, and establish results that mirror those available in the scalar-valued case: stationarity of the cluster points (if any) without convexity assumptions, and convergence of the full sequence generated by the algorithm to a weakly efficient point in the convex case, under mild assumptions. We also prove that our results still hold when the search direction is only approximately computed.
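To make the idea concrete, here is a minimal, hedged sketch of a projected-gradient-type iteration for the special case of two objectives with the Pareto cone K = R^2_+ and a box feasible set. It is a simplification, not the paper's exact scheme: the descent direction is taken as the negative of the min-norm element of the convex hull of the two gradients (the Fliege–Svaiter steepest-descent direction for m = 2), and the trial point is then projected onto the box. The objectives, step size, and box bounds are illustrative assumptions.

```python
# Sketch: projected-gradient-type step for two objectives on a box.
# Simplified variant (K = R^2_+, min-norm common descent direction,
# then Euclidean projection onto the box); illustrative only.

def grad_f1(x):  # gradient of f1(x) = (x0 - 1)^2 + x1^2
    return [2 * (x[0] - 1), 2 * x[1]]

def grad_f2(x):  # gradient of f2(x) = x0^2 + (x1 - 1)^2
    return [2 * x[0], 2 * (x[1] - 1)]

def min_norm_combo(g1, g2):
    # Min-norm element of conv{g1, g2}: minimize ||lam*g1 + (1-lam)*g2||^2
    # over lam in [0, 1]; its negative is a common descent direction,
    # and it vanishes exactly at (Pareto-)stationary points.
    diff = [a - b for a, b in zip(g1, g2)]          # g1 - g2
    denom = sum(d * d for d in diff)
    if denom == 0.0:
        lam = 0.5                                    # g1 == g2
    else:
        lam = -sum(g2[i] * diff[i] for i in range(2)) / denom
        lam = max(0.0, min(1.0, lam))                # clamp to the simplex
    return [lam * g1[i] + (1 - lam) * g2[i] for i in range(2)]

def project_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^2 is componentwise clipping.
    return [max(lo, min(hi, xi)) for xi in x]

def step(x, t=0.25):
    v = min_norm_combo(grad_f1(x), grad_f2(x))
    return project_box([x[i] - t * v[i] for i in range(2)])

x = [0.0, 0.0]
for _ in range(50):
    x = step(x)
# x tends to (0.5, 0.5), a weakly efficient point of (f1, f2) on the box;
# the min-norm direction vanishes there (stationarity).
```

With these two quadratics the Pareto set is the segment between (1, 0) and (0, 1); starting from the origin, the iterates converge to (0.5, 0.5), where the min-norm convex combination of the gradients is zero, matching the stationarity of cluster points described in the abstract.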