In this work we propose a Cauchy-like method for solving smooth unconstrained vector optimization problems. When the partial order under consideration is the one induced by the nonnegative orthant, we recover the steepest descent method for multicriteria optimization recently proposed by Fliege and Svaiter. We prove that every accumulation point of the generated sequence satisfies a certain first-order necessary condition for optimality, which extends to the vector case the well-known "gradient equal zero" condition for real-valued minimization. Finally, under some reasonable additional hypotheses, we prove (global) convergence to a weak unconstrained minimizer. As a by-product, we show that the problem of finding a weak constrained minimizer can be viewed as a particular case of the so-called Abstract Equilibrium problem.
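To make the construction concrete, the following is a minimal NumPy sketch of the bicriteria case under the nonnegative-orthant order, where the method reduces to the Fliege–Svaiter steepest descent scheme. The search direction is the negative of the minimum-norm element of the convex hull of the objective gradients (available in closed form for two gradients), and the step size is chosen by Armijo backtracking enforced on every objective simultaneously. The toy problem, the stopping tolerance, and all parameter values are illustrative assumptions, not part of the original text; the stopping test `d ≈ 0` is exactly the first-order condition "0 ∈ conv{∇f_i(x)}" that generalizes "gradient equal zero".

```python
import numpy as np

# Illustrative bicriteria toy problem (not from the paper):
# f1(x) = ||x - a||^2, f2(x) = ||x - b||^2.
# The Pareto-critical points are exactly the segment [a, b].
a = np.array([0.0, 0.0])
b = np.array([1.0, 1.0])

def f(x):
    return np.array([np.sum((x - a) ** 2), np.sum((x - b) ** 2)])

def grads(x):
    return [2 * (x - a), 2 * (x - b)]

def descent_direction(g1, g2):
    # Minimum-norm element of conv{g1, g2}: minimize
    # ||lam*g1 + (1-lam)*g2||^2 over lam in [0, 1] (closed form for m = 2).
    diff = g1 - g2
    denom = diff @ diff
    lam = 0.5 if denom == 0 else np.clip(-(g2 @ diff) / denom, 0.0, 1.0)
    v = lam * g1 + (1 - lam) * g2
    return -v  # zero iff x satisfies the first-order criticality condition

x = np.array([3.0, -2.0])  # arbitrary starting point
for _ in range(100):
    g1, g2 = grads(x)
    d = descent_direction(g1, g2)
    if np.linalg.norm(d) < 1e-8:
        break  # 0 is in conv{grad f1(x), grad f2(x)}: x is Pareto-critical
    # Armijo backtracking imposed on both objectives at once
    t, beta, sigma = 1.0, 0.5, 1e-4
    fx, slopes = f(x), np.array([g1 @ d, g2 @ d])
    while np.any(f(x + t * d) > fx + sigma * t * slopes):
        t *= beta
    x = x + t * d

print(x)  # → [0.5 0.5], a Pareto-critical point on the segment [a, b]
```

For this quadratic example the iterate lands on the Pareto set (here, the point of the segment [a, b] nearest the start) and the direction-norm test then certifies criticality; for general m objectives the closed-form direction would be replaced by a small quadratic program over the unit simplex.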