Subgradient-based neural networks for nonsmooth nonconvex optimization problems

  • Authors:
  • Wei Bian; Xiaoping Xue

  • Affiliations:
  • Department of Mathematics, Harbin Institute of Technology, Harbin, China (both authors)

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2009

Abstract

This paper presents a subgradient-based neural network for solving a nonsmooth nonconvex optimization problem whose objective function is nonsmooth and nonconvex and whose constraints consist of affine equalities and nonsmooth convex inequalities. The proposed neural network is modeled by a differential inclusion. Under suitable assumptions on the constraint set and the objective function, it is proved that, for a sufficiently large penalty parameter, the neural network has a unique global solution whose trajectory reaches the feasible region in finite time and remains there thereafter. It is further proved that the trajectory converges to the set of equilibrium points of the neural network, which coincides with the set of critical points of the objective function over the feasible region. A condition guaranteeing convergence to the equilibrium point set in finite time is also given. Moreover, under suitable assumptions, the solution to the differential inclusion is shown to coincide with its "slow solution". Finally, three typical examples illustrate the effectiveness of the theoretical results and the good performance of the proposed neural network.
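
As a rough illustration of the type of dynamics the abstract describes, and not the exact differential inclusion proposed in the paper, the sketch below discretizes a generic penalized subgradient flow dx/dt ∈ -(∂f(x) + σ∂p(x)) with forward Euler, where p is an exact penalty for affine equality and convex inequality constraints. The problem data, penalty parameter, step size, and iteration count are illustrative assumptions chosen only to make the sketch runnable.

```python
import numpy as np

# Minimal sketch (assumed, not the paper's exact network) of a penalized
# subgradient flow  dx/dt in -( subgrad f(x) + SIGMA * subgrad p(x) ),
# discretized by forward Euler.  Illustrative problem:
#   minimize   f(x) = |x1 - 1| + |x2 + 1| - 0.5*||x||^2   (nonsmooth, nonconvex)
#   subject to x1 + x2 = 0                                (affine equality)
#              x1 <= 2,  -x1 <= 2                         (convex inequalities)

SIGMA = 20.0   # penalty parameter; the theory requires it sufficiently large
H = 1e-3       # Euler step size
STEPS = 20000  # number of discrete steps

def subgrad_f(x):
    """One element of the Clarke subdifferential of f at x."""
    return np.sign(np.array([x[0] - 1.0, x[1] + 1.0])) - x

def subgrad_penalty(x):
    """One element of the subdifferential of the exact penalty
    p(x) = |x1 + x2| + max(0, x1 - 2) + max(0, -x1 - 2)."""
    g = np.sign(x[0] + x[1]) * np.array([1.0, 1.0])
    if x[0] > 2.0:
        g += np.array([1.0, 0.0])
    if -x[0] > 2.0:
        g += np.array([-1.0, 0.0])
    return g

x = np.array([5.0, 3.0])  # arbitrary infeasible initial state
for _ in range(STEPS):
    x = x - H * (subgrad_f(x) + SIGMA * subgrad_penalty(x))

# For this illustrative data the constrained minimizer is approximately (2, -2).
print("approximate equilibrium:", x)
```

In this toy run the penalty term first drives the state into the feasible region, after which the objective's subgradient moves it along the feasible segment toward a critical point, mirroring the two-phase behavior (finite-time feasibility, then convergence to the equilibrium/critical-point set) stated in the abstract.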