Neural networks for optimization problems with inequality constraints: the knapsack problem

  • Authors:
  • Mattias Ohlsson; Carsten Peterson; Bo Söderberg

  • Venue:
  • Neural Computation
  • Year:
  • 1993

Abstract

A strategy for finding approximate solutions to discrete optimization problems with inequality constraints using mean field neural networks is presented. The constraints x ≤ 0 are encoded by xΘ(x) terms in the energy function. A careful treatment of the mean field approximation for the self-coupling parts of the energy is crucial, and results in an essentially parameter-free algorithm. This methodology is extensively tested on the knapsack problem of size up to 10³ items. The algorithm scales like NM for problems with N items and M constraints. Comparisons are made with an exact branch and bound algorithm when this is computationally possible (N ≤ 30). The quality of the neural network solutions consistently lies above 95% of the optimal ones at a significantly lower CPU expense. For the larger problem sizes the algorithm is compared with simulated annealing and a modified linear programming approach. For "nonhomogeneous" problems these produce good solutions, whereas for the more difficult "homogeneous" problems the neural approach is a winner with respect to solution quality and/or CPU time consumption. The approach is of course also applicable to other problems of similar structure, like set covering.
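The abstract's core idea — relax the binary item variables to mean-field values, penalize each violated constraint with an xΘ(x) term (i.e., max(x, 0)), and anneal the temperature down — can be sketched as below. This is an illustrative reconstruction, not the authors' code: the function names, parameter defaults, annealing schedule, and the final greedy repair step are all our own assumptions.

```python
import numpy as np

def relu(x):
    # x * step(x): the penalty shape used for the inequality constraints
    return np.maximum(x, 0.0)

def mean_field_knapsack(c, a, b, T0=2.0, cooling=0.9, alpha=None, sweeps=60, seed=0):
    """Hedged sketch of mean-field annealing for 0/1 knapsack:
    maximize c @ x subject to a @ x <= b, with x_i in {0, 1}.
    alpha weighs the constraint penalty (heuristic default, our choice)."""
    c = np.asarray(c, float)
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    n = len(c)
    if alpha is None:
        alpha = 2.0 * c.max()
    rng = np.random.default_rng(seed)
    v = 0.5 + 0.01 * rng.standard_normal(n)  # soft mean-field variables in (0, 1)
    T = T0
    for _ in range(sweeps):
        for i in rng.permutation(n):
            # Constraint slack with item i excluded: this removes the
            # self-coupling of v_i, echoing the careful treatment the
            # abstract says is crucial.
            u = a @ v - a[:, i] * v[i] - b
            # Energy difference between setting v_i = 1 and v_i = 0.
            dE = -c[i] + alpha * np.sum(relu(u + a[:, i]) - relu(u))
            v[i] = 1.0 / (1.0 + np.exp(dE / T))  # sigmoidal mean-field update
        T *= cooling  # anneal the temperature down
    x = (v > 0.5).astype(int)  # round soft variables to a binary solution
    # Simple greedy repair (our addition): drop lowest-value items until feasible.
    while np.any(a @ x > b):
        chosen = np.flatnonzero(x)
        x[chosen[np.argmin(c[chosen])]] = 0
    return x
```

A small single-constraint instance, e.g. `mean_field_knapsack([6, 5, 4, 3], [[2, 3, 1, 4]], [6])`, returns a feasible 0/1 selection; the O(NM) cost per sweep matches the scaling the abstract reports.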