Abstract

The often disappointing performance of optimizing neural networks can be partly attributed to the rather ad hoc manner in which problems are mapped onto them for solution. In this paper a rigorous mapping is described for quadratic 0-1 programming problems with linear equality and inequality constraints, this being the most general class of problem such networks can solve. The problem's constraints define a polyhedron P containing all the valid solution points, and the mapping guarantees strict confinement of the network's state vector to P. However, forcing convergence to a 0-1 point within P is shown to be generally intractable, rendering the Hopfield and similar models inapplicable to the vast majority of problems. A modification of the tabu learning technique is presented as a more coherent approach to general problem solving with neural networks. When tested on a collection of knapsack problems, the modified dynamics produced some very encouraging results.
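To make the problem class concrete, a minimal sketch of a 0-1 program with a linear inequality constraint, instantiated as a toy knapsack and solved by exhaustive enumeration (the instance data below are illustrative assumptions, not taken from the paper; the neural dynamics discussed above replace this enumeration with a trajectory confined to the polyhedron P):

```python
import itertools

# Toy knapsack instance (illustrative numbers, not from the paper):
# maximize a linear objective subject to one linear inequality,
# with every variable restricted to {0, 1}.
values = [6, 10, 12]
weights = [1, 2, 3]
capacity = 5

best_x, best_val = None, float("-inf")
for x in itertools.product((0, 1), repeat=len(values)):
    # Linear inequality constraint: total weight must not exceed capacity.
    if sum(w * xi for w, xi in zip(weights, x)) <= capacity:
        # Objective value of this feasible 0-1 point.
        v = sum(c * xi for c, xi in zip(values, x))
        if v > best_val:
            best_x, best_val = x, v

print(best_x, best_val)
```

Enumeration is exponential in the number of variables, which is why the paper's intractability result for forcing convergence to a 0-1 point in P matters: no polynomial-time network dynamics can be expected to do this reliably in general.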