The often disappointing performance of optimizing neural networks can be partly attributed to the rather ad hoc manner in which problems are mapped onto them for solution. In this paper a rigorous mapping is described for quadratic 0-1 programming problems with linear equality and inequality constraints, this being the most general class of problem such networks can solve. The problem's constraints define a polyhedron P containing all the valid solution points, and the mapping guarantees strict confinement of the network's state vector to P. However, forcing convergence to a 0-1 point within P is shown to be generally intractable, rendering the Hopfield and similar models inapplicable to the vast majority of problems. A modification of the tabu learning technique is presented as a more coherent approach to general problem solving with neural networks. When tested on a collection of knapsack problems, the modified dynamics produced some very encouraging results.
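To make the problem class concrete, the sketch below formulates a toy knapsack instance as a 0-1 program with a linear inequality constraint and scores candidate solutions with a penalty-form energy, a common (if ad hoc) stand-in for a network's objective. The instance data and penalty weight are illustrative assumptions, not the paper's mapping, and the exhaustive search is only feasible for tiny n; the abstract's point is precisely that guaranteeing convergence to a 0-1 point is intractable in general.

```python
from itertools import product

# Hypothetical toy knapsack instance (values, weights, capacity chosen
# for illustration only).
values = [6, 10, 12]
weights = [1, 2, 3]
capacity = 5

def energy(x, penalty=100.0):
    """Penalty-form objective: minimize negative total value, plus a
    quadratic penalty for violating the capacity constraint."""
    value = sum(v * xi for v, xi in zip(values, x))
    overload = max(0, sum(w * xi for w, xi in zip(weights, x)) - capacity)
    return -value + penalty * overload ** 2

# Exhaustive search over all 0-1 vectors; a Hopfield-style network would
# instead descend this energy, with no general guarantee of reaching a
# feasible 0-1 minimum.
best = min(product([0, 1], repeat=len(values)), key=energy)
print(best)  # → (0, 1, 1): items 2 and 3, value 22, weight 5 <= capacity
```

The quadratic penalty term is what places such constrained knapsack instances in the quadratic 0-1 programming class the abstract refers to.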