A novel optimizing network architecture with applications

  • Authors:
  • Anand Rangarajan, Department of Diagnostic Radiology, Yale University, New Haven, CT 06520-8042 USA
  • Steven Gold, Department of Computer Science, Yale University, New Haven, CT 06520-8285 USA
  • Eric Mjolsness, Department of Computer Science and Engineering, University of California San Diego (UCSD), La Jolla, CA 92093-0114 USA

  • Venue:
  • Neural Computation
  • Year:
  • 1996

Abstract

We present a novel optimizing network architecture with applications in vision, learning, pattern recognition, and combinatorial optimization. The architecture is constructed by combining five techniques: (1) deterministic annealing, (2) self-amplification, (3) algebraic transformations, (4) clocked objectives, and (5) softassign. Deterministic annealing in conjunction with self-amplification avoids poor local minima and ensures that a vertex of the hypercube is reached. Algebraic transformations and clocked objectives help partition the relaxation into distinct phases. The problems considered have doubly stochastic matrix constraints or minor variations thereof. We introduce a new technique, softassign, to satisfy these constraints. Experimental results on several problems are presented and discussed.
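The abstract names softassign as the mechanism for enforcing the doubly stochastic constraints. As a rough illustration of the idea (not the paper's exact formulation), the sketch below exponentiates the assignment variables at an inverse temperature beta, the control parameter raised during deterministic annealing, and then alternates row and column normalizations (Sinkhorn balancing) until the matrix is approximately doubly stochastic. The function name, tolerance, and iteration counts are illustrative assumptions.

    import numpy as np

    def softassign(M, beta, n_sinkhorn=50, tol=1e-6):
        """Illustrative softassign sketch (assumed form, not the paper's exact algorithm).

        M    : square benefit matrix for the assignment problem
        beta : inverse temperature from the deterministic-annealing schedule;
               larger beta pushes the result toward a hypercube vertex
               (a permutation matrix)
        """
        # Exponentiation makes every entry positive, a precondition for
        # Sinkhorn balancing; subtracting the max is for numerical stability.
        A = np.exp(beta * (M - M.max()))
        # Sinkhorn balancing: alternately normalize rows and columns until
        # the matrix is (approximately) doubly stochastic.
        for _ in range(n_sinkhorn):
            A_prev = A.copy()
            A /= A.sum(axis=1, keepdims=True)  # rows sum to 1
            A /= A.sum(axis=0, keepdims=True)  # columns sum to 1
            if np.abs(A - A_prev).max() < tol:
                break
        return A

    # Example: as beta is annealed upward, the soft assignment hardens
    # toward a permutation matrix (entries approach 0 or 1).
    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    for beta in (1.0, 10.0, 100.0):
        print(f"beta={beta:>6}: max entry {softassign(M, beta).max():.3f}")

In this reading, the annealing loop over beta realizes the deterministic-annealing component, while the inner Sinkhorn loop realizes the constraint satisfaction; the paper's architecture additionally interleaves these with self-amplification and clocked objective phases, which are not shown here.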