Optimizing neural networks on SIMD parallel computers

  • Authors:
  • Andrea Di Blas; Arun Jagota; Richard Hughey

  • Affiliations:
  • Department of Computer Engineering, Baskin School of Engineering, University of California, Santa Cruz, United States (all authors)

  • Venue:
  • Parallel Computing
  • Year:
  • 2005

Abstract

Hopfield neural networks are often used to solve difficult combinatorial optimization problems. Multiple-restarts versions find better solutions but are slow on serial computers. Here, we study two parallel implementations on SIMD computers of multiple-restarts Hopfield networks for solving the maximum clique problem. The first is a fine-grained implementation on the Kestrel Parallel Processor, a linear SIMD array designed and built at the University of California, Santa Cruz. The second is an implementation on the MasPar MP-2 following the "SIMD Phase Programming Model", a new method for solving asynchronous, irregular problems on SIMD machines. We find that the neural networks map well to the parallel architectures and afford substantial speedups over the serial program, without sacrificing solution quality.
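The multiple-restarts scheme the abstract describes can be sketched in serial form: a discrete Hopfield network whose stable states are maximal cliques, run to convergence from several random initial states, keeping the largest clique found. The sketch below is a minimal illustration, not the authors' code; the penalty weight `P`, the update order, and the restart count are assumptions chosen so that any stable state is guaranteed to be a clique.

```python
import random

def hopfield_max_clique(adj, restarts=20, seed=0):
    """Multiple-restarts discrete Hopfield heuristic for maximum clique.

    adj: symmetric 0/1 adjacency matrix (list of lists) with zero diagonal.
    Neuron i corresponds to vertex i; excitatory weight +1 on edges and an
    inhibitory penalty -P on non-edges. With P = n the penalty outweighs
    any possible excitation, so every stable state is a maximal clique.
    """
    n = len(adj)
    P = n  # penalty on non-edges; any P > max degree suffices
    rng = random.Random(seed)
    best = set()
    for _ in range(restarts):
        x = [rng.randint(0, 1) for _ in range(n)]  # random restart state
        changed = True
        while changed:  # asynchronous updates until a stable state
            changed = False
            for i in range(n):
                net = sum((adj[i][j] - P * (1 - adj[i][j])) * x[j]
                          for j in range(n) if j != i) + 0.5
                new = 1 if net > 0 else 0
                if new != x[i]:
                    x[i] = new
                    changed = True
        clique = {i for i in range(n) if x[i]}
        if len(clique) > len(best):
            best = clique
    return best
```

Because the weights are symmetric and the diagonal is zero, each asynchronous flip strictly lowers the network energy, so every restart terminates. The SIMD implementations in the paper parallelize this computation: on Kestrel, fine-grained parallelism within one network; on the MasPar MP-2, independent restarts proceed in parallel even though they converge after different numbers of steps.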