A Full-Parallel Digital Implementation for Pre-Trained NNs

  • Authors:
  • Tamás Szabó, Lörinc Antoni, Gábor Horváth, Béla Fehér

  • Venue:
  • IJCNN '00 Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN'00), Volume 2
  • Year:
  • 2000

Abstract

In many applications, the most significant advantages of neural networks stem from their parallel architectures, which ensure high operation speed. The difficulties of parallel digital hardware implementation arise mostly from the high complexity of the parallel many-multiplier structure. This paper suggests a new bit-serial/parallel neural network implementation method for pre-trained networks that enables significant savings in hardware cost. The proposed approach, which builds on the results of a previously suggested method for the efficient implementation of digital filters, uses bit-serial distributed arithmetic. The efficient implementation of a matrix-vector multiplier is based on an optimization algorithm that exploits the advantages of CSD (Canonic Signed Digit) encoding and bit-level pattern coincidences. The resulting architecture performs full-precision computation and allows high-speed bit-level pipelined operation. The proposed approach is a promising one for FPGA and ASIC realization of pre-trained neural networks and can be integrated into automatic neural network design environments. Moreover, these implementation methods can be useful in many other fields of digital signal processing.
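
The CSD recoding at the heart of the method can be illustrated in a few lines. The sketch below is a plain Python model, not the paper's hardware architecture: it shows how a fixed weight is recoded into signed digits in {-1, 0, +1} with no two adjacent nonzero digits, and how a multiplication by that weight then reduces to shifts and additions/subtractions. The function names (`to_csd`, `csd_multiply`) are illustrative, and the bit-level pattern-coincidence optimization mentioned in the abstract is beyond this sketch.

```python
# A minimal sketch (not the authors' implementation) of Canonic Signed Digit
# (CSD) encoding. CSD minimizes the number of nonzero digits, and in a
# multiplier-free datapath each nonzero digit costs one adder/subtractor.

def to_csd(n: int) -> list[int]:
    """Return the CSD digits of integer n, least-significant digit first."""
    digits = []
    while n != 0:
        if n % 2 == 0:
            d = 0
        else:
            # Remainder 1 mod 4 yields digit +1; remainder 3 mod 4 yields -1,
            # pushing a carry upward so runs of 1-bits collapse.
            d = 2 - (n % 4)
        digits.append(d)
        n = (n - d) // 2
    return digits

def csd_multiply(w: int, x: int) -> int:
    """Multiply x by weight w using only shifts and adds/subtracts,
    the way a CSD-recoded hardware multiplier would."""
    acc = 0
    for k, d in enumerate(to_csd(w)):
        if d == 1:
            acc += x << k
        elif d == -1:
            acc -= x << k
    return acc

# Example: the weight 7 = 0b111 needs three add terms in plain binary but
# only two in CSD (8 - 1), i.e. digits [-1, 0, 0, 1] from LSB to MSB.
assert to_csd(7) == [-1, 0, 0, 1]
assert csd_multiply(7, 13) == 91
```

Since each nonzero CSD digit maps to one adder or subtractor in hardware, recoding the fixed weights of a pre-trained network directly shrinks the matrix-vector multiplier that dominates the implementation cost.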