Neural network training based on FPGA with floating point number format and its performance

  • Authors:
  • Mehmet Ali Çavuşlu, Cihan Karakuzu, Suhap Şahin, Mehmet Yakut

  • Affiliations (in author order):
  • Y-Vizyon Sinyalizasyon Tic. Ltd. Şti., Hacettepe Teknokent 3. ARGE Binası No: 13, 06800, Beytepe, Ankara, Turkey
  • Kocaeli University, Department of Electronics and Telecommunication Engineering, Faculty of Engineering, 41380, Umuttepe, İzmit, Turkey
  • Kocaeli University, Department of Computer Engineering, Faculty of Engineering, 41380, Umuttepe, İzmit, Turkey
  • Kocaeli University, Department of Electronics and Telecommunication Engineering, Faculty of Engineering, 41380, Umuttepe, İzmit, Turkey

  • Venue:
  • Neural Computing and Applications
  • Year:
  • 2011

Abstract

This paper presents the training of a two-layer feed-forward artificial neural network (ANN) by back-propagation and its implementation on an FPGA (field-programmable gate array) using floating-point number formats of different bit lengths, evaluated on the XOR problem. To exploit the inherently parallel data-processing nature of ANNs, the training operations are carried out in parallel on the FPGA. The implementation targets the Xilinx Virtex-II Pro (XC2VP30) device, and the network is coded in VHDL. Compared with results available in the literature for ANN training with the same network structure and bit length, the technique developed here consumes less chip area and shows better performance.