Building a 2D-Compatible Multilayer Neural Network

  • Authors:
  • Bernard Girau


  • Venue:
  • IJCNN '00 Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN'00) - Volume 2
  • Year:
  • 2000


Abstract

Various fast parallel implementations of neural networks have been developed ([13]). The very fine-grain parallelism of neural networks involves many information exchanges, so that it is better suited to hardware implementations. Configurable hardware devices such as FPGAs (Field Programmable Gate Arrays) offer a cheap compromise between the hardware efficiency of ASICs and the flexibility of simple software-like handling. Moreover, FPGAs offer several advantages for neural network implementations, such as prototyping and embedding, and FPGA-based implementations may be mapped onto new improved FPGAs (unlike neuroprocessors, which rapidly become outdated). However, the 2D topology of FPGAs does not allow handling the complex connection graphs of standard neural network models, nor their numerous area-greedy operators (multipliers, transfer functions). Usual solutions ([3, 7, 15, 5, 4, 2]) handle sequentialized computations with an FPGA used as a small neuroprocessor, or they implement very small low-precision neural networks without on-chip learning. Connectivity problems are not solved even by the use of several FPGAs with bit-serial arithmetic ([6]), or by the use of small-area stochastic operators (stochastic bit-stream in [1, 17], or frequency-based in [12]).

An upstream work is needed: neural computation paradigms may be defined to counterbalance the topological problems, and the use of such paradigms naturally leads to neural models that are more tolerant of hardware constraints. The theoretical and practical framework developed in [9] aims at developing such neural architectures that are easy to map onto FPGAs, thanks to a simplified topology and an original data exchange scheme. These Field Programmable Neural Arrays (FPNAs) reconcile the high connection density of neural architectures with the need for a limited interconnection scheme in hardware implementations. A global study of FPNAs may be found in [9].
The use of FPGAs for neural network implementations is justified in [11]. This paper focuses on the FPNA-based simplification of the architecture of a multilayer shortcut perceptron used in a pattern classification problem. Section 2 briefly defines FPNAs along with their computation algorithm. Section 3 describes how FPNAs have been applied to the Proben1 diabetes problem, as well as the classification and implementation performance achieved.
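To fix intuitions, a multilayer shortcut perceptron is a standard MLP augmented with direct input-to-output connections. The sketch below shows its forward pass in plain numpy; the hidden-layer size and the weight initialization are illustrative assumptions, not values taken from the paper (only the 8 inputs and 2 outputs follow from the Proben1 diabetes task).

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: 8 inputs (Proben1 diabetes attributes), 2 outputs
# (class scores). The hidden size of 6 is an illustrative choice.
n_in, n_hid, n_out = 8, 6, 2

W_hid = rng.normal(scale=0.1, size=(n_hid, n_in))   # input  -> hidden
W_out = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output
W_cut = rng.normal(scale=0.1, size=(n_out, n_in))   # shortcut: input -> output

def forward(x):
    """Forward pass of an MLP with shortcut (direct input-to-output)
    connections: the output sums the hidden path and the shortcut path."""
    h = np.tanh(W_hid @ x)                  # hidden activations
    return np.tanh(W_out @ h + W_cut @ x)   # combine both paths

x = rng.normal(size=n_in)
y = forward(x)
print(y.shape)  # (2,)
```

The shortcut weights `W_cut` are exactly the extra connections that make the graph denser than a plain layered MLP, which is why mapping such a network onto the 2D routing resources of an FPGA motivates the FPNA simplification discussed in the paper.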