Identification of parallelism in neural networks by simulation with language J.

  • Authors:
  • Alexei N. Skurihin; Alvin J. Surkan

  • Affiliations:
  • -;-

  • Venue:
  • APL '93: Proceedings of the International Conference on APL
  • Year:
  • 1993

Abstract

Neural networks, trained by backpropagation, are designed and described in the language J, an APL derivative with powerful function encapsulation features. Both J [4,6,7] and APL [5] help to identify and isolate the parallelism inherent in network training algorithms. Non-critical details of data input and derived output are de-emphasized by relegating those functions to callable, stand-alone modules. Such input and output modules can be isolated and customized individually to manage communication with arbitrary external storage systems. The central objective of this research is the design and precise description of a neural network training kernel. Such kernel designs are valuable for producing efficient, reusable computer code and for facilitating the transfer of neural network technology from developers to users.
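The paper's training kernel is not reproduced here, but the following is a minimal sketch, in J, of how one backpropagation step for a single-hidden-layer network can be written with whole-array verbs so that the parallelism the abstract refers to stays visible in the notation. The names (step, W1, W2, lr), the network shape, and the squared-error objective are illustrative assumptions, not code from the paper.

    NB. Hypothetical sketch (not from the paper): one backpropagation step
    NB. expressed with whole-array verbs, keeping data parallelism explicit.
    mp   =: +/ . *                    NB. matrix product
    sig  =: % @: >: @: ^ @: -         NB. logistic sigmoid: 1 % 1 + ^ - y
    dsig =: ] * 1&-                   NB. derivative, given sigmoid output

    step =: 4 : 0
     'X T'      =. x                  NB. x: inputs ; targets
     'W1 W2 lr' =. y                  NB. y: weights ; weights ; learning rate
     H  =. sig X mp W1                NB. hidden activations, whole batch at once
     O  =. sig H mp W2                NB. output activations
     dO =. (O - T) * dsig O           NB. output-layer delta (squared error)
     dH =. (dO mp |: W2) * dsig H     NB. hidden-layer delta
     W2 =. W2 - lr * (|: H) mp dO     NB. weight updates as array operations
     W1 =. W1 - lr * (|: X) mp dH
     W1 ; W2                          NB. return updated weights, boxed
    )

    NB. Example call with arbitrary shapes (4 samples, 3 inputs, 5 hidden, 2 outputs)
    X =: ? 4 3 $ 0
    T =: ? 4 2 $ 0
    'W1 W2' =: (X ; T) step (? 3 5 $ 0) ; (? 5 2 $ 0) ; 0.5

Because every step is a rank-2 array operation (matrix products, transposes, elementwise verbs) rather than an explicit loop over neurons or samples, the data-parallel structure of the training algorithm is evident directly from the expression, which is the property the abstract attributes to formulations in J and APL.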