Artificial neural network implementation on a single FPGA of a pipelined on-line backpropagation
ISSS '00: Proceedings of the 13th International Symposium on System Synthesis
This paper describes a systolic-array implementation of a multilayer perceptron with a hardware-friendly learning algorithm. A pipelined adaptation of the on-line backpropagation algorithm is presented; it exploits the available parallelism better because the forward and backward phases can be executed simultaneously, and a combined systolic array structure supporting both phases is therefore proposed. Analytic expressions show that the pipelined version is more efficient than the non-pipelined one. The design is simulated in VHDL at different levels of abstraction on three benchmark databases, and the experimental results agree with the analytical estimates. Furthermore, the speed of convergence, the generalization capability and the precision required by both versions are evaluated in order to compare the performance of the proposed variation with that of the standard on-line backpropagation algorithm.
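To make the pipelining idea concrete, the following is a minimal NumPy sketch, not the authors' VHDL design: it shows the algorithmic consequence of overlapping the phases, namely that the backward phase of pattern n-1 is computed while the forward phase of pattern n is already under way, so pattern n is propagated with weights that have not yet absorbed the update from pattern n-1. The network sizes, learning rate, and toy data stream are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer perceptron: input -> hidden (sigmoid) -> output (sigmoid).
# Sizes are illustrative assumptions.
n_in, n_hid, n_out = 4, 8, 1
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))
lr = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, W2):
    # Forward phase: hidden and output activations for one pattern.
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)
    return h, y

def backward(x, h, y, t, W2):
    # Standard delta rule for a two-layer sigmoid MLP with squared error.
    delta_out = (y - t) * y * (1.0 - y)
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
    dW2 = np.outer(delta_out, h)
    dW1 = np.outer(delta_hid, x)
    return dW1, dW2

# Toy on-line training stream (XOR-like target, purely illustrative).
X = rng.integers(0, 2, size=(200, n_in)).astype(float)
T = (X[:, :1] != X[:, 1:2]).astype(float)

pending = None  # (x, h, y, t) of the previous pattern, awaiting its backward phase
for x, t in zip(X, T):
    # Forward phase of the current pattern; in the hardware pipeline this
    # overlaps in time with the backward phase computed just below.
    h, y = forward(x, W1, W2)

    if pending is not None:
        # Backward phase of the *previous* pattern, using its cached activations.
        dW1, dW2 = backward(*pending, W2)
        W1 -= lr * dW1
        W2 -= lr * dW2

    pending = (x, h, y, t)

# Drain the pipeline: backward phase of the final pattern.
if pending is not None:
    dW1, dW2 = backward(*pending, W2)
    W1 -= lr * dW1
    W2 -= lr * dW2
```

Removing the one-pattern delay (updating the weights immediately after each forward pass) recovers standard on-line backpropagation; the convergence, generalization, and precision comparisons in the paper quantify the cost of that delay against the throughput gained by keeping both phases of the array busy.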