Compact yet efficient hardware implementation of artificial neural networks with customized topology
Expert Systems with Applications: An International Journal
In this paper, we devise a hardware architecture for ANNs that takes advantage of the dedicated multiply-accumulate (MAC) blocks to compute both the weighted sum and the activation function. The proposed architecture requires little silicon area because the MACs come at no extra cost: they are built-in FPGA cores. The implementation uses fixed-point arithmetic, representing real numbers as integer fractions. The hardware is fast because the computation is massively parallel. Moreover, the proposed architecture adjusts itself on the fly to the user-defined configuration of the neural network, i.e., the number of layers and the number of neurons per layer can be set with no hardware changes.
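As a rough illustration of the computation the abstract describes, the following C sketch evaluates a small feed-forward network using fixed-point fractions and MAC-style accumulation, with the topology (number of layers and neurons per layer) supplied as run-time data. It is only a software analogy of the idea, not the authors' hardware: the Q8.8 format, the piecewise-linear activation, and all names (mac, layer, FRAC_BITS) are illustrative assumptions.

/* Minimal sketch (not the authors' RTL): fixed-point weighted sums
 * accumulated MAC-style, followed by an activation function, over a
 * topology chosen at run time. Q8.8 format and all names are assumed. */
#include <stdint.h>
#include <stdio.h>

#define FRAC_BITS 8                     /* assumed Q8.8 fixed-point format */
#define TO_FIX(x)  ((int32_t)((x) * (1 << FRAC_BITS)))
#define TO_REAL(x) ((double)(x) / (1 << FRAC_BITS))

/* Multiply-accumulate on fixed-point fractions: acc += w * x,
 * rescaling the product back to Q8.8 after the integer multiply. */
static int32_t mac(int32_t acc, int32_t w, int32_t x)
{
    return acc + (int32_t)(((int64_t)w * x) >> FRAC_BITS);
}

/* Piecewise-linear stand-in for a sigmoid-like activation, clamped to [0, 1]. */
static int32_t activation(int32_t v)
{
    if (v <= TO_FIX(-2.0)) return TO_FIX(0.0);
    if (v >= TO_FIX(2.0))  return TO_FIX(1.0);
    return mac(TO_FIX(0.5), TO_FIX(0.25), v);   /* 0.5 + 0.25 * v */
}

/* Evaluate one fully connected layer: each output neuron is a weighted
 * sum of the inputs followed by the activation function. */
static void layer(int n_in, int n_out, const int32_t *w,
                  const int32_t *in, int32_t *out)
{
    for (int j = 0; j < n_out; j++) {
        int32_t acc = 0;
        for (int i = 0; i < n_in; i++)
            acc = mac(acc, w[j * n_in + i], in[i]);
        out[j] = activation(acc);
    }
}

int main(void)
{
    /* Topology as run-time data: 2 inputs, a hidden layer of 2, 1 output. */
    int32_t w_hidden[4] = { TO_FIX(0.5), TO_FIX(-0.25),
                            TO_FIX(1.0), TO_FIX(0.75) };
    int32_t w_out[2]    = { TO_FIX(0.8), TO_FIX(-0.6) };

    int32_t x[2] = { TO_FIX(1.0), TO_FIX(0.5) };
    int32_t h[2], y[1];

    layer(2, 2, w_hidden, x, h);   /* hidden layer */
    layer(2, 1, w_out,    h, y);   /* output layer */

    printf("output = %f\n", TO_REAL(y[0]));
    return 0;
}

In the hardware described by the abstract, each MAC in the loop above would instead map to a built-in FPGA core and the per-neuron loops would run in parallel; the sketch only makes the fixed-point arithmetic and the configurable layer/neuron counts concrete.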