Evolvable block-based neural network design for applications in dynamic environments
VLSI Design - Special issue on selected papers from the Midwest Symposium on Circuits and Systems
In many applications, the most significant advantage of neural networks comes from their parallel architecture, which ensures high operation speed. The difficulty of parallel digital hardware implementation arises mostly from the high complexity of the parallel many-multiplier structure. This paper proposes a new bit-serial/parallel implementation method for pre-trained neural networks that enables significant hardware cost savings. The proposed approach, which builds on a previously suggested method for the efficient implementation of digital filters, uses bit-serial distributed arithmetic. The efficient implementation of the matrix-vector multiplier is based on an optimization algorithm that exploits the advantages of CSD (Canonic Signed Digit) encoding and bit-level pattern coincidences. The resulting architecture performs full-precision computation and supports high-speed bit-level pipelined operation. The proposed approach is promising for FPGA and ASIC realization of pre-trained neural networks and can be integrated into automatic neural network design environments; moreover, the implementation methods can be useful in many other fields of digital signal processing.
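To illustrate the core idea of bit-serial distributed arithmetic mentioned above, the following is a minimal Python sketch (not code from the paper): the inner product of a fixed coefficient vector with a set of inputs is computed without multipliers, by precomputing a lookup table of coefficient partial sums and then processing one bit-plane of the inputs per step with a single table access and shift-accumulate. The function names and the restriction to unsigned inputs are illustrative assumptions.

```python
def make_da_lut(coeffs):
    """Precompute the distributed-arithmetic lookup table.

    Entry `addr` holds the sum of those coefficients whose input
    contributes a 1-bit in the bit-plane currently being processed.
    Table size is 2**K for K inputs.
    """
    return [sum(c for k, c in enumerate(coeffs) if (addr >> k) & 1)
            for addr in range(1 << len(coeffs))]


def da_dot(coeffs, xs, nbits):
    """Bit-serial inner product sum_k coeffs[k] * xs[k].

    Inputs xs are unsigned nbits-bit integers; each iteration uses
    one LUT access plus one shift-add instead of K multiplications
    (illustrative assumption: the paper additionally optimizes the
    structure via CSD encoding and bit-level pattern coincidences).
    """
    lut = make_da_lut(coeffs)
    acc = 0
    for b in range(nbits):              # one bit-plane per "clock cycle"
        addr = 0
        for k, x in enumerate(xs):      # gather bit b of every input
            addr |= ((x >> b) & 1) << k
        acc += lut[addr] << b           # shift-accumulate the partial sum
    return acc
```

In a hardware realization, the inner loop corresponds to wiring one bit of each serial input register to the LUT address lines, so throughput is one result per `nbits` cycles regardless of the number of coefficients.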
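The benefit of CSD encoding referred to in the abstract can be sketched as follows (again, not the paper's code): CSD represents a constant with digits in {-1, 0, +1} such that no two adjacent digits are nonzero, which minimizes the number of nonzero digits and hence the number of adders/subtractors in a shift-and-add constant multiplier. The helper names below are illustrative.

```python
def to_csd(n):
    """Convert an integer to Canonic Signed Digit digits (LSB first).

    Digits are in {-1, 0, +1} with no two adjacent nonzeros; e.g.
    7 = 0b111 (three nonzero bits) becomes +8 - 1 (two nonzero digits).
    """
    digits = []
    while n != 0:
        if n % 2 == 0:
            digits.append(0)
        else:
            # choose the remainder (+1 or -1) that leaves n even next step
            r = 2 - (n % 4)   # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            digits.append(r)
            n -= r
        n //= 2
    return digits


def csd_mul(x, digits):
    """Multiply x by the constant encoded in `digits` using only
    shifts and adds/subtracts -- one operation per nonzero digit."""
    return sum(d * (x << i) for i, d in enumerate(digits) if d)
```

Fewer nonzero digits translate directly into fewer adders per coefficient; the paper's optimization additionally shares bit-level patterns that coincide across coefficients, which this sketch does not attempt.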