A defect-tolerant accelerator for emerging high-performance applications
Proceedings of the 39th Annual International Symposium on Computer Architecture
This paper introduces a method for implementing multilayer neural networks that tolerate both weight and neuron faults. Their fault tolerance is derived from our extended back-propagation learning algorithm, called the deep learning method. The method can realize a desired degree of weight and neuron fault tolerance in multilayer neural networks whose weight values are floating-point and whose neuron outputs are computed with the sigmoid function. In this paper, fault-tolerant multilayer neural networks are implemented as digital circuits, in which weight values are quantized and the step function is used to compute neuron outputs, using the deep learning method, the VHDL hardware description language, and Altera's Quartus II logic design software. The efficiency of our method is shown in terms of fabrication-time cost, hardware size, neural computing time, generalization ability, and so on.
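The contrast the abstract draws between the floating-point sigmoid network used during learning and the quantized, step-function digital implementation can be sketched as follows. This is a minimal illustration, not the paper's circuit: the 8-bit width, weight range, and function names are assumptions chosen for the example.

```python
import math

def sigmoid_neuron(weights, inputs):
    # Floating-point neuron as used during learning:
    # weighted sum passed through the sigmoid activation.
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

def quantize(w, bits=8, w_max=4.0):
    # Map a floating-point weight onto a signed fixed-point grid.
    # The bit width and weight range here are illustrative assumptions.
    levels = 2 ** (bits - 1) - 1
    q = round(w / w_max * levels)
    q = max(-levels, min(levels, q))
    return q * w_max / levels

def step_neuron(weights, inputs):
    # Digital-circuit neuron: quantized weights and a step
    # activation, producing a binary output suitable for hardware.
    s = sum(quantize(w) * x for w, x in zip(weights, inputs))
    return 1 if s >= 0 else 0
```

In a hardware flow of this kind, training is done offline with the sigmoid network, and only the quantized weights and the step-function comparator are synthesized into the circuit, which is what keeps the hardware size small.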