Learning Algorithms Which Make Multilayer Neural Networks Multiple-Weight-and-Neuron-Fault Tolerant
IEICE - Transactions on Information and Systems
To make a neural network fault-tolerant, Tan et al. proposed a learning algorithm that intentionally injects wire-snapping (wire-break) faults into the network one at a time (1992, 1992, 1993). This paper proposes a learning algorithm that intentionally injects stuck-at faults into neurons. Through computer simulations, we investigate the recognition rate as a function of the number of snapping faults, the reliability of the lines, and the learning cycle. The results show that our method is more efficient and useful than that of Tan et al. Furthermore, we examine the internal structure of the trained networks in terms of the distribution of correlations between the input values of an output neuron for each learning method, and show that the distributions differ significantly among the methods.
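The fault-injection idea can be sketched as follows: during training, a randomly chosen hidden neuron's output is clamped to a stuck-at value (0 or 1) for each weight update, so the learned weights cannot come to depend on any single neuron. This is a minimal illustrative sketch, not the paper's exact algorithm; the network size, task (XOR), hyperparameters, and the one-fault-per-update schedule are all assumptions made for illustration.

```python
import numpy as np

# Hedged sketch: fault-injection training of a tiny sigmoid MLP on XOR.
# For each update, one randomly chosen hidden neuron is clamped to a
# stuck-at value (0 or 1), mimicking a stuck-at neuron fault during
# learning. All sizes and hyperparameters are illustrative choices.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

H = 8                                   # hidden neurons (illustrative)
W1 = rng.normal(0, 1, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, fault=None):
    """Forward pass; `fault` = (hidden neuron index, stuck value) or None."""
    h = sigmoid(X @ W1 + b1)
    if fault is not None:
        i, v = fault
        h = h.copy()
        h[:, i] = v                     # stuck-at-0 or stuck-at-1 output
    out = sigmoid(h @ W2 + b2)
    return h, out

_, out0 = forward(X)
init_loss = float(np.mean((out0 - y) ** 2))   # fault-free loss before training

for epoch in range(5000):
    # inject one stuck-at neuron fault per update
    fault = (int(rng.integers(H)), float(rng.integers(2)))
    h, out = forward(X, fault)
    # backprop of mean-squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    d_h[:, fault[0]] = 0.0              # the faulty neuron passes no gradient
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
    W1 -= lr * (X.T @ d_h);  b1 -= lr * d_h.sum(0)

_, out = forward(X)                     # fault-free evaluation after training
final_loss = float(np.mean((out - y) ** 2))
acc = float(np.mean((out > 0.5) == y))
```

A network trained this way can then be evaluated under injected faults (calling `forward` with each possible `fault`) to estimate the recognition rate the abstract refers to.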