Investigating the Fault Tolerance of Neural Networks

  • Authors:
  • Elko B. Tchernev, Rory G. Mulvaney, and Dhananjay S. Phatak

  • Affiliation:
  • Computer Science and Electrical Engineering Department, University of Maryland Baltimore County, Baltimore, MD 21250, U.S.A.

  • Venue:
  • Neural Computation
  • Year:
  • 2005

Abstract

Particular levels of partial fault tolerance (PFT) in feedforward artificial neural networks of a given size can be obtained by redundancy (replicating a smaller, normally trained network), by design (training specifically to increase PFT), or by a combination of the two (replicating a smaller, PFT-trained network). This letter investigates which method achieves the highest PFT per network size (total number of units and connections) for classification problems. It concludes that for nontoy problems, there exists a normally trained network of optimal size that produces the smallest fully fault-tolerant network when replicated. In addition, it shows that for particular network sizes, the best level of PFT is achieved by training a network of that size for fault tolerance. The results and discussion demonstrate how the outcome depends on the levels of saturation of the network nodes when classifying data points. With simple training tasks, where the complexity of the problem and the size of the network are well within the capability of the training method, the hidden-layer nodes operate close to their saturation points, and classification is clean. Under such circumstances, replicating the smallest normally trained correct network yields the highest PFT for any given network size. For hard training tasks (difficult classification problems or network sizes close to the minimum), normal training produces networks that do not operate close to their saturation points, and outputs are not as close to their targets. In this case, training a larger network for fault tolerance yields better PFT than replicating a smaller, normally trained network. However, since fault-tolerant training on its own produces networks that operate closer to their linear regions than normal training, replicating normally trained networks ultimately leads to better PFT than replicating fault-tolerant networks of the same initial size.
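The replication scheme described in the abstract (duplicating the hidden units of a trained network and splitting their outgoing weights so the fault-free function is unchanged) can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the one-hidden-layer tanh architecture, the stuck-at-zero fault model, the XOR-style inputs, and the random stand-in weights are all assumptions made for demonstration, and PFT is measured here simply as the fraction of single hidden-unit faults that change no classification decision.

```python
# Minimal sketch (not the letter's code) of replication-based partial
# fault tolerance for a one-hidden-layer classifier. The architecture,
# fault model, and data below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2, dead=None):
    """One-hidden-layer tanh network; `dead` lists hidden units stuck at zero."""
    h = np.tanh(x @ W1 + b1)
    if dead is not None:
        h[:, dead] = 0.0  # stuck-at-zero fault model (an assumption)
    return np.tanh(h @ W2 + b2)

def replicate(W1, b1, W2, b2, k):
    """Duplicate every hidden unit k times and split its outgoing weight by k,
    leaving the fault-free input-output function unchanged."""
    return (np.repeat(W1, k, axis=1),
            np.repeat(b1, k),
            np.repeat(W2, k, axis=0) / k,
            b2)

def pft(X, params):
    """Fraction of single hidden-unit faults that change no classification."""
    W1, b1, W2, b2 = params
    clean = np.sign(forward(X, W1, b1, W2, b2))
    n_hidden = W1.shape[1]
    unaffected = sum(
        np.array_equal(np.sign(forward(X, W1, b1, W2, b2, dead=[j])), clean)
        for j in range(n_hidden))
    return unaffected / n_hidden

# Toy stand-in for a "normally trained" network: random weights on four
# XOR-style points (illustrative only; no actual training is performed).
X = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
W1, b1 = rng.normal(size=(2, 4)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 1)), rng.normal(size=1)

print(f"PFT, original 4-hidden-unit net: {pft(X, (W1, b1, W2, b2)):.2f}")
print(f"PFT, same net replicated 3x:     {pft(X, replicate(W1, b1, W2, b2, 3)):.2f}")
```

Splitting each outgoing weight by k means a single stuck-at-zero replica perturbs the output sum by only 1/k of the original unit's contribution, which is why replication raises PFT most effectively when the hidden units operate near saturation and classification margins are wide, as the abstract argues.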