The sensitivity of a neural network's outputs to perturbations in its weights is an important consideration both in the design of hardware realizations and in the development of training algorithms for neural networks. In designing dense, high-speed realizations of neural networks, it is important to understand the consequences of using simple neurons with significant weight errors. Similarly, in developing training algorithms, it is important to understand the effects of small weight changes in order to determine the required precision of the weight updates at each iteration. In this paper, we analyze the sensitivity of feedforward neural networks (Madalines) to weight errors, focusing on Madalines composed of sigmoidal, threshold, and linear units. Using a stochastic model for weight errors, we derive simple analytical expressions for the variance of the output error of a Madaline. These analytical expressions agree closely with simulation results. In addition, we develop a technique for selecting the appropriate accuracy of the weights in a neural network realization. Using this technique, we compare the required weight precision for threshold versus sigmoidal Madalines, and we show that, for a given desired variance of the output error, the weights of a threshold Madaline must be more accurate than those of a sigmoidal Madaline.
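The experimental side of this setup can be sketched with a small Monte Carlo simulation: perturb every weight of a two-layer Madaline with i.i.d. zero-mean Gaussian noise and estimate the variance of the resulting output error, once with threshold (sign) units and once with sigmoidal (tanh) units. This is only an illustrative sketch, not the paper's method — the paper derives closed-form expressions rather than simulating, and the network sizes, noise level `sigma_w`, and function names here are assumptions chosen for the example.

```python
import numpy as np

def madaline_output(x, W1, W2, act):
    """Two-layer feedforward Madaline: hidden layer, then a single output unit."""
    h = act(W1 @ x)
    return act(W2 @ h)

def output_error_variance(act, sigma_w=0.01, n_in=16, n_hidden=8,
                          trials=2000, seed=0):
    """Monte Carlo estimate of the output-error variance under the stochastic
    weight-error model: each weight is perturbed by independent zero-mean
    Gaussian noise with standard deviation sigma_w."""
    rng = np.random.default_rng(seed)
    # Random nominal weights, scaled so pre-activations are O(1).
    W1 = rng.standard_normal((n_hidden, n_in)) / np.sqrt(n_in)
    W2 = rng.standard_normal((1, n_hidden)) / np.sqrt(n_hidden)
    errs = []
    for _ in range(trials):
        x = rng.choice([-1.0, 1.0], size=n_in)          # bipolar input pattern
        dW1 = sigma_w * rng.standard_normal(W1.shape)   # weight errors, layer 1
        dW2 = sigma_w * rng.standard_normal(W2.shape)   # weight errors, layer 2
        y_nominal = madaline_output(x, W1, W2, act)
        y_perturbed = madaline_output(x, W1 + dW1, W2 + dW2, act)
        errs.append(float(y_perturbed - y_nominal))
    return float(np.var(errs))

threshold = lambda z: np.sign(z)   # hard-limiting (Adaline-style) unit
sigmoid = lambda z: np.tanh(z)     # smooth sigmoidal unit

var_thr = output_error_variance(threshold)
var_sig = output_error_variance(sigmoid)
```

For the same weight-error level, threshold units produce discrete output flips of magnitude 2 whenever a perturbation pushes a pre-activation across zero, whereas sigmoidal units degrade gracefully, which is the intuition behind the paper's conclusion that threshold Madalines need more accurate weights.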