Computation of Madalines' Sensitivity to Input and Weight Perturbations
Neural Computation
An important issue in the design and implementation of a neural network is the sensitivity of its output to input and weight perturbations. In this paper, we discuss the sensitivity of the most popular and general feedforward neural network, the multilayer perceptron (MLP). The sensitivity is defined as the mathematical expectation of the output errors of the MLP due to input and weight perturbations, taken with respect to all input and weight values in a given continuous interval. The sensitivity of a single neuron is discussed first, and an approximate analytical expression is derived as a function of the absolute values of the input and weight perturbations. An algorithm is then given to compute the sensitivity of the entire MLP. As intuitively expected, the sensitivity increases with the input and weight perturbations, but the increase has an upper bound that is determined by the structural configuration of the MLP, namely the number of neurons per layer and the number of layers. There exists an optimal value for the number of neurons in a layer, which yields the highest sensitivity value. The effect of the number of layers is less intuitive: the sensitivity may decrease at first and then remain almost constant as the number of layers increases.
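Since the abstract defines the sensitivity only in words, the following is a minimal Monte Carlo sketch of that definition: the expected output deviation of an MLP under input and weight perturbations, averaged over inputs and weights drawn from a continuous interval. The tanh activation, the uniform [-1, 1] sampling interval, the fixed-magnitude random-sign perturbations, and the names mlp_forward and monte_carlo_sensitivity are all illustrative assumptions; the paper itself derives an approximate analytical expression for this expectation rather than estimating it by sampling.

import numpy as np

def mlp_forward(x, weights):
    # Forward pass through an MLP with tanh activations (an illustrative
    # choice of sigmoidal activation, not specified in the abstract).
    a = x
    for W in weights:
        a = np.tanh(W @ a)
    return a

def monte_carlo_sensitivity(layer_sizes, dx, dw, n_samples=2000, seed=0):
    # Estimate E[|delta_y|]: the expected output deviation caused by input
    # perturbations of magnitude dx and weight perturbations of magnitude dw,
    # with inputs and weights drawn uniformly from [-1, 1] (an assumed
    # interval standing in for the paper's "given continuous interval").
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0, size=layer_sizes[0])
        weights = [rng.uniform(-1.0, 1.0, size=(m, n))
                   for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
        # Perturbations with fixed absolute magnitude and random sign,
        # matching the abstract's dependence on |dx| and |dw|.
        x_p = x + dx * rng.choice([-1.0, 1.0], size=x.shape)
        weights_p = [W + dw * rng.choice([-1.0, 1.0], size=W.shape)
                     for W in weights]
        total += np.mean(np.abs(mlp_forward(x_p, weights_p)
                                - mlp_forward(x, weights)))
    return total / n_samples

# Usage: vary the structural configuration to observe the effects the
# abstract reports, e.g. the influence of the number of layers.
for layers in ([4, 8, 1], [4, 8, 8, 1], [4, 8, 8, 8, 1]):
    print(layers, monte_carlo_sensitivity(layers, dx=0.05, dw=0.05))

Sweeping dx and dw with a fixed architecture, or the width and depth with fixed perturbation magnitudes, gives a rough empirical check of the trends stated above; the bound and the optimal layer width themselves come from the paper's analytical treatment, not from this sketch.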