Neural signal-detection noise benefits based on error probability

  • Authors:
  • Ashok Patel; Bart Kosko

  • Affiliations:
  • Department of Electrical Engineering, Signal and Image Processing Institute, University of Southern California, Los Angeles, CA (both authors)

  • Venue:
  • IJCNN'09: Proceedings of the 2009 International Joint Conference on Neural Networks
  • Year:
  • 2009

Abstract

We present several necessary and sufficient conditions, and a learning algorithm, for noise benefits in threshold neural signal detection based on error probability. The first condition ensures noise benefits in threshold detection of discrete binary signals and applies to noise types from scale families. The condition also gives an easy way to compute optimal noise values for closed-form scale-family noise densities. A related condition ensures noise benefits in threshold detection of signals that have absolutely continuous distributions. This condition reduces to a simple weighted-derivative comparison of the signal densities at the detection threshold when the signal densities are continuously differentiable and when the additive noise is either zero-mean discrete bipolar or finite-variance symmetric scale-family noise. A gradient-ascent learning algorithm can find the optimal noise value for thick-tailed stable densities and many other noise probability densities that do not have a closed form.
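
As a rough illustration of the kind of noise benefit the abstract describes, the sketch below runs a finite-difference gradient search on the error probability of a single-threshold detector for a subthreshold binary signal with additive zero-mean Gaussian noise. This is not the paper's algorithm: the threshold, signal amplitude, prior, Gaussian noise model, and closed-form error probability are all assumptions made for this toy example.

```python
import math

# Toy illustration only; parameter values and the Gaussian noise model are
# assumptions, not the paper's setup. A subthreshold binary signal is detected
# by a single threshold, and additive zero-mean Gaussian noise of standard
# deviation sigma can lower the error probability (a "noise benefit").

THETA = 1.0  # detection threshold (assumed)
A = 0.6      # subthreshold signal amplitude, A < THETA (assumed)
P1 = 0.5     # prior probability that the signal is present (assumed)

def q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def error_probability(sigma):
    """P(error) for the rule 'decide signal present if observation > THETA'."""
    p_false_alarm = q(THETA / sigma)        # noise alone crosses the threshold
    p_miss = 1.0 - q((THETA - A) / sigma)   # signal plus noise stays below it
    return (1.0 - P1) * p_false_alarm + P1 * p_miss

def find_optimal_noise(sigma0=0.2, lr=0.1, steps=1000, eps=1e-4):
    """Finite-difference gradient descent on P(error) over the noise level."""
    sigma = sigma0
    for _ in range(steps):
        grad = (error_probability(sigma + eps)
                - error_probability(sigma - eps)) / (2.0 * eps)
        sigma = max(sigma - lr * grad, 1e-2)  # keep sigma strictly positive
    return sigma

if __name__ == "__main__":
    sigma_opt = find_optimal_noise()
    # With no noise the subthreshold signal never crosses THETA, so P(error) = P1.
    print(f"sigma = 0.000: P(error) = {P1:.3f}")
    print(f"sigma = {sigma_opt:.3f}: P(error) = {error_probability(sigma_opt):.3f}")
```

Here the optimal noise level can be found because the Gaussian error probability has a closed form; the paper's gradient-ascent learning approach is motivated precisely by noise densities, such as thick-tailed stable laws, for which no such closed form exists.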