Node perturbation learning without noiseless baseline

  • Authors:
  • Tatsuya Cho, Kentaro Katahira, Kazuo Okanoya, Masato Okada

  • Affiliations:
  • Graduate School of Frontier Sciences, The University of Tokyo, Kashiwa, Chiba 277-8561, Japan
  • Graduate School of Frontier Sciences, The University of Tokyo, Kashiwa, Chiba 277-8561, Japan; Riken Brain Science Institute, Wako, Saitama 351-0198, Japan; Japan Science Technology Agency, ...
  • Riken Brain Science Institute, Wako, Saitama 351-0198, Japan; Japan Science Technology Agency, ERATO Okanoya Emotional Information Project, Wako, Saitama 351-0198, Japan
  • Graduate School of Frontier Sciences, The University of Tokyo, Kashiwa, Chiba 277-8561, Japan; Riken Brain Science Institute, Wako, Saitama 351-0198, Japan; Japan Science Technology Agency, ...

  • Venue:
  • Neural Networks
  • Year:
  • 2011

Abstract

Node perturbation learning is a stochastic gradient descent method for neural networks. It estimates the gradient by comparing an evaluation of the perturbed output with that of the unperturbed output, which we call the baseline. Node perturbation learning has primarily been investigated without taking noise in the baseline into consideration. In real biological systems, however, neural activity is intrinsically noisy, and the baseline is therefore likely to be contaminated with noise. In this paper, we propose an alternative learning method that does not require such a noiseless baseline. Our method uses a "second perturbation", generated with noise independent of the first perturbation. The network weights are updated by comparing the evaluation of the outcome under the first perturbation with that under the second perturbation. We show that the learning speed decreases only linearly with the variance of the second perturbation. Moreover, using the second perturbation can yield a lower residual error than using the noiseless baseline.
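The update rule sketched in the abstract can be illustrated on a toy linear network: instead of subtracting a noiseless baseline evaluation, the evaluation under an independent second perturbation is subtracted, and the difference is correlated with the first perturbation to form a stochastic gradient estimate. The following NumPy sketch is an illustrative reading of that idea, not the paper's implementation; all names and values (`sigma1`, `sigma2`, `eta`, the network sizes, the squared-error evaluation) are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a linear network y = W x trained to match a target mapping W_true.
# Dimensions and hyperparameters are illustrative, not taken from the paper.
n_out, n_in = 3, 5
W_true = rng.standard_normal((n_out, n_in))
W = np.zeros((n_out, n_in))

sigma1 = 0.1   # std of the first (learning) perturbation
sigma2 = 0.1   # std of the second perturbation, used as a noisy baseline
eta = 0.005    # learning rate
steps = 5000

def loss(y, target):
    # Squared-error evaluation of an output.
    return 0.5 * np.sum((y - target) ** 2)

for _ in range(steps):
    x = rng.standard_normal(n_in)
    target = W_true @ x
    y = W @ x

    xi1 = sigma1 * rng.standard_normal(n_out)  # first perturbation
    xi2 = sigma2 * rng.standard_normal(n_out)  # independent second perturbation

    E1 = loss(y + xi1, target)
    E2 = loss(y + xi2, target)   # replaces the noiseless baseline loss(y)

    # Correlate the evaluation difference with the first perturbation; in
    # expectation this recovers dE/dy, and the xi2 terms contribute only
    # zero-mean noise.
    W -= eta * (E1 - E2) / sigma1 ** 2 * np.outer(xi1, x)

print("final weight MSE:", np.mean((W - W_true) ** 2))
```

Because the second perturbation is independent of the first, its contribution to the update averages out, so the weights still follow the gradient in expectation while the extra variance only slows learning.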