Hebbian learning in linear-nonlinear networks with tuning curves leads to near-optimal, multi-alternative decision making

  • Authors:
  • Tyler McMillen; Patrick Simen; Sam Behseta

  • Affiliations:
  • Department of Mathematics, California State University at Fullerton, Fullerton, CA 92834, United States (McMillen, Behseta); Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, United States (Simen)

  • Venue:
  • Neural Networks
  • Year:
  • 2011

Abstract

Optimal performance, and physically plausible mechanisms for achieving it, have been completely characterized for a general class of two-alternative, free-response decision-making tasks, and data suggest that humans can implement the optimal procedure. The situation is more complicated when there are more than two alternatives and subjects are free to respond at any time, partly because no generally applicable statistical test exists for deciding optimally in such cases. Here too, however, physically and psychologically plausible analytical approximations to optimality have been analyzed. These analyses leave open questions that have begun to be addressed: (1) How are near-optimal model parameterizations learned from experience? (2) What if a continuum of decision alternatives exists? (3) How can neurons' broad tuning curves be incorporated into an optimal-performance theory? We present a possible answer to all of these questions in the form of an extremely simple, reward-modulated Hebbian learning rule by which a neural network learns to approximate the multihypothesis sequential probability ratio test.
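
The abstract does not spell out the network or the learning rule, so the following is only a toy sketch of the ingredients it names: a bank of neurons with Gaussian tuning curves, a linear-nonlinear readout that accumulates evidence until a threshold is crossed (an MSPRT-style stopping rule), and a three-factor, reward-modulated Hebbian weight update. All names and parameters here (`K`, `N`, `tuning`, `threshold`, `lr`, the ±1 reward scheme) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: K alternatives, N neurons with Gaussian tuning curves.
K, N = 4, 32
prefs = np.linspace(0.0, 1.0, N)          # preferred stimulus of each neuron
stims = np.linspace(0.2, 0.8, K)          # stimulus value for each alternative

def tuning(s, width=0.15):
    """Mean firing rate of each neuron for stimulus s (Gaussian tuning)."""
    return np.exp(-0.5 * ((prefs - s) / width) ** 2)

def trial(W, lr=0.05, threshold=0.95, max_steps=500):
    """One free-response trial; updates W in place via a reward-modulated
    Hebbian (three-factor) rule and reports whether the choice was correct."""
    k_true = rng.integers(K)
    L = np.zeros(K)                                   # accumulated evidence
    for t in range(max_steps):
        x = tuning(stims[k_true]) + rng.normal(scale=0.5, size=N)  # noisy rates
        L += W @ x                                    # linear evidence increment
        p = np.exp(L - L.max())
        p /= p.sum()                                  # softmax readout (the nonlinearity)
        if p.max() >= threshold:                      # MSPRT-style stopping rule
            break
    choice = int(p.argmax())
    reward = 1.0 if choice == k_true else -1.0
    W += lr * reward * np.outer(p, x)                 # dW ~ reward x post x pre
    return choice == k_true

W = rng.normal(scale=0.01, size=(K, N))               # readout weights, learned
accuracy = np.mean([trial(W) for _ in range(2000)])
print(f"accuracy over training: {accuracy:.2f}")
```

In this sketch the reward signal gates an otherwise purely Hebbian outer-product update, nudging each alternative's readout weights toward the tuning-curve profile of the stimuli that earned reward, while the softmax-plus-threshold readout plays the role of the MSPRT's posterior test.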