Optimal performance and physically plausible mechanisms for achieving it have been completely characterized for a general class of two-alternative, free-response decision-making tasks, and data suggest that humans can implement the optimal procedure. The situation is more complicated when the number of alternatives is greater than two and subjects are free to respond at any time, partly because no generally applicable statistical test exists for deciding optimally in such cases. Here, too, however, physically and psychologically plausible analytical approximations to optimality have been analyzed. These analyses leave open questions that have only begun to be addressed: (1) How are near-optimal model parameterizations learned from experience? (2) What if a continuum of decision alternatives exists? (3) How can neurons' broad tuning curves be incorporated into an optimal-performance theory? We present a possible answer to all of these questions in the form of an extremely simple, reward-modulated Hebbian learning rule by which a neural network learns to approximate the multihypothesis sequential probability ratio test (MSPRT).
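The abstract does not specify the paper's network architecture or the exact form of its learning rule, so the following is only a minimal sketch of the ingredients it names: broadly tuned input neurons, one evidence accumulator per alternative, an MSPRT-style stopping rule (stop as soon as one accumulator's log odds exceed the combined evidence for the rest by a fixed margin), and a three-factor Hebbian update gated by end-of-trial reward. The task setup and all identifiers (encode, run_trial, N_ALT, N_IN, THRESH, ETA, TUNING_WIDTH) are hypothetical choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical task setup (an assumption, not from the paper) ---
N_ALT = 4          # number of decision alternatives
N_IN = 32          # broadly tuned input neurons
THRESH = 3.0       # MSPRT-style decision margin, in log-odds units
ETA = 0.01         # learning rate for the Hebbian rule
TUNING_WIDTH = 0.15

# Preferred values of the input units tile the stimulus space;
# Gaussian tuning curves give each unit a broad response profile.
preferred = np.linspace(0.0, 1.0, N_IN)
stimulus_means = np.linspace(0.1, 0.9, N_ALT)

def encode(sample):
    """Population response of broadly tuned units to one noisy sample."""
    return np.exp(-0.5 * ((sample - preferred) / TUNING_WIDTH) ** 2)

# Weights from input units to the N_ALT accumulators, shaped by reward.
W = rng.normal(0.0, 0.01, size=(N_ALT, N_IN))

def run_trial(true_alt, learn=True):
    """One free-response trial: accumulate until the stopping bound is hit."""
    logodds = np.zeros(N_ALT)     # accumulated evidence per alternative
    hebb = np.zeros_like(W)       # eligibility trace of pre*post products
    for _ in range(200):          # cap on trial length
        sample = stimulus_means[true_alt] + 0.2 * rng.normal()
        x = encode(sample)        # presynaptic activity
        y = W @ x                 # postsynaptic drive to accumulators
        logodds += y
        hebb += np.outer(y, x)    # Hebbian coactivity, held until reward
        # MSPRT-style stopping rule: the leading alternative must beat
        # the pooled evidence for all others by a fixed margin.
        best = int(np.argmax(logodds))
        rest = np.logaddexp.reduce(np.delete(logodds, best))
        if logodds[best] - rest > THRESH:
            break
    choice = int(np.argmax(logodds))
    if learn:
        reward = 1.0 if choice == true_alt else -1.0
        W += ETA * reward * hebb          # reward-modulated Hebbian update
        W -= W.mean(axis=0)               # crude normalization (an assumption)
    return choice

# Train on randomly chosen alternatives and report overall accuracy.
correct = sum(run_trial(alt := rng.integers(N_ALT)) == alt for _ in range(2000))
print(f"training accuracy over 2000 trials: {correct / 2000:.2f}")
```

The eligibility trace is the key design choice in such three-factor rules: coactivity of pre- and postsynaptic units is stored during the trial and converted into a weight change only when the single scalar reward arrives at the decision, so no per-synapse error signal is needed. The broadly tuned input layer stands in for question (3) above; whether this sketch matches the paper's actual rule cannot be verified from the excerpt.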