Evolution of recollection and prediction in neural networks

  • Authors:
  • Ji Ryang Chung, Jaerock Kwon, Yoonsuck Choe

  • Affiliations:
  • Department of Computer Science and Engineering, Texas A&M University, College Station, TX (all authors)

  • Venue:
  • IJCNN'09: Proceedings of the 2009 International Joint Conference on Neural Networks
  • Year:
  • 2009

Abstract

A large number of neural network models are based on a feedforward topology (perceptrons, backpropagation networks, radial basis functions, support vector machines, etc.) and thus lack dynamics. In such networks, the order of input presentation is meaningless (i.e., it does not affect the behavior) because the behavior is largely reactive. That is, such neural networks can only operate in the present, having no access to the past or the future. However, biological neural networks are mostly constructed with a recurrent topology, and recurrent (artificial) neural network models are able to exhibit rich temporal dynamics, so that time becomes an essential factor in their operation. In this paper, we investigate the emergence of recollection and prediction in evolving neural networks. First, we show how reactive, feedforward networks can evolve a memory-like function (recollection) by utilizing external markers dropped and detected in the environment. Second, we investigate how recurrent networks with more predictable internal state trajectories can emerge as eventual winners in the evolutionary struggle when competing networks with less predictable trajectories show the same level of behavioral performance. We expect our results to help us better understand the evolutionary origin of recollection and prediction in neuronal networks, and to better appreciate the role of time in brain function.
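
The distinction drawn in the abstract between reactive feedforward networks and recurrent networks with internal state can be illustrated with a minimal sketch. The code below is not the authors' implementation; the weight shapes, the tanh nonlinearity, and the example input sequences are illustrative assumptions. It shows that a feedforward update depends only on the current input, whereas a recurrent update's hidden state carries a trace of past inputs, so the order of presentation matters.

```python
# Minimal sketch (illustrative assumptions, not the authors' code) contrasting
# a reactive feedforward update with a recurrent update that keeps hidden state.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 3, 5, 2
W_in = rng.normal(size=(n_hid, n_in))    # input -> hidden
W_rec = rng.normal(size=(n_hid, n_hid))  # hidden -> hidden (recurrent only)
W_out = rng.normal(size=(n_out, n_hid))  # hidden -> output

def feedforward_step(x):
    """Reactive network: output depends only on the current input x."""
    h = np.tanh(W_in @ x)
    return W_out @ h

def recurrent_step(x, h_prev):
    """Recurrent network: the hidden state carries information from past
    inputs, so the same input can produce different outputs over time."""
    h = np.tanh(W_in @ x + W_rec @ h_prev)
    return W_out @ h, h

# Present the same two inputs in opposite orders: the feedforward outputs at
# each step are order-independent, but the recurrent final states differ.
seq_a = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
seq_b = list(reversed(seq_a))

h = np.zeros(n_hid)
for x in seq_a:
    _, h = recurrent_step(x, h)
h_a = h

h = np.zeros(n_hid)
for x in seq_b:
    _, h = recurrent_step(x, h)
h_b = h

print("final recurrent states differ:", not np.allclose(h_a, h_b))
```

In the paper's first setting, a purely reactive network of the feedforward kind can nonetheless behave as if it had memory by writing to the environment (dropping markers) and reading those markers back as additional inputs; the recurrent case above corresponds to the second setting, where memory is internalized in the state trajectory.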