Breaking the Synaptic Dogma: Evolving a Neuro-inspired Developmental Network

  • Authors:
  • Gul Muhammad Khan; Julian F. Miller; David M. Halliday

  • Affiliation (all authors):
  • Electronics Department, University of York, York, UK YO10 5DD

  • Venue:
  • SEAL '08 Proceedings of the 7th International Conference on Simulated Evolution and Learning
  • Year:
  • 2008

Abstract

The majority of artificial neural networks are static and lifeless: they do not change themselves within a learning environment. In these models, learning is seen as the process of adjusting the strengths of the connections between neurons (i.e. weights). We refer to this as the 'synaptic dogma'. This is in marked contrast with biological networks, which have time-dependent morphology and in which practically all neural aspects can change or be shaped by mutual interactions and by interactions with an external environment. Inspired by this and by many other aspects of neuroscience, we have designed a new kind of neural network. In this model, neurons are represented by seven evolved programs that model particular components and aspects of biological neurons (dendrites, soma, axons, synapses, and electrical and developmental behaviour). Each network begins as a small, randomly generated network of neurons. As the seven programs run, the neurons, dendrites, axons and synapses can increase or decrease in number and change through interaction with an external environment. Our aim is to show that it is possible to evolve programs that allow a network to learn through experience, i.e. that encode the ability to learn. We report on our continuing investigations in the context of learning how to play checkers.
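
To make the described architecture concrete, the Python sketch below shows one possible way such a developmental neuron could be organised: each neuron holds references to seven shared "evolved" programs and, on every step, runs an electrical pass (dendrites to soma to axon/synapses) followed by a developmental pass that can grow or prune structure. This is an illustrative sketch only, not the authors' implementation; the class names (Program, Neuron), the particular seven-way split of program roles, and the grow/prune thresholds are assumptions made for illustration.

import random


class Program:
    """Placeholder for one evolved program: maps an input vector to outputs."""

    def __init__(self, seed):
        self.rng = random.Random(seed)

    def run(self, inputs):
        # A real evolved program would compute structured outputs (potentials,
        # weight/health updates, grow/prune signals); here we return small
        # pseudo-random values so the sketch executes end to end.
        return [self.rng.uniform(-1.0, 1.0) for _ in inputs]


class Neuron:
    # One hypothetical split of the seven program roles named in the abstract:
    # electrical and developmental behaviour of dendrites, soma and axon,
    # plus synapse processing.
    PROGRAM_ROLES = (
        "dendrite_electrical", "soma_electrical", "axon_electrical",
        "dendrite_development", "soma_development", "axon_development",
        "synapse",
    )

    def __init__(self, programs):
        self.programs = programs       # role name -> Program
        self.dendrites = [0.0]         # dendrite branch potentials
        self.axon_synapses = [0.0]     # synapse weights on the axon
        self.soma_potential = 0.0
        self.health = 1.0              # drives growth and pruning

    def step(self, external_input):
        """One combined electrical/developmental update of this neuron."""
        # Electrical path: dendrites -> soma -> axon -> synapses.
        d_out = self.programs["dendrite_electrical"].run(self.dendrites + [external_input])
        self.soma_potential = sum(self.programs["soma_electrical"].run(d_out))
        a_out = self.programs["axon_electrical"].run([self.soma_potential])
        output = sum(self.programs["synapse"].run(self.axon_synapses + a_out))

        # Developmental path: structure can grow or shrink over time.
        grow_d = sum(self.programs["dendrite_development"].run([self.health]))
        if grow_d > 0.5 and len(self.dendrites) < 16:
            self.dendrites.append(0.0)                 # grow a dendrite branch
        elif grow_d < -0.5 and len(self.dendrites) > 1:
            self.dendrites.pop()                       # prune a dendrite branch
        grow_a = sum(self.programs["axon_development"].run([self.health]))
        if grow_a > 0.5 and len(self.axon_synapses) < 16:
            self.axon_synapses.append(0.0)             # grow a synapse
        elif grow_a < -0.5 and len(self.axon_synapses) > 1:
            self.axon_synapses.pop()                   # prune a synapse
        self.health += 0.1 * sum(self.programs["soma_development"].run([self.soma_potential]))
        return output


# Usage: every neuron shares the same seven programs (the evolved genotype);
# each neuron then develops its own structure while the network runs.
programs = {role: Program(seed=i) for i, role in enumerate(Neuron.PROGRAM_ROLES)}
network = [Neuron(programs) for _ in range(3)]
for _ in range(5):
    outputs = [n.step(external_input=0.3) for n in network]
print("dendrite branches per neuron:", [len(n.dendrites) for n in network])

The key design point this sketch tries to capture is that the genotype (the seven programs) is fixed during a run, while the phenotype (the number of dendrites, synapses and neurons, and their states) changes as the programs execute, which is how the model departs from weight-only learning.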