Neuroscience inspired architecture for neural computing

  • Authors:
  • Subha Fernando, Koichi Yamada, Ashu Marasinghe

  • Affiliation:
  • Nagaoka University of Technology, Kamitomioka, Nagaoka, Niigata, Japan

  • Venue:
  • Proceedings of the 13th International Conference on Humans and Computers
  • Year:
  • 2010

Abstract

This paper proposes a new architecture for neural computing. The modeled architecture consists of neurons, each with a large number of synapses. These synapses do not merely connect neurons; they compute their own excitation levels and adjust their connections accordingly. Their excitation depends on activity arising from short-term and long-term plasticity mechanisms. Indeed, our modeled synapses express both long-term and short-term behaviors at the same time and allow interaction between the two plasticity mechanisms. A synapse on a neuron may therefore be activated by short-term activity, by long-term activity, or by both. Identifying how many synapses have been activated, and by which plasticity mechanism, gives more information than simply stating which neuron is activated. Because of these computationally capable synapses, a given neuron can likewise be activated by either of the two plasticity mechanisms. Identifying the pattern of neuron activation due to long-term plasticity and the pattern due to short-term plasticity therefore yields two interpretations of the same input on two different time scales: the short-term activation of neurons can be regarded as the formation of short-term memory, while the long-term activation of neurons is a generalization over the inputs. A higher cognitive representation of the input can be regarded as the pattern of neuron activation across the lower and higher areas of the hidden layer.
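To make the dual-plasticity idea concrete, the following is a minimal sketch of a synapse that keeps a fast-decaying short-term trace alongside a slowly accumulated long-term weight, together with a neuron that reports which mechanism activated it. The class names, decay constants, learning rate, and activation thresholds are illustrative assumptions for this sketch, not the authors' specification.

```python
import numpy as np

class DualPlasticitySynapse:
    """Sketch of a synapse tracking short-term and long-term activity.

    The decay constant, learning rate, and threshold are illustrative
    assumptions; the paper does not specify them.
    """

    def __init__(self, st_decay=0.5, lt_rate=0.01, threshold=0.5):
        self.st_trace = 0.0      # fast, decaying short-term excitation
        self.lt_weight = 0.0     # slowly accumulated long-term weight
        self.st_decay = st_decay
        self.lt_rate = lt_rate
        self.threshold = threshold

    def update(self, pre_activity):
        # Short-term trace: jumps with each input, decays between inputs.
        self.st_trace = self.st_decay * self.st_trace + pre_activity
        # Long-term weight: slow Hebbian-style accumulation, gated here by
        # the short-term trace so the two mechanisms interact.
        self.lt_weight += self.lt_rate * pre_activity * self.st_trace

    def activation(self):
        """Report which plasticity mechanism(s) currently activate the synapse."""
        return {
            "short_term": self.st_trace > self.threshold,
            "long_term": self.lt_weight > self.threshold,
        }

class Neuron:
    """A neuron activated when enough synapses fire under either mechanism."""

    def __init__(self, n_synapses=100, quorum=0.3):
        self.synapses = [DualPlasticitySynapse() for _ in range(n_synapses)]
        self.quorum = quorum  # fraction of active synapses required

    def step(self, inputs):
        for syn, x in zip(self.synapses, inputs):
            syn.update(x)
        states = [syn.activation() for syn in self.synapses]
        st = sum(s["short_term"] for s in states) / len(states)
        lt = sum(s["long_term"] for s in states) / len(states)
        # Two readouts of the same input on two time scales.
        return {"short_term": st >= self.quorum, "long_term": lt >= self.quorum}

# Example: drive one neuron with random input and observe both readouts.
rng = np.random.default_rng(0)
neuron = Neuron(n_synapses=50)
for t in range(20):
    print(t, neuron.step(rng.random(50)))
```

Run over repeated presentations, the short-term readout responds almost immediately while the long-term readout turns on only after many exposures, mirroring the two time scales described in the abstract.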