Simulation of chaotic EEG patterns with a dynamic model of the olfactory system
Biological Cybernetics
Short term memory in recurrent networks of spiking neurons
Natural Computing: an international journal
Attractor Landscapes and Active Tracking: The Neurodynamics of Embodied Action
Adaptive Behavior - Animals, Animats, Software Agents, Robots, Adaptive Systems
In contrast to Hopfield-like networks, random recurrent neural networks (RRNNs), whose couplings are drawn at random, exhibit complex dynamics (limit cycles, chaos). Information can nevertheless be stored in these networks through Hebbian learning. Eventually, learning "destroys" the dynamics and drives the network to a fixed-point attractor. Here we investigate the structural changes that learning induces in the network. We show that a simple Hebbian learning rule redistributes the synaptic weights from an initial homogeneous, random distribution to a heterogeneous one in which strong synaptic weights preferentially assemble into triangles. Learning thus organizes the network of large synaptic weights into a "small-world" one.
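The mechanism described in the abstract can be illustrated with a minimal numpy sketch: a random recurrent network of rate units is iterated while a Hebbian rule strengthens couplings between co-active units, after which we inspect the subgraph of the strongest weights. This is not the paper's exact model; the network size, coupling gain, learning rate, and the top-5% weight threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                        # network size (assumed)
g = 1.5                                        # coupling gain, chaotic regime (assumed)
J = rng.normal(0.0, g / np.sqrt(N), (N, N))    # homogeneous random couplings
np.fill_diagonal(J, 0.0)

def step(x):
    # Discrete-time firing-rate dynamics with a tanh transfer function.
    return np.tanh(J @ x)

x = rng.normal(0.0, 0.5, N)
eta = 0.01                                     # Hebbian learning rate (assumed)
for _ in range(500):
    x = step(x)
    J += eta * np.outer(x, x) / N              # Hebbian rule: reinforce co-active pairs
    np.fill_diagonal(J, 0.0)

# After learning, measure how much successive states still move;
# a small drift indicates the dynamics have collapsed toward a fixed point.
x1 = step(x)
x2 = step(x1)
drift = np.linalg.norm(x2 - x1)

# Keep only the largest weights and count triangles in that subgraph,
# mirroring the observation that strong weights cluster.
thr = np.quantile(np.abs(J), 0.95)             # top 5% of |weights| (assumed cutoff)
A = ((np.abs(J) >= thr) | (np.abs(J.T) >= thr)).astype(int)
np.fill_diagonal(A, 0)
triangles = int(np.trace(A @ A @ A)) // 6      # each triangle counted 6 times in tr(A^3)
print(f"post-learning drift: {drift:.4f}, strong-weight triangles: {triangles}")
```

A fuller analysis would compare the triangle count (or clustering coefficient) of the strong-weight subgraph against a degree-matched random graph, which is the standard way to detect small-world structure.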