This study examines the generation of temporal synchrony in an artificial neural network built on a stochastic synaptic model. The network is driven by Poisson-distributed spike trains, and white Gaussian noise is added to the internal synaptic activity to represent background activity (neuronal noise). A Hebbian-based learning rule updates the synaptic parameters, with only an arbitrarily selected subset of synapses permitted to learn, i.e., to change their parameter values. The average of the cross-correlation coefficients between smoothed versions of the responses of all neurons is taken as the indicator of synchrony. Results show that a network using this framework can reach different states of synchrony through learning, supporting the plausibility of stochastic models of neural processing. The findings are also consistent with arguments that synchrony is part of the memory-recall process, and they fit the accepted framework of biological neural systems.
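The synchrony indicator described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian smoothing width, bin count, and spike probability are assumed values chosen for the example, and the function names (`smooth`, `synchrony_index`) are hypothetical.

```python
import numpy as np

def smooth(spikes, sigma=5.0):
    """Convolve a binary spike train with a Gaussian kernel to
    obtain a smoothed firing-rate estimate (assumed smoothing)."""
    width = int(3 * sigma)
    t = np.arange(-width, width + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(spikes, kernel, mode="same")

def synchrony_index(spike_trains, sigma=5.0):
    """Average pairwise cross-correlation coefficient between
    smoothed spike trains; values near 1 indicate synchrony."""
    rates = [smooth(s, sigma) for s in spike_trains]
    n = len(rates)
    coeffs = [np.corrcoef(rates[i], rates[j])[0, 1]
              for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(coeffs))

# Identical trains are perfectly synchronous; independent
# Poisson-like trains yield a much lower index.
rng = np.random.default_rng(0)
train = (rng.random(1000) < 0.05).astype(float)
print(synchrony_index([train, train.copy()]))   # → 1.0
indep = [(rng.random(1000) < 0.05).astype(float) for _ in range(4)]
print(synchrony_index(indep))                   # small value near 0
```

Smoothing before correlating matters here: raw binary spike trains rarely overlap bin-for-bin, so correlating the smoothed rate estimates captures near-coincident firing rather than only exact coincidences.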