To understand the principles of information processing in the brain, we depend on models with more than 10^5 neurons and 10^9 connections. These networks can be described as graphs of threshold elements that exchange point events over their connections. From the computer science perspective, the key challenges are to represent the connections succinctly; to transmit events and update neuron states efficiently; and to provide a comfortable user interface. We present here the neuronal network simulator NEST, which addresses all these requirements. To simulate very large networks with acceptable time and memory requirements, NEST uses a hybrid strategy, combining distributed simulation across cluster nodes (MPI) with thread-based simulation on each computer. Benchmark simulations of a computationally hard biological neuronal network model demonstrate that hybrid parallelization yields significant performance benefits on clusters of multi-core computers, compared to purely MPI-based distributed simulation.