Neural networks present a fundamentally different model of computation from the conventional sequential digital model. Modelling large networks on conventional hardware thus tends to be inefficient if not impossible. Neither dedicated neural chips, with model limitations, nor FPGA implementations, with scalability limitations, offer a satisfactory solution, even though both have improved simulation performance dramatically. SpiNNaker introduces a different approach, the "neuromimetic" architecture, that maintains the neural optimisation of dedicated chips while offering FPGA-like universal configurability. Central to this parallel multiprocessor is an asynchronous event-driven model that uses interrupt-generating dedicated hardware on the chip to support real-time neural simulation. While this architecture is particularly suitable for spiking models, it can also implement "classical" neural models like the MLP efficiently. Nonetheless, event handling, particularly servicing incoming packets, requires careful and innovative design in order to avoid local processor congestion and possible deadlock. Using two exemplar models, a spiking network using Izhikevich neurons and an MLP network, we illustrate how to implement efficient service routines to handle input events. These routines form the beginnings of a library of "drop-in" neural components. Ultimately, the goal is the creation of a library-based development system that allows the modeller to describe a model in a high-level neural description environment of their choice and use an automated tool chain to create the appropriate SpiNNaker instantiation. The complete system of universal hardware, automated tool chain, and embedded system management represents the "ideal" neural modelling environment: a general-purpose platform that can generate an arbitrary neural network and run it with hardware speed and scale.