A communication infrastructure for emulating large-scale neural networks models
ICANN'12 Proceedings of the 22nd international conference on Artificial Neural Networks and Machine Learning - Volume Part I
This paper presents a network architecture for interconnecting VLSI neural network chips into a distributed ANN system. The architecture combines techniques from circuit switching and packet switching to provide two service classes: isochronous connections and best-effort packet transfers. The isochronous connections transport the axonal data of artificial neurons between VLSI ANN models that run at speedups of multiple orders of magnitude compared to biology. These connections use reserved bandwidth to provide lossless transmission, low end-to-end delay, and bounded jitter. Best-effort packet transfers use the remaining bandwidth for on-demand, multi-purpose communication. Data forwarding is performed between synchronized instances of a dedicated switch architecture used at each network node. The switch is scalable in both port count and line speed, and its low complexity allows an implementation in programmable logic or directly within a VLSI neural network chip. A reference implementation of the proposed network architecture is presented within an existing framework that hosts VLSI neural network chips operating at speedups of 10^4 to 10^5. The network architecture is not limited to VLSI neural networks; in principle, it can be used in any network environment that requires both isochronous connections and packet processing.
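The two service classes described in the abstract can be illustrated with a small time-division sketch: slots of a repeating frame that are reserved for an isochronous connection always carry its traffic first, while every slot left over serves best-effort packets. This is a hypothetical toy model, not the paper's switch design; the class name, frame layout, and queue discipline are illustrative assumptions.

```python
from collections import deque

class SlotScheduler:
    """Toy TDM scheduler: reserved slots carry isochronous traffic,
    all remaining capacity serves best-effort packets.
    (Illustrative sketch only; not the switch architecture from the paper.)"""

    def __init__(self, frame_len, reserved_slots):
        self.frame_len = frame_len            # slots per TDM frame
        self.reserved = set(reserved_slots)   # slot indices with reserved bandwidth
        self.iso = deque()                    # isochronous (e.g. axonal) events
        self.best_effort = deque()            # on-demand packets

    def enqueue_iso(self, event):
        self.iso.append(event)

    def enqueue_be(self, packet):
        self.best_effort.append(packet)

    def next_transmission(self, slot):
        """Return (class, payload) sent in this slot, or None if idle."""
        idx = slot % self.frame_len
        if idx in self.reserved and self.iso:
            # Reserved slot: isochronous data has guaranteed bandwidth,
            # which bounds its delay and jitter to the frame period.
            return ("iso", self.iso.popleft())
        if self.best_effort:
            # Unreserved or idle reserved slot: best-effort traffic
            # consumes whatever bandwidth remains.
            return ("best-effort", self.best_effort.popleft())
        return None

sched = SlotScheduler(frame_len=4, reserved_slots=[0])
sched.enqueue_iso("spike-event")
sched.enqueue_be("config-packet")
print(sched.next_transmission(0))  # reserved slot -> isochronous event
print(sched.next_transmission(1))  # free slot -> best-effort packet
```

Because the isochronous queue is drained only in reserved slots, its worst-case wait is one frame period, which is the intuition behind the bounded-jitter guarantee the abstract claims.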