In this paper we revisit the concept of virtualization, which is useful for understanding and investigating performance/price and other trade-offs in the hardware design space, and is arguably the most important dimension of a hardware design space exploration. Such exploration is a necessary part of studying hardware architectures for large-scale computational models of intelligent computing, including AI, Bayesian, bio-inspired, and neural models: a methodical exploration identifies potentially interesting regions of the design space and assesses the relative performance/price points of candidate implementations. As an example, we investigate the performance/price of hardware implementations of human-cortex-scale spiking neural systems based on digital and mixed-signal CMOS and on hypothetical CMOL (nanogrid) technology. The analysis, and the resulting performance/price points, demonstrates the general importance of virtualization and of this kind of design space exploration. The specific results suggest that a hybrid nanotechnology such as CMOL is a promising candidate for implementing very large-scale spiking neural systems, making more efficient use of the density and storage benefits of emerging nano-scale technologies. More broadly, we believe that the study of such hypothetical designs and architectures will guide the neuromorphic hardware community toward building large-scale systems, and will help steer research trends in intelligent computing and computer engineering.
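The core idea of virtualization here is time-multiplexing: many model neurons share one physical processing node, trading update throughput against silicon cost. The sketch below is a toy design-space sweep illustrating that trade-off; the constants (network size, per-node logic area, per-neuron state-memory area, cost per mm², update rate) are illustrative placeholders, not figures for any real CMOS or CMOL process from the paper.

```python
import math

# All numbers below are hypothetical placeholders for illustration only.
NEURONS = 10_000_000          # neurons the system must implement
LOGIC_MM2 = 0.02              # fixed logic area of one processing node (mm^2)
STATE_MM2 = 0.001             # state-memory area per multiplexed neuron (mm^2)
COST_PER_MM2 = 0.1            # cost per mm^2 of silicon ($)
UPDATES_PER_NODE_HZ = 1e6     # neuron updates one node performs per second

def performance_price(virtualization):
    """Return (updates/s, cost, updates/s per $) when each physical node
    is time-shared among `virtualization` neurons."""
    nodes = math.ceil(NEURONS / virtualization)
    # Each node needs state memory for every neuron it hosts, so its
    # area grows with the virtualization factor.
    area_per_node = LOGIC_MM2 + virtualization * STATE_MM2
    cost = nodes * area_per_node * COST_PER_MM2
    # Fewer nodes means fewer updates per second across the whole system.
    throughput = nodes * UPDATES_PER_NODE_HZ
    return throughput, cost, throughput / cost

for v in (1, 10, 100, 1000):
    perf, cost, ratio = performance_price(v)
    print(f"virtualization={v:>4}: {perf:.2e} updates/s, "
          f"${cost:,.0f}, {ratio:.2e} updates/s/$")
```

In this toy model, raising the virtualization factor amortizes the fixed logic cost over more neurons but lowers aggregate throughput, so the performance/price optimum depends on the ratio of logic area to state-memory area; a methodical sweep of this kind, with real technology parameters, is what the paper's design space exploration performs at scale.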