This paper reviews recent important results in the development of neuromorphic network architectures ('CrossNets') for future hybrid semiconductor/nanodevice integrated circuits. In particular, we have shown that, despite the hardware-imposed limitations, a simple weight-import procedure allows CrossNets built from simple two-terminal nanodevices to perform functions (such as image recognition and pattern classification) that had earlier been demonstrated only in neural networks with continuous, deterministic synaptic weights. Moreover, CrossNets can also be trained to work as classifiers by the faster error-backpropagation method, despite the absence of the layered structure typical of conventional neural networks. Finally, one more method, 'global reinforcement', may be suitable for training CrossNets to perform not only pattern classification but also more complex cognitive tasks. A demonstration of such training would open a way towards artificial cerebral-cortex-scale networks capable of advanced information processing (and possibly self-development) at a speed several orders of magnitude higher than that of their biological prototypes. Copyright © 2004 John Wiley & Sons, Ltd.
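To make the 'global reinforcement' idea concrete, the sketch below shows one generic reward-modulated scheme: weights are perturbed at random, and a perturbation is kept only if a single global reward signal (here, classification accuracy on a toy task) does not decrease. The discrete weight levels crudely mimic the limited states of two-terminal nanodevices. This is a minimal illustrative sketch, not the paper's actual CrossNet training algorithm; the task, the number of levels, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: a linearly separable binary classification problem
# standing in for the pattern-classification benchmarks discussed in the text.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = np.sign(X @ w_true)

# A small set of discrete synaptic levels, mimicking the few stable states
# of a simple two-terminal nanodevice (the count 9 is an assumption).
LEVELS = np.linspace(-1.0, 1.0, 9)

def quantize(w):
    # Snap each weight to the nearest allowed device level.
    idx = np.abs(w[:, None] - LEVELS[None, :]).argmin(axis=1)
    return LEVELS[idx]

def reward(w):
    # Global scalar reward broadcast to the whole network:
    # classification accuracy on the toy task.
    return float(np.mean(np.sign(X @ w) == y))

# Global-reinforcement loop: random weight perturbation, accepted only
# when the single reward signal does not drop (greedy hill climbing).
w = quantize(0.1 * rng.normal(size=8))
best = reward(w)
for step in range(500):
    trial = quantize(w + rng.normal(scale=0.3, size=8))
    r = reward(trial)
    if r >= best:
        w, best = trial, r
```

Note that the rule needs no per-synapse error signal, which is what makes reinforcement-style training attractive for dense nanodevice crossbars where only a global feedback wire is practical.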