Antidromic Spikes Drive Hebbian Learning in an Artificial Dendritic Tree
Analog Integrated Circuits and Signal Processing - Special issue on Learning on Silicon
We describe neuromorphic, variable-weight synapses on artificial dendrites that facilitate experimentation with correlative adaptation rules. Attention is focused on those aspects of biological synaptic function that could affect a neuromorphic network's computational power and adaptive capability. These include sublinear summation, quantal synaptic noise, and independent adaptation of adjacent synapses. The diffusive nature of artificial dendrites adds considerable flexibility to the design of fast synapses by allowing conductances to be enabled for short or for variable lengths of time. We present two complementary synapse designs: the shared conductance array and the self-timed synapse. Both synapse circuits behave as conductances to mimic biological synapses and thus enable sublinear summation. The former achieves weight variation by selecting different conductances from an on-chip array, and the latter by modulating the length of time a constant conductance remains activated. Both work with our interchip communication system, virtual wires. For the present purpose of comparing various adaptation mechanisms in software, synaptic weights are stored off chip. This simplifies the addition of quantal weight noise and allows connections from different sources to the same dendritic compartment to have independent weights.
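The sublinear summation mentioned above follows directly from modeling synapses as conductances rather than current sources: each active conductance also lowers the compartment's input resistance, so two inputs together depolarize less than twice what one does alone. The sketch below illustrates this with a single passive compartment at steady state; all parameter values are illustrative assumptions, not taken from the paper's circuits.

```python
# Sketch: why conductance-based synapses sum sublinearly.
# A single passive compartment with a leak conductance (reversal at rest, 0 mV)
# and excitatory synaptic conductances with reversal E_syn. The steady-state
# depolarization is a conductance-weighted average of the reversal potentials.
# Parameter values are illustrative only.

E_SYN = 60.0    # excitatory synaptic reversal potential, mV above rest
G_LEAK = 10.0   # leak conductance, nS

def steady_state_depolarization(g_syns):
    """Steady-state depolarization (mV) for a list of active synaptic
    conductances (nS) on one compartment."""
    g_syn_total = sum(g_syns)
    return g_syn_total * E_SYN / (G_LEAK + g_syn_total)

v_one = steady_state_depolarization([5.0])        # one synapse active
v_two = steady_state_depolarization([5.0, 5.0])   # two identical synapses

print(v_one, v_two)
# Two synapses depolarize less than twice one synapse (v_two < 2 * v_one),
# because the added conductance loads the compartment.
assert v_two < 2 * v_one
```

The same expression also shows how the self-timed synapse can trade conductance magnitude for activation time: over a short window, the charge a synapse delivers scales with both its conductance and how long it stays enabled, so modulating on-time gives a second route to weight variation.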