Recently, the authors described a training method for a convolutional neural network of threshold neurons. The hidden layers are trained by clustering in a feed-forward manner, while the output layer is trained using the supervised Perceptron rule. The system is designed for implementation on an existing low-power analog hardware architecture that exhibits inherent error sources affecting the computational accuracy in unspecified ways. One key technique is to train the network on-chip, taking possible errors into account without any need to quantify them. For the hidden layers, such an on-chip approach has been applied previously. In the present work, a chip-in-the-loop version of the iterative Perceptron rule is introduced for training the output layer. The influence of various types of errors (noisy, deleted, and clamped weights) is thoroughly investigated for all network layers, using the MNIST database of handwritten digits as a benchmark.
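
As a rough illustration of the chip-in-the-loop idea described above, the NumPy sketch below trains a single output neuron with the Perceptron rule while every forward pass goes through a simulated error-afflicted substrate. The error model (Gaussian weight noise plus randomly deleted and clamped synapses), the error rates, the +/-1 label convention, and all function names are assumptions made for illustration; they stand in for the authors' actual analog hardware and training schedule, which are not specified here.

import numpy as np

rng = np.random.default_rng(0)

def apply_hardware_errors(w, noise_std=0.05, p_deleted=0.01,
                          p_clamped=0.01, clamp_val=1.0):
    # Noisy weights: additive Gaussian perturbation on every synapse.
    w_eff = w + rng.normal(0.0, noise_std, size=w.shape)
    # Deleted weights: a random subset is stuck at zero.
    w_eff[rng.random(w.shape) < p_deleted] = 0.0
    # Clamped weights: a random subset is stuck at a fixed value.
    w_eff[rng.random(w.shape) < p_clamped] = clamp_val
    return w_eff

def chip_output(w, x):
    # Stand-in for evaluating the output layer on the analog chip:
    # the threshold neuron sees the distorted weights, not the
    # ideally stored ones.
    return 1.0 if apply_hardware_errors(w) @ x >= 0.0 else -1.0

def train_output_layer(X, y, epochs=10, lr=0.1):
    # Chip-in-the-loop Perceptron rule: predictions come from the
    # error-afflicted "hardware", while updates are applied to the
    # ideally stored weights, so training absorbs the errors without
    # ever quantifying them.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, y):
            if chip_output(w, x) != t:
                w += lr * t * x  # standard Perceptron update on a mistake
    return w

# Toy usage (hypothetical data, not MNIST): separate points by the
# sign of their first coordinate.
X = rng.normal(size=(200, 10))
y = np.sign(X[:, 0])
w = train_output_layer(X, y)

The point of this sketch is the asymmetry between reading and writing: the forward pass always runs through the faulty substrate, so the learned weights implicitly compensate for noise and stuck synapses that were never explicitly measured.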