Implementing probabilistic models in Very-Large-Scale Integration (VLSI) is attractive for implantable biomedical devices, where such models can improve sensor fusion. However, hardware non-idealities introduce training errors that hinder optimal modelling through on-chip adaptation. This paper investigates the feasibility of using dynamic current mirrors to implement a simple and precise training circuit. The precision required for training the Continuous Restricted Boltzmann Machine (CRBM) is first identified. A training circuit based on accumulators formed from dynamic current mirrors is then proposed. Measurements of these accumulators fabricated in VLSI demonstrate the feasibility of training the CRBM on chip according to its minimizing-contrastive-divergence rule.
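For context, the minimizing-contrastive-divergence rule referred to above updates each weight from the difference between data-driven and one-step-reconstruction correlations, roughly dw_ij = eta * (<s_i s_j>_0 - <s_i s_j>_1); this correlation difference is the quantity an on-chip accumulator would need to integrate with sufficient precision. The following is a minimal software sketch of one such CD-1 step, not the authors' circuit or exact formulation: the function names (crbm_unit, cd1_updates), the tanh-shaped stochastic unit, the fixed noise level, and the unit-slope visible layer are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def crbm_unit(x, a, sigma, rng):
    """Continuous stochastic unit (assumed form): a sigmoid squashing of
    the noisy total input, giving a state in (-1, 1)."""
    return np.tanh(a * (x + rng.normal(0.0, sigma, size=np.shape(x))))

def cd1_updates(v0, W, a_h, sigma, rng, eta=0.01):
    """One minimizing-contrastive-divergence (CD-1) weight update.

    v0:  batch of training data (visible states), shape (B, nv)
    W:   visible-to-hidden weights, shape (nv, nh)
    a_h: hidden-unit slope/noise-scaling parameters, shape (nh,)

    Returns dW = eta * (<v h>_0 - <v h>_1), the correlation difference
    that a hardware accumulator would integrate.
    """
    h0 = crbm_unit(v0 @ W, a_h, sigma, rng)     # hidden states from data
    v1 = crbm_unit(h0 @ W.T, 1.0, sigma, rng)   # one-step reconstruction
    h1 = crbm_unit(v1 @ W, a_h, sigma, rng)     # hidden states from recon
    B = v0.shape[0]
    return eta * ((v0.T @ h0) - (v1.T @ h1)) / B

# Illustrative usage with made-up dimensions and data:
nv, nh, B = 8, 4, 16
W = rng.normal(0.0, 0.1, size=(nv, nh))
a_h = np.ones(nh)
v0 = rng.normal(0.0, 0.5, size=(B, nv))
W += cd1_updates(v0, W, a_h, sigma=0.2, rng=rng)
```

Because the update depends only on the sign and magnitude of a correlation difference, the precision of the accumulator that forms this difference, rather than the absolute accuracy of any single analogue multiplication, is what the paper identifies as the binding constraint for on-chip training.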