An analog circuit architecture for Gaussian-kernel support vector machines (SVMs) with on-chip training capability has been developed. It has a scalable array-processor configuration, and the circuit size grows only in proportion to the number of learning samples. Thanks to the hardware-friendly algorithm employed in the present work, the learning function is realized by attaching only a small additional circuit to the SVM classification hardware, which is composed of an array of Gaussian circuits. Although the system is inherently analog, the input and output signals, including the training results, are all available in digital format; the learned parameters can therefore be easily stored and reused after training sessions. A proof-of-concept chip containing a 2-class, 2-D, 12-template classifier was designed and fabricated in a 0.18-µm CMOS technology. Experimental results obtained from the fabricated chips are presented and compared with theoretical calculations. The chip classifies 8.7 × 10⁵ vectors per second, and the average power dissipation was 220 µW. The learning capability was tested using eight fabricated chips, and the variability among them was evaluated. Successful operation of the chips was confirmed by the measurement results, demonstrating that on-chip learning can compensate for analog imperfections.
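The classification performed by the array of Gaussian circuits can be sketched in software: each template (stored learning sample) contributes a Gaussian response to the input vector, and the weighted sum of these responses determines the class. The following is a minimal sketch of that Gaussian-kernel decision function, not the chip's actual circuit-level algorithm; the function names, the kernel parameter `gamma`, and the example templates are illustrative assumptions.

```python
import math

def gaussian_kernel(x, t, gamma):
    """Gaussian (RBF) response of one template circuit to input vector x."""
    d2 = sum((xi - ti) ** 2 for xi, ti in zip(x, t))
    return math.exp(-gamma * d2)

def svm_classify(x, templates, alphas, labels, bias, gamma):
    """Weighted sum over the template array, like the parallel Gaussian
    circuits on the chip, followed by a sign decision."""
    s = bias
    for t, a, y in zip(templates, alphas, labels):
        s += a * y * gaussian_kernel(x, t, gamma)
    return 1 if s >= 0 else -1

# Hypothetical 2-class, 2-D configuration (the chip holds 12 templates;
# two are shown here for brevity).
templates = [(0.0, 0.0), (1.0, 1.0)]
alphas = [1.0, 1.0]   # learned weights (illustrative values)
labels = [+1, -1]     # class labels of the templates
```

For an input near the +1 template, `svm_classify((0.1, 0.0), templates, alphas, labels, 0.0, 1.0)` returns `+1`; near the -1 template it returns `-1`. On the chip this sum is evaluated in parallel across all template circuits, which is why the throughput is independent of the template count.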