This paper presents a novel pipelined architecture for fast competitive learning (CL). The architecture serves as a hardware accelerator in a system on programmable chip (SOPC) to reduce computation time. A novel codeword swapping scheme allows the neuron competition processes for different training vectors to operate concurrently. The neuron updating process is based on a hardware divider using simple table-lookup operations; the divider performs finite-precision calculation, lowering area cost at the expense of a slight degradation in training performance. Experimental results show that the proposed system's CPU time is lower than that of other hardware implementations, and of software implementations running the CL training program with or without custom-hardware support.
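For reference, the CL training loop the paper accelerates can be sketched in software. This is a minimal illustration under common assumptions (winner-take-all competition by squared Euclidean distance, and a 1/count update step that motivates the hardware divider); the function name, initialization, and update rule are illustrative and not the paper's exact design.

```python
import numpy as np

def competitive_learning(vectors, n_codewords, seed=0):
    """Software sketch of a competitive-learning (CL) training loop.

    Each training vector triggers a neuron competition (nearest codeword
    wins) followed by a neuron update that divides by the winner's win
    count -- the division the paper replaces with a table-lookup divider.
    """
    rng = np.random.default_rng(seed)
    # Initialize the codebook from randomly chosen training vectors
    # (an assumption for illustration).
    idx = rng.choice(len(vectors), size=n_codewords, replace=False)
    codebook = vectors[idx].astype(float)
    wins = np.ones(n_codewords)  # per-neuron win counts (update divisors)
    for x in vectors:
        # Neuron competition: find the nearest codeword (winner-take-all).
        winner = np.argmin(((codebook - x) ** 2).sum(axis=1))
        # Neuron update: move the winner toward x with a 1/count step.
        wins[winner] += 1
        codebook[winner] += (x - codebook[winner]) / wins[winner]
    return codebook
```

In the paper's pipelined design, the competition and update stages above run concurrently for different training vectors, which the codeword swapping scheme makes safe; the sequential loop here shows only the per-vector logic.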