An efficient pipelined architecture for fast competitive learning
ICA3PP'10 Proceedings of the 10th international conference on Algorithms and Architectures for Parallel Processing - Volume Part II
A novel hardware architecture for the competitive learning (CL) algorithm with k-winners-take-all (kWTA) activation is presented in this paper. The architecture is used as a custom logic block in the arithmetic logic unit (ALU) of the softcore NIOS processor for CL training. Both the partial distance search (PDS) module and the hardware divider adopt finite-precision calculation to reduce area cost, at the expense of a slight degradation in training performance. The PDS module also employs subspace search and multiple-coefficient accumulation techniques to effectively reduce the computation latency of the PDS search. Experimental results show that the CPU time of the proposed architecture is lower than that of a Pentium IV processor running the CL training program without the support of custom hardware.
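The partial distance search described in the abstract can be illustrated in software. The sketch below is a minimal C version of PDS nearest-codeword search with a multiple-coefficient accumulation step: several squared differences are accumulated per iteration before testing the partial distance against the current minimum, approximating how a hardware pipeline would process a subspace of coefficients per cycle. The function name, the group size of 4, and the integer data types are illustrative assumptions, not details from the paper.

```c
#include <stddef.h>

/* Partial distance search (PDS): return the index of the codeword
   nearest to input vector x under squared Euclidean distance.
   Accumulation over the vector dimensions stops early as soon as the
   partial distance already exceeds the best distance found so far.
   The group size of 4 coefficients per early-exit test is an
   illustrative stand-in for multiple-coefficient accumulation. */
size_t pds_nearest(const int *x, const int *codebook,
                   size_t num_codewords, size_t dim)
{
    size_t best = 0;
    long best_dist = -1;  /* -1 marks "no full distance computed yet" */

    for (size_t c = 0; c < num_codewords; ++c) {
        const int *cw = &codebook[c * dim];
        long dist = 0;
        size_t d = 0;
        while (d < dim) {
            /* accumulate a group of squared differences before the
               early-exit comparison */
            for (size_t k = 0; k < 4 && d < dim; ++k, ++d) {
                long diff = (long)x[d] - (long)cw[d];
                dist += diff * diff;
            }
            if (best_dist >= 0 && dist >= best_dist)
                break;  /* partial distance already too large: reject */
        }
        if (best_dist < 0 || dist < best_dist) {
            best_dist = dist;
            best = c;
        }
    }
    return best;
}
```

Grouping coefficients reduces the number of comparisons relative to testing after every single dimension, which mirrors the latency reduction the hardware obtains by accumulating multiple coefficients per clock cycle.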