IP core implementation of a self-organizing neural network
IEEE Transactions on Neural Networks
Hardware implementations of VQ (vector quantization) and the SOM (self-organizing map) permit the deployment of these computationally intensive algorithms as single chips or IP cores. This paper discusses the design of such an IP core based on an SIMD (single instruction, multiple data) processor array, with emphasis on the aspects of the design that lead to a low-power implementation. The power-reduction techniques described are: local memory sharing between processors; processor instruction-set and datapath organization; implementation of the winner-take-all calculation; and use of a thresholding algorithm that permits processors to power down during the distance calculation. It is shown that, with a typical 0.13 µm low-power semiconductor process and a clock speed of 100 MHz, the power dissipation per processor is approximately 1 mW without thresholding; with thresholding, this falls to less than 0.5 mW per processor. The area of a 256-processor array with 256 8-bit vector elements per processor is 3.5 mm × 2.5 mm.
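To illustrate the thresholding idea described in the abstract, the following is a minimal software sketch (not the paper's actual hardware design): each row of a codebook stands in for one processor's local weight vector, a partial distance is accumulated element by element, and a processor whose partial distance already exceeds the current best match "powers down" for the rest of the calculation, modelled here by simply abandoning its loop. Manhattan distance is assumed for the distance metric, as it is a common choice in hardware SOM implementations; the function and variable names are illustrative only.

```python
def thresholded_bmu(codebook, x):
    """Find the best-matching unit (winner) for input vector x.

    Sketch of thresholded winner-take-all search: accumulate each
    candidate's Manhattan distance element by element, and abort a
    candidate as soon as its partial distance reaches the best
    distance found so far (the "threshold"). In hardware, aborting
    corresponds to powering the processor down for the remainder of
    the distance calculation.
    """
    best_idx, best_dist = -1, float("inf")
    for i, w in enumerate(codebook):
        partial = 0
        for a, b in zip(w, x):
            partial += abs(a - b)
            if partial >= best_dist:   # threshold exceeded: power down
                break
        else:
            # Loop completed without aborting, so this candidate beats
            # the previous best; it becomes the new winner/threshold.
            best_idx, best_dist = i, partial
    return best_idx, best_dist


# Example: three candidate weight vectors, 3 elements each.
codebook = [[10, 10, 10], [0, 1, 2], [200, 200, 200]]
winner, dist = thresholded_bmu(codebook, [1, 1, 1])
# winner is index 1 (distance 2); the third candidate aborts after
# its first element, since 199 already exceeds the threshold of 2.
```

Note the ordering effect: the tighter the early best distance, the sooner later candidates cross the threshold, which is why thresholding roughly halves the per-processor power in the reported results.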