Parallel codebook design for vector quantization on a message passing MIMD architecture
Parallel Computing - Parallel computing in image and video processing
Vector quantization (VQ) is an attractive technique for lossy data compression, a key technology for data storage and transfer. Various competitive learning (CL) algorithms have been proposed to design optimal codebooks that minimize quantization error. Although algorithmic improvements have made codebook design faster than with conventional methods, the achievable speedup remains limited when large data sets are processed on a single processor. Given the variety of CL algorithms, parallel processing on flexible computing environments such as general-purpose parallel computers is needed for large-scale codebook design. This paper presents a formulation for efficiently parallelizing CL algorithms, suitable for distributed-memory parallel computers with a message-passing mechanism. Based on this formulation, we parallelize three CL algorithms: the Kohonen learning algorithm, the MMPDCL algorithm, and the LOJ algorithm. Experimental results show high scalability of the parallel algorithms on three commercially available parallel computers of different types: an IBM SP2, a NEC AzusA, and a PC cluster.
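As background, the kind of update such CL algorithms build on can be sketched as plain winner-take-all competitive learning. This is an illustrative sequential sketch only, not the paper's parallel formulation; `cl_epoch` and its parameters are hypothetical names.

```python
# Illustrative sketch only: one epoch of winner-take-all competitive
# learning (CL) for VQ codebook design. Names are hypothetical,
# not taken from the paper.
import random

def cl_epoch(data, codebook, lr=0.05):
    """One pass of competitive learning: for each training vector, move the
    nearest codeword (the 'winner') a step toward that vector."""
    for x in data:
        # Winner = codeword with minimum squared Euclidean distance to x.
        winner = min(range(len(codebook)),
                     key=lambda i: sum((c - a) ** 2
                                       for c, a in zip(codebook[i], x)))
        # Update the winning codeword in place.
        codebook[winner] = [c + lr * (a - c)
                            for c, a in zip(codebook[winner], x)]
    return codebook

if __name__ == "__main__":
    random.seed(0)
    # Two well-separated 2-D clusters around (0, 0) and (5, 5).
    data = ([[random.gauss(0, 0.3), random.gauss(0, 0.3)] for _ in range(100)]
            + [[random.gauss(5, 0.3), random.gauss(5, 0.3)] for _ in range(100)])
    random.shuffle(data)
    codebook = [[1.0, 1.0], [4.0, 4.0]]
    for _ in range(10):
        cl_epoch(data, codebook)
    print(codebook)  # each codeword ends up near one cluster center
```

In the distributed-memory setting the abstract describes, the training set would be partitioned across processors and the per-processor codebook updates combined through message passing; the exact combining scheme is the subject of the paper's formulation.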