We propose a novel approach for quantizing the weights of a multi-layer perceptron (MLP) for efficient VLSI implementation. Our approach uses soft weight sharing, previously proposed for improved generalization, and considers the weights not as constant numbers but as random variables drawn from a Gaussian mixture distribution, which includes k-means clustering and uniform quantization as special cases. This approach couples the training of weights for reduced error with their quantization. Simulations on synthetic and real regression and classification data sets compare various quantization schemes and demonstrate the advantage of coupled training of the distribution parameters.
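As a rough illustration of the idea, the snippet below sketches the two ingredients the abstract mentions: a Gaussian mixture prior over the weights, whose negative log-likelihood can be added to the task loss as a regularizer during training, and a final hard quantization step that snaps each weight to its nearest mixture mean. The function names and the simple 1-D NumPy formulation are our own for illustration; this is not the paper's exact training procedure.

```python
import numpy as np

def gmm_neg_log_prior(weights, pis, mus, sigmas):
    """Negative log-likelihood of weights under a 1-D Gaussian mixture.

    pis, mus, sigmas are arrays of per-component mixing proportions,
    means, and standard deviations. Adding this term to the task loss
    pulls weights toward the component means while the means themselves
    can also be trained (the "coupled" training of the abstract).
    """
    w = np.asarray(weights, dtype=float).reshape(-1, 1)   # (n_weights, 1)
    dens = pis * np.exp(-0.5 * ((w - mus) / sigmas) ** 2) \
           / (np.sqrt(2.0 * np.pi) * sigmas)              # (n_weights, k)
    return -np.sum(np.log(dens.sum(axis=1)))

def quantize_to_means(weights, mus):
    """Hard-quantize each weight to the nearest mixture mean."""
    w = np.asarray(weights, dtype=float)
    idx = np.argmin(np.abs(w.reshape(-1, 1) - mus), axis=1)
    return mus[idx].reshape(w.shape)
```

With fixed, equally spaced means and equal variances the prior reduces to uniform quantization; letting only the means move while the responsibilities are hard recovers k-means clustering of the weights.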