We propose a vector quantisation method that not only provides a compact description of data vectors in terms of codebook vectors, but also explains each codebook vector as a binary combination of elementary features. This corresponds to the intuitive notion that, in the real world, patterns can usefully be thought of as compositions of simpler features. The model can be understood as a generative model in which each codebook vector is generated by a hidden binary state vector. It is non-probabilistic in the sense that it assigns each data vector to a single codebook vector. We describe exact and approximate algorithms for learning deterministic feature representations. In contrast to probabilistic models, the deterministic approach allows the use of message-propagation algorithms within the learning scheme; these are compared with standard mean-field and Gibbs-sampling learning. We show that Generative Vector Quantisation performs well on large-scale real-world tasks such as image compression and handwritten digit analysis with up to 400 data dimensions.
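A minimal sketch of the generative picture described above, under assumptions not spelled out in the abstract: elementary features are vectors in data space, a codebook vector is the sum of the features switched on by a hidden binary state, and the deterministic (exact) assignment searches all 2^K states for the one whose codebook vector is closest to the data vector in squared error. The feature values and dimensions below are invented for illustration; the paper's actual learning algorithms are not reproduced here.

```python
from itertools import product

def codebook_vector(features, state):
    """Codebook vector = sum of the elementary features switched on by `state`."""
    dim = len(features[0])
    return [sum(f[d] for f, s in zip(features, state) if s) for d in range(dim)]

def encode(x, features):
    """Exact deterministic assignment: exhaustive search over all 2^K binary
    states for the codebook vector closest to x in squared error."""
    best_state, best_err = None, float("inf")
    for state in product([0, 1], repeat=len(features)):
        c = codebook_vector(features, state)
        err = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        if err < best_err:
            best_state, best_err = state, err
    return best_state, best_err

# Three hypothetical elementary features in a 4-dimensional data space.
features = [
    [1.0, 0.0, 0.0, 1.0],
    [0.0, 1.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 0.0],
]

# A data vector generated from the binary state (1, 0, 1), with small noise.
x = [1.1, -0.1, 0.9, 1.0]
state, err = encode(x, features)
print(state)  # recovered binary state explaining x
```

The exhaustive search is only feasible for small numbers of features; the abstract's approximate algorithms (message propagation, mean-field, Gibbs sampling) exist precisely to avoid this exponential assignment step.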