The minimum description length (MDL) principle can be used to train the hidden units of a neural network to extract a representation that is cheap to describe but nonetheless allows the input to be reconstructed accurately. We show how MDL can be used to develop highly redundant population codes. Each hidden unit has a location in a low-dimensional implicit space. If the hidden-unit activities form a bump of a standard shape in this space, they can be cheaply encoded by the center of the bump. The weights from the input units to the hidden units in an autoencoder are therefore trained to make the activities form a standard bump. The coordinates of the hidden units in the implicit space are also learned, giving the network the flexibility to develop a discontinuous topography when presented with inputs from different classes.
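The core encoding idea can be sketched in a few lines: if the hidden activities form a bump of a standard shape over the units' implicit coordinates, the whole activity vector can be summarized by the bump's center, and the description cost reduces to that one coordinate plus the residual mismatch. The sketch below is illustrative only; the bump shape, the grid-search fitting, and all names are assumptions for this toy demonstration, not the paper's exact training procedure.

```python
import numpy as np

def standard_bump(coords, center, width=1.0):
    """Standard-shape Gaussian bump over the implicit coordinates.

    `coords` holds each hidden unit's (learned) location in the
    1-D implicit space; the bump shape and width are fixed in advance.
    """
    return np.exp(-0.5 * ((coords - center) / width) ** 2)

def fit_bump_center(coords, activities, width=1.0, n_candidates=201):
    """Find the bump center that best matches the activities.

    A simple grid search over candidate centers; the best center is
    the cheap code that stands in for the full activity vector.
    """
    candidates = np.linspace(coords.min(), coords.max(), n_candidates)
    errors = [np.sum((activities - standard_bump(coords, c, width)) ** 2)
              for c in candidates]
    return candidates[int(np.argmin(errors))]

# Toy example: 50 hidden units at evenly spaced implicit coordinates,
# with activities that already form a standard bump centered at 4.0.
coords = np.linspace(0.0, 10.0, 50)
activities = standard_bump(coords, center=4.0)

est_center = fit_bump_center(coords, activities)
# Residual after encoding with only the bump center; small residuals
# mean the activities really are cheap to describe this way.
residual = np.sum((activities - standard_bump(coords, est_center)) ** 2)
```

In the actual autoencoder the pressure works in the other direction: the input-to-hidden weights are trained so that this residual (part of the description length) stays small, i.e. so that the activities come to look like a standard bump.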