Neural Computation
Neural Networks: A Comprehensive Foundation
Learning linear, sparse, factorial codes
Towards novel neuroscience-inspired computing
Emergent neural computational architectures based on neuroscience
We investigate how structured information processing in a neural network can emerge through unsupervised learning from data. The model consists of input neurons and hidden neurons that are recurrently connected. Within a maximum-likelihood framework, the task is to reconstruct given input data using the code of the hidden units. The hidden neurons are fully connected and may code on different hierarchical levels; they are separated into two groups by intrinsic parameters that control their firing properties, and these differential properties encourage the two groups to code on two different hierarchical levels. We train the network on data generated either by two linear models acting in parallel or by a hierarchical process. As a result of training, the network captures the structure of the data-generating process. Simulations were performed with two different neural network models, both trained to be maximum-likelihood predictors of the training data: a (non-linear) hierarchical Kalman filter model and a Helmholtz machine. We compare both models with the neural circuitry of the cortex. The results imply that the division of the cortex into laterally and hierarchically organized areas can evolve, to a certain degree, as an adaptation to the environment.
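To make the reconstruction-by-hidden-code idea concrete, the following is a minimal wake-sleep sketch in the spirit of the Helmholtz machine mentioned in the abstract. It is not the authors' implementation: the network size, the toy data, and the learning rates are all illustrative assumptions, and the generative hidden bias stands in, very loosely, for an "intrinsic parameter" controlling a unit's firing propensity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid = 6, 2
# Recognition weights R: visible -> hidden; generative weights G: hidden -> visible.
R = rng.normal(0, 0.1, (n_hid, n_vis))
G = rng.normal(0, 0.1, (n_vis, n_hid))
g_bias = np.zeros(n_vis)  # generative bias of visible units
h_bias = np.zeros(n_hid)  # generative bias of hidden units ("intrinsic parameter", illustrative)

# Toy data: two binary prototypes corrupted by 5% bit flips (an assumed stand-in
# for data produced by two generative causes acting in parallel).
protos = np.array([[1, 1, 1, 0, 0, 0],
                   [0, 0, 0, 1, 1, 1]], dtype=float)

def sample_batch(n):
    x = protos[rng.integers(0, 2, n)]
    flips = rng.random(x.shape) < 0.05
    return np.abs(x - flips)

lr, batch = 0.1, 32
for step in range(2000):
    x = sample_batch(batch)
    # Wake phase: sample a hidden code from the recognition model,
    # then nudge the generative model toward reconstructing the input.
    h = (rng.random((batch, n_hid)) < sigmoid(x @ R.T)).astype(float)
    x_rec = sigmoid(h @ G.T + g_bias)
    G += lr * (x - x_rec).T @ h / batch
    g_bias += lr * (x - x_rec).mean(0)
    h_bias += lr * (h - sigmoid(h_bias)).mean(0)
    # Sleep phase: dream samples from the generative model,
    # then nudge the recognition model toward recovering their hidden causes.
    hd = (rng.random((batch, n_hid)) < sigmoid(h_bias)).astype(float)
    xd = (rng.random((batch, n_vis)) < sigmoid(hd @ G.T + g_bias)).astype(float)
    h_pred = sigmoid(xd @ R.T)
    R += lr * (hd - h_pred).T @ xd / batch

# After training, reconstructing a prototype through its hidden code
# should beat the chance-level error of 0.5.
h = sigmoid(protos @ R.T)
rec = sigmoid((h > 0.5).astype(float) @ G.T + g_bias)
err = np.abs(rec - protos).mean()
```

Both phases use only locally available quantities (pre- and post-synaptic activity), which is what makes this training scheme a candidate analogy for cortical learning; the paper's actual models add recurrence and hierarchy on top of this basic loop.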