Elements of information theory
Natural gradient works efficiently in learning. Neural Computation.
Spikes: exploring the neural code
Exploring randomness
Probabilistic Networks and Expert Systems
Information geometry on hierarchy of probability distributions. IEEE Transactions on Information Theory.
Dynamical properties of strongly interacting Markov chains. Neural Networks.
Stochastic interaction in associative nets. Neurocomputing.
The hypothesis of invariant maximization of interaction (IMI) is formulated within the setting of random fields. According to this hypothesis, learning processes maximize the stochastic interaction of the neurons subject to constraints. We consider an extrinsic constraint in the form of a fixed input distribution on the periphery of the network. Our main intrinsic constraint is a directed acyclic network structure. First mathematical results on the close relation between the local information flow and the global interaction are stated in order to investigate the possibility of controlling IMI optimization in a completely local way. Furthermore, we discuss some relations of this approach to optimization according to Linsker's Infomax principle.
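As a concrete illustration of the quantity being maximized, the stochastic interaction of a set of units is commonly measured as the multi-information: the sum of the marginal entropies minus the joint entropy, i.e. the divergence of the joint distribution from the product of its marginals. The following is a minimal sketch, not the paper's method; the function name and the two toy distributions are our own, and it ignores the extrinsic and structural constraints discussed above.

```python
import numpy as np

def multi_information(p):
    """Multi-information (stochastic interaction) of a joint distribution.

    p: array of shape (2, 2, ..., 2) holding joint probabilities over
       n binary units. Returns sum of marginal entropies minus the
       joint entropy, in bits; 0 iff the units are independent.
    """
    p = np.asarray(p, dtype=float)
    n = p.ndim
    eps = 1e-12  # guard against log(0); zero-probability terms contribute 0
    joint_entropy = -np.sum(p * np.log2(p + eps))
    marginal_entropies = 0.0
    for i in range(n):
        other_axes = tuple(j for j in range(n) if j != i)
        m = p.sum(axis=other_axes)  # marginal distribution of unit i
        marginal_entropies += -np.sum(m * np.log2(m + eps))
    return marginal_entropies - joint_entropy

# Two independent fair bits: no interaction.
indep = np.full((2, 2), 0.25)
# Two perfectly correlated fair bits: maximal interaction, 1 bit.
corr = np.array([[0.5, 0.0], [0.0, 0.5]])
```

A learning process under the IMI hypothesis would, loosely speaking, adjust the network parameters so that this quantity increases while the prescribed input distribution and the acyclic wiring stay fixed.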