We extend Linsker's Infomax principle for feedforward neural networks to a measure of stochastic interdependence that captures spatial and temporal signal properties in recurrent systems. This measure, stochastic interaction, quantifies the Kullback-Leibler divergence of a Markov chain from the product of its split chains for the single-unit processes. For unconstrained Markov chains, the maximization of stochastic interaction, also called Temporal Infomax, has previously been shown to result in almost deterministic dynamics. This letter considers Temporal Infomax on constrained Markov chains, where some of the units are clamped to prescribed stochastic processes that provide input to the system. Temporal Infomax in that case leads to finite state automata, either completely deterministic or weakly nondeterministic. Transitions between internal states of these systems are almost perfectly predictable given the complete current state and the input, but the activity of each single unit alone is virtually random. The results are demonstrated by means of computer simulations and confirmed analytically. It is furthermore shown numerically that Temporal Infomax leads to a high information flow from the input to the internal units and that a simple temporal learning rule can approximately achieve the maximization of stochastic interaction. We relate these results to experimental data concerning the correlation dynamics and functional connectivities observed in multiple-electrode recordings.
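
For concreteness, the following is a minimal numerical sketch of the stochastic interaction measure, assuming the standard definition for a stationary Markov chain of N binary units: the Kullback-Leibler divergence of the joint chain from the product of its single-unit split chains, which reduces per time step to I = sum_nu H(X'_nu | X_nu) - H(X' | X). The function name, the toy kernel, and all variable names are illustrative and not taken from the paper.

    import itertools
    import numpy as np

    def stochastic_interaction(P, mu, n_units):
        # P[s, t]: transition probability from global state s to global state t,
        # where states index binary tuples in lexicographic order;
        # mu[s]: stationary distribution over global states.
        states = list(itertools.product([0, 1], repeat=n_units))
        n = len(states)
        # Global conditional entropy H(X' | X) of the joint chain.
        H_global = -sum(mu[s] * P[s, t] * np.log2(P[s, t])
                        for s in range(n) for t in range(n) if P[s, t] > 0)
        # Sum of single-unit conditional entropies H(X'_nu | X_nu) of the split chains.
        H_split = 0.0
        for nu in range(n_units):
            pair = np.zeros((2, 2))          # joint p(x_nu = a, x'_nu = b)
            for s in range(n):
                for t in range(n):
                    pair[states[s][nu], states[t][nu]] += mu[s] * P[s, t]
            marg = pair.sum(axis=1)          # marginal p(x_nu = a)
            H_split -= sum(pair[a, b] * np.log2(pair[a, b] / marg[a])
                           for a in range(2) for b in range(2) if pair[a, b] > 0)
        return H_split - H_global            # KL divergence in bits per time step

    # Toy usage: a random two-unit kernel and its stationary distribution.
    rng = np.random.default_rng(0)
    P = rng.random((4, 4))
    P /= P.sum(axis=1, keepdims=True)        # row-normalize to a stochastic matrix
    mu = np.ones(4) / 4
    for _ in range(1000):                    # power iteration toward stationarity
        mu = mu @ P
    print(stochastic_interaction(P, mu, n_units=2))

Maximizing this quantity over P, with the rows of some units clamped to a prescribed input process, would correspond to the constrained Temporal Infomax optimization studied in the letter.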