Neural Computation
In previous work (Olshausen & Field 1996), an algorithm was described for learning linear sparse codes which, when trained on natural images, produces a set of basis functions that are spatially localized, oriented, and bandpass (i.e., wavelet-like). This note shows how the algorithm may be interpreted within a maximum-likelihood framework. Several useful insights emerge from this connection: it makes explicit the relation to statistical independence (i.e., factorial coding), it shows a formal relationship to the algorithm of Bell and Sejnowski (1995), and it suggests how to adapt parameters that were previously fixed.
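To make the setup concrete, here is a minimal NumPy sketch of this style of sparse coding, not the authors' exact algorithm: under the maximum-likelihood view, the squared reconstruction error corresponds to a Gaussian noise model and the L1 penalty to a sparse (Laplacian-like) prior on the coefficients. The function name, data, and all hyperparameters (`n_basis`, `lam`, `lr`) are illustrative choices, and coefficient inference is done with a generic ISTA-style step rather than the original gradient scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(X, n_basis=8, lam=0.1, n_iter=50, lr=0.05):
    """Toy sparse-coding sketch (not the Olshausen & Field algorithm).

    Alternates between:
      1. inferring coefficients A with one ISTA step
         (gradient on the reconstruction error, then soft-threshold,
         which implements the sparse prior), and
      2. a gradient step on the basis Phi, followed by renormalizing
         each basis function to unit norm.
    """
    n_dim, n_samples = X.shape
    Phi = rng.standard_normal((n_dim, n_basis))
    Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm basis functions
    A = np.zeros((n_basis, n_samples))
    for _ in range(n_iter):
        # Coefficient inference: gradient step with a safe step size
        # (1 / spectral norm of Phi^T Phi), then soft-thresholding.
        step = 1.0 / np.linalg.norm(Phi.T @ Phi, 2)
        A = A - step * (Phi.T @ (Phi @ A - X))
        A = np.sign(A) * np.maximum(np.abs(A) - lam * step, 0.0)
        # Basis update: gradient descent on the mean reconstruction error.
        Phi -= lr * ((Phi @ A - X) @ A.T) / n_samples
        Phi /= np.linalg.norm(Phi, axis=0, keepdims=True) + 1e-12
    return Phi, A

# Random data stands in for whitened natural-image patches.
X = rng.standard_normal((16, 200))
Phi, A = sparse_code(X)
```

Trained on whitened natural-image patches instead of random data, such an objective is what yields the localized, oriented, bandpass basis functions described in the abstract; the soft-threshold is also what drives many coefficients exactly to zero, i.e., the sparse code itself.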