Many computational models have been proposed to interpret the response properties of neurons in the primary visual cortex (V1), but relatively few address neurons beyond V1. Recently, it was shown that a sparse deep belief network (DBN) trained on natural images can reproduce some response properties of neurons in the secondary visual cortex (V2). In this paper, by investigating the key factors behind the sparse DBN's success, we propose a hierarchical model based on a simple algorithm, K-means, which can be realized by competitive Hebbian learning. The resulting model exhibits several response properties of V2 neurons, and it is more biologically plausible and computationally efficient than the sparse DBN.
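The abstract's central algorithmic claim is that K-means can be realized by competitive Hebbian learning. A minimal sketch of that correspondence (illustrative only, not the paper's actual implementation; the function name, learning rate, and other parameters are assumptions) is online K-means with a winner-take-all update: each input activates its nearest centroid, and only that winner's weights move toward the input.

```python
import numpy as np

def competitive_kmeans(data, k, lr=0.1, epochs=10, seed=0):
    """Online K-means via winner-take-all competitive Hebbian learning.

    Each input activates its nearest centroid (the "winner"), and only
    that centroid's weight vector is moved toward the input -- a
    Hebbian update gated by competition. (Illustrative sketch, not the
    paper's implementation.)
    """
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen data points
    centroids = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Competition: find the centroid closest to the input
            winner = np.argmin(np.linalg.norm(centroids - x, axis=1))
            # Hebbian update: move only the winner toward the input
            centroids[winner] += lr * (x - centroids[winner])
    return centroids
```

With a decaying learning rate this update converges to the same centroids as batch K-means; the winner-take-all competition is what makes the rule biologically interpretable as lateral inhibition among units.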