Simulations of cortical computation have often focused on networks built from simplified neuron models resembling the rate models hypothesized for V1 simple cells. Physiological research, however, has revealed that even V1 simple cells are surprisingly complex. Our computational simulations explore how this complexity affects the visual system's ability to solve simple tasks, such as categorizing shapes and digits, after learning from a limited number of examples. Using recently proposed high-throughput methodology, we explore which axes of modeling complexity are useful in these categorization tasks. We find that complex cell rate models learn to categorize objects better than simple cell models, without incurring extra computational expense; that the squaring of linear filter responses leads to better performance; and that several other components of physiologically derived models do not yield better performance.
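The contrast between simple cell and complex cell rate models, and the squaring of linear filter responses, can be illustrated with the classic energy model of complex cells. The sketch below is an illustrative assumption, not the paper's actual implementation: a simple cell is modeled as a half-wave rectified Gabor filter response, and a complex cell as the sum of squared responses of a quadrature (90-degree phase-shifted) Gabor pair, which yields approximate phase invariance.

```python
import numpy as np

def gabor(size, theta, phase, freq=0.2, sigma=2.0):
    # 2-D Gabor filter: oriented sinusoid under a Gaussian envelope
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr + phase)

def simple_cell(patch, theta):
    # Simple-cell rate model: half-wave rectified linear filter response
    return max(0.0, float(np.sum(patch * gabor(patch.shape[0], theta, 0.0))))

def complex_cell(patch, theta):
    # Energy-model complex cell: squared responses of a quadrature Gabor
    # pair are summed, discarding phase while keeping orientation tuning
    r0 = float(np.sum(patch * gabor(patch.shape[0], theta, 0.0)))
    r90 = float(np.sum(patch * gabor(patch.shape[0], theta, np.pi / 2)))
    return r0**2 + r90**2

rng = np.random.default_rng(0)
patch = rng.standard_normal((9, 9))
print(simple_cell(patch, 0.0), complex_cell(patch, 0.0))
```

Both models cost one or two inner products per unit, which is consistent with the abstract's observation that the complex cell model's advantage comes without extra computational expense.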