Recent algorithms for sparse coding and independent component analysis (ICA) have demonstrated how localized features can be learned from natural images. However, these approaches do not take image transformations into account. We describe an unsupervised algorithm for learning both localized features and their transformations directly from images using a sparse bilinear generative model. We show that from an arbitrary set of natural images, the algorithm produces oriented basis filters that can simultaneously represent features in an image and their transformations. The learned generative model can be used to translate features to different locations, thereby reducing the need to learn the same feature at multiple locations, a limitation of previous approaches to sparse coding and ICA. Our results suggest that by explicitly modeling the interaction between local image features and their transformations, the sparse bilinear approach can provide a basis for achieving transformation-invariant vision.
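The bilinear generative model described above can be sketched in a few lines of NumPy. This is a minimal illustration under assumed names and dimensions (the basis tensor `W`, the feature coefficients `x`, and the transformation coefficients `y` are hypothetical placeholders, not the paper's learned quantities): an image patch is synthesized by contracting a learned basis tensor with both a "what" vector (which features are present) and a "where" vector (how they are transformed), so changing the transformation coefficients moves a feature without relearning it.

```python
import numpy as np

# Sketch of bilinear image synthesis (illustrative dimensions, random values):
#     z = sum_{i,j} W[:, i, j] * x[i] * y[j]
# where x selects features and y selects their transformation.
rng = np.random.default_rng(0)

n_pixels, n_features, n_transforms = 64, 10, 5
W = rng.standard_normal((n_pixels, n_features, n_transforms))  # basis tensor
x = rng.standard_normal(n_features)    # "what": feature coefficients (sparse in the full model)
y = rng.standard_normal(n_transforms)  # "where": transformation coefficients

# Bilinear synthesis: contract the basis tensor with both coefficient vectors.
z = np.einsum('pij,i,j->p', W, x, y)

# Translating the represented features amounts to swapping y while keeping x fixed,
# so the same feature need not be learned separately at every location.
y_shifted = rng.standard_normal(n_transforms)
z_shifted = np.einsum('pij,i,j->p', W, x, y_shifted)
```

In the full algorithm, `W`, `x`, and `y` would be estimated from natural images under sparseness constraints; the sketch shows only the generative (synthesis) direction.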