Sparse coding algorithms find a linear basis in which signals can be represented by a small number of non-zero coefficients. Such coding may play an important role in neural information processing, and metabolically efficient natural solutions serve as an inspiration for algorithms employed in various areas of computer science. In particular, finding the non-zero coefficients in overcomplete sparse coding is a computationally hard problem, for which different approximate solutions have been proposed. Methods that minimize the magnitude of the coefficients (the 'ℓ1-norm') instead of minimizing the size of the active subset of features (the 'ℓ0-norm') may find the optimal solutions, but they do not scale well with problem size and use centralized algorithms. Iterative, greedy methods, on the other hand, are fast, but they require a priori knowledge of the number of non-zero features, often find suboptimal solutions, and converge to the final sparse form through a series of non-sparse representations. In this article we propose a neurally plausible algorithm that efficiently integrates an ℓ0-norm-based probabilistic sparse coding model with ideas inspired by novel iterative solutions. Furthermore, the resulting algorithm does not require an exactly defined sparseness level and is thus suitable for representing natural stimuli with a varying number of features. We demonstrate that our combined method can find optimal solutions in cases where other, ℓ1-norm-based algorithms already fail.
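The 'iterative, greedy' family contrasted with ℓ1-norm methods above includes matching pursuit and its descendants. As a concrete illustration, here is a minimal Python/NumPy sketch of orthogonal matching pursuit; this is not the algorithm proposed in the article, and the function name omp, the random Gaussian dictionary, and the fixed sparsity level k are assumptions made for the example. The sketch selects dictionary atoms one at a time and refits the active coefficients by least squares:

    # Minimal sketch of a greedy l0-style solver (orthogonal matching pursuit).
    # Illustrative only: omp, the Gaussian dictionary D, and the fixed
    # sparsity level k are assumptions, not the article's proposed method.
    import numpy as np

    def omp(D, x, k):
        """Greedily pick k columns of dictionary D to approximate signal x."""
        residual = x.copy()
        support = []                              # indices of active features
        for _ in range(k):
            # select the atom most correlated with the current residual
            j = int(np.argmax(np.abs(D.T @ residual)))
            support.append(j)
            # refit coefficients on the selected support by least squares
            coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            residual = x - D[:, support] @ coeffs
        a = np.zeros(D.shape[1])
        a[support] = coeffs
        return a

    # usage: overcomplete dictionary (64-dim signals, 256 atoms), 5 active atoms
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
    a_true = np.zeros(256)
    a_true[rng.choice(256, 5, replace=False)] = 1.0
    x = D @ a_true
    a_hat = omp(D, x, k=5)

Note that the sparseness level k must be supplied up front; removing exactly this requirement is one of the stated advantages of the proposed combined method.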