While evidence indicates that neural systems may employ sparse approximations to represent sensed stimuli, the mechanisms underlying this ability are not understood. We describe a locally competitive algorithm (LCA) that solves a family of sparse coding problems by minimizing a weighted combination of mean-squared error and a coefficient cost function. LCAs are designed to be implemented in a dynamical system composed of many neuron-like elements operating in parallel. These algorithms use thresholding functions to induce local (usually one-way) inhibitory competition between nodes, producing sparse representations. LCAs yield coefficients with sparsity levels comparable to those of the most popular centralized sparse coding algorithms while being readily suited to neural implementation. Additionally, LCA coefficients for video sequences exhibit inertial properties that are both qualitatively and quantitatively more regular (i.e., smoother and more predictable) than the coefficients produced by greedy algorithms.
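The dynamics sketched in the abstract — neuron-like nodes that integrate a driving input, pass their internal state through a thresholding function, and inhibit one another in proportion to the overlap of their dictionary elements — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the soft-threshold choice (which corresponds to an L1 coefficient cost), the dictionary `Phi`, and all parameter values (`lam`, `dt`, `tau`, `n_steps`) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(u, lam):
    # Thresholding function: states with magnitude below lam stay silent,
    # which is what induces sparsity in the output coefficients.
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(x, Phi, lam=0.05, dt=0.01, tau=0.1, n_steps=1000):
    """Minimal LCA sketch (assumed parameters, not the paper's code).

    Each node leaks toward its driving input b while active nodes
    inhibit their neighbors through the dictionary Gram matrix.
    """
    b = Phi.T @ x                            # driving input to each node
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition weights
    u = np.zeros(Phi.shape[1])               # internal (membrane) states
    for _ in range(n_steps):
        a = soft_threshold(u, lam)           # active output coefficients
        u += (dt / tau) * (b - u - G @ a)    # leaky integration + inhibition
    return soft_threshold(u, lam)

# Toy usage: recover a 3-sparse signal in a random overcomplete dictionary.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128))
Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm dictionary columns
a_true = np.zeros(128)
a_true[[3, 40, 90]] = [1.0, -0.8, 0.6]
x = Phi @ a_true
a = lca(x, Phi)
print(np.count_nonzero(a), np.linalg.norm(x - Phi @ a))
```

Note that only nodes whose state exceeds the threshold exert inhibition (inhibition is "one-way" in that sense): subthreshold nodes are silent and do not suppress their competitors, so the computation stays local and parallel.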