Note on learning rate schedules for stochastic optimization
NIPS-3: Proceedings of the 1990 Conference on Advances in Neural Information Processing Systems 3
Ten lectures on wavelets
Dictionary learning algorithms for sparse representation
Neural Computation
General design algorithm for sparse frame expansions
Signal Processing
Digital Signal Processing
Method of optimal directions for frame design
ICASSP '99: Proceedings of the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing, Volume 5
Online dictionary learning for sparse coding
ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning
Dictionary learning for sparse approximations with the majorization method
IEEE Transactions on Signal Processing
K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation
IEEE Transactions on Signal Processing
Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm
IEEE Transactions on Signal Processing
An augmented Lagrangian approach to general dictionary learning for image denoising
Journal of Visual Communication and Image Representation
Dictionary learning for image prediction
Journal of Visual Communication and Image Representation
Online dictionary learning algorithm with periodic updates and its application to image denoising
Expert Systems with Applications: An International Journal
Online Dictionary Learning Based Intra-frame Video Coding
Wireless Personal Communications: An International Journal
We present the recursive least squares dictionary learning algorithm, RLS-DLA, which can be used for learning overcomplete dictionaries for sparse signal representation. Most DLAs presented earlier, for example ILS-DLA and K-SVD, update the dictionary after a batch of training vectors has been processed, usually using the whole set of training vectors as one batch. The training set is used iteratively to gradually improve the dictionary. The approach in RLS-DLA is instead a continuous update of the dictionary as each training vector is processed. The core of the algorithm is compact and can be implemented efficiently. The algorithm is derived along much the same path as the recursive least squares (RLS) algorithm for adaptive filtering. Thus, as in RLS, a forgetting factor λ can be introduced and easily implemented in the algorithm. Adjusting λ appropriately makes the algorithm less dependent on the initial dictionary and improves both the convergence properties of RLS-DLA and the representation ability of the resulting dictionary. Two sets of experiments are carried out to test different methods for learning dictionaries. The goal of the first set is to explore some basic properties of the algorithm in a simple setup; the goal of the second is the reconstruction of a true underlying dictionary. The first experiment confirms the properties conjectured in the derivation, while the second demonstrates excellent performance.
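The per-vector update described above can be sketched in a few lines. The following is a minimal, hedged reconstruction based on the abstract's description (an RLS-style rank-1 dictionary update with forgetting factor λ), not the paper's reference implementation: the sparse coder (a simple orthogonal matching pursuit), the initialization from random training vectors, and the final column normalization are illustrative choices, and the helper names `omp` and `rls_dla` are our own.

```python
import numpy as np

def omp(D, x, s):
    # Greedy orthogonal matching pursuit: select up to s atoms of D
    # and solve least squares on the selected columns.
    residual = x.copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(s):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    w = np.zeros(D.shape[1])
    w[idx] = coef
    return w

def rls_dla(X, K, s, lam=0.99, seed=None):
    """One pass of an RLS-DLA-style update over the columns of X.

    X : (N, L) training vectors, K : number of atoms,
    s : sparsity per vector, lam : forgetting factor (0 < lam <= 1).
    """
    rng = np.random.default_rng(seed)
    N, L = X.shape
    # Initialize the dictionary from random training vectors (one choice
    # among many; the abstract notes lam reduces sensitivity to this).
    D = X[:, rng.choice(L, size=K, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0)
    C = np.eye(K)                        # running inverse-correlation estimate
    for i in range(L):
        x = X[:, i]
        w = omp(D, x, s)                 # sparse approximation weights
        r = x - D @ w                    # representation error
        u = C @ w
        alpha = 1.0 / (lam + w @ u)
        D = D + alpha * np.outer(r, u)   # rank-1 dictionary update
        C = (C - alpha * np.outer(u, u)) / lam   # RLS-style C update with forgetting
    return D / np.linalg.norm(D, axis=0)
```

With λ = 1 every past training vector is weighted equally; λ < 1 discounts old data geometrically, which is what lets the algorithm drift away from a poor initial dictionary.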