This paper proposes an iterative computation of sparse representations of functions defined on R^d, based on a formulation of the sparsification problem that is equivalent to Support Vector Machines and grounded in Tikhonov regularization. Through this equivalent formulation, sparsification reduces to an approximation problem with a Tikhonov regularizer, which selects the null coefficients of the resulting approximation. The proposed multi-resolutive sparsification achieves different resolutions in the approximation of the input data through a hierarchy of nested approximation spaces. The idea behind our approach is to combine a smooth, strictly convex approximation of the l1-norm with Tikhonov regularization and iterative solvers of linear/non-linear equations. First, the iterative sparsification scheme is introduced in a Reproducing Kernel Hilbert Space with respect to its native norm. Then, the sparsification is generalized to arbitrary function spaces using the least-squares norm and radial basis functions. Next, the discrete sparsification is derived using the eigendecomposition and the spectral properties of sparse matrices; in this case, the computational cost is O(n log n), where n is the number of input points. Assuming that the data is supported on a (d-1)-dimensional manifold, we derive a variant of the sparsification scheme that guarantees the smoothness of the solution in both the ambient and intrinsic spaces, using spectral graph theory and manifold learning techniques. Finally, we discuss the multi-resolutive approximation of d-dimensional data such as signals, images, and 3D shapes.
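The core idea above — replacing the l1-norm with a smooth, strictly convex surrogate and solving the resulting Tikhonov-regularized problem iteratively — can be sketched as iteratively reweighted least squares over a radial basis function expansion. The following is a minimal illustration, not the paper's implementation: the kernel width, the surrogate sqrt(c^2 + eps), and all function names are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=0.1):
    # Gaussian RBF kernel matrix between point sets X (n x d) and Y (m x d)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def sparsify_irls(K, y, lam=1e-3, eps=1e-6, iters=50):
    """Iteratively reweighted least squares for the surrogate problem
        min_c ||K c - y||^2 + lam * sum_i sqrt(c_i^2 + eps),
    where sqrt(c^2 + eps) is a smooth, strictly convex stand-in for |c|.
    Each step solves a Tikhonov-type linear system with a diagonal
    weight matrix built from the current coefficients."""
    n = K.shape[1]
    # plain Tikhonov (ridge) solution as the starting point
    c = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)
    for _ in range(iters):
        W = np.diag(1.0 / np.sqrt(c ** 2 + eps))  # reweighting drives small c_i to zero
        c = np.linalg.solve(K.T @ K + lam * W, K.T @ y)
    return c

# usage: sparse RBF approximation of a 1D signal
X = np.linspace(0.0, 1.0, 60)[:, None]
y = np.sin(4.0 * np.pi * X[:, 0])
K = gaussian_kernel(X, X, sigma=0.1)
c = sparsify_irls(K, y, lam=1e-3)
support = np.abs(c) > 1e-3  # near-null coefficients are discarded
```

The reweighting step is what "selects the null coefficients": as a coefficient shrinks, its penalty weight grows, pushing it further toward zero, while the surrogate's smoothness keeps every linear system well posed.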