The following linear inverse problem is considered: given a full column rank m × n data matrix A and a length-m observation vector b, find the best least-squares solution to Ax = b with at most r < n nonzero components. The backward greedy algorithm computes a sparse solution to Ax = b by greedily removing columns from A until r columns are left. A simple implementation based on a QR downdating scheme using Givens rotations is described. The backward greedy algorithm is shown to be optimal for the subset selection problem in the sense that it selects the "correct" subset of columns from A if the perturbation of the data vector b is small enough. The results generalize to any other norm of the residual.
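To make the selection rule concrete, here is a minimal Python sketch of the backward greedy step. It re-solves a dense least-squares problem for every candidate deletion with numpy.linalg.lstsq rather than using the paper's efficient QR downdating scheme with Givens rotations, so it illustrates the algorithm, not the described implementation; the function name backward_greedy and the synthetic test system below are assumptions introduced for illustration.

import numpy as np

def backward_greedy(A, b, r):
    # Backward greedy subset selection: start from all n columns of A
    # and repeatedly delete the column whose removal increases the
    # least-squares residual the least, until r columns remain.
    # NOTE: this naive version re-solves the least-squares problem at
    # every step; the paper's QR-downdating implementation is faster.
    m, n = A.shape
    support = list(range(n))
    while len(support) > r:
        best_norm, drop = np.inf, None
        for j in support:
            trial = [k for k in support if k != j]
            x = np.linalg.lstsq(A[:, trial], b, rcond=None)[0]
            res_norm = np.linalg.norm(A[:, trial] @ x - b)
            if res_norm < best_norm:
                best_norm, drop = res_norm, j
        support.remove(drop)
    # embed the final r-column least-squares solution in a length-n vector
    x_full = np.zeros(n)
    x_full[support] = np.linalg.lstsq(A[:, support], b, rcond=None)[0]
    return x_full, sorted(support)

A hypothetical usage example, recovering a 3-sparse vector from a slightly perturbed 20 × 8 system:

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
x_true = np.zeros(8)
x_true[[1, 4, 6]] = [2.0, -1.5, 3.0]
b = A @ x_true + 1e-3 * rng.standard_normal(20)
x_hat, support = backward_greedy(A, b, r=3)
print(support)   # [1, 4, 6] expected when the perturbation is small

Deleting columns from the full model, rather than adding them one at a time as forward greedy methods do, is exactly the setting of the optimality result: for a sufficiently small perturbation of b, the surviving r columns are the "correct" subset.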