In the last few years, the support vector machine (SVM) method has motivated new interest in kernel regression techniques. Although the SVM has been shown to exhibit excellent generalization properties in many experiments, it suffers from several drawbacks, both of a theoretical and a technical nature: the absence of probabilistic outputs, the restriction to Mercer kernels, and the steep growth of the number of support vectors with increasing size of the training set. In this paper, we present a different class of kernel regressors that effectively overcome the above problems. We call this approach generalized LASSO regression. It has a clear probabilistic interpretation, can handle training sets that are corrupted by outliers, produces extremely sparse solutions, and is capable of dealing with large-scale problems. For regression functionals which can be modeled as iteratively reweighted least-squares (IRLS) problems, we present a highly efficient algorithm with guaranteed global convergence. This defines a unified framework for sparse regression models within the very rich class of IRLS models, including various types of robust regression models and logistic regression. Performance studies on many standard benchmark datasets demonstrate the advantages of this model over related approaches.
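The core IRLS idea behind such sparse regression models can be sketched as follows. This is a minimal illustrative implementation, not the paper's algorithm: the function `irls_lasso`, its parameters, and the toy data are all assumptions made here, showing how an l1 penalty can be handled by repeatedly solving a reweighted ridge-type linear system.

```python
import numpy as np

def irls_lasso(X, y, lam=0.5, n_iter=50, eps=1e-6):
    """Sketch of IRLS for the l1-penalized least-squares objective
        min_w ||y - X w||^2 + lam * ||w||_1.
    Each iteration replaces the l1 term by the weighted l2 surrogate
    lam * sum_j w_j^2 / (|w_j_old| + eps), which yields a
    ridge-type system that is solved in closed form."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS start
    for _ in range(n_iter):
        # diagonal reweighting derived from the current estimate;
        # near-zero coefficients get a huge penalty and are driven to zero
        D = np.diag(lam / (np.abs(w) + eps))
        w = np.linalg.solve(X.T @ X + D, X.T @ y)
    return w

# toy data: only 2 of 10 features are truly relevant
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
w_true = np.zeros(10)
w_true[0], w_true[3] = 2.0, -1.5
y = X @ w_true + 0.05 * rng.standard_normal(100)

w_hat = irls_lasso(X, y)
```

On this toy problem the reweighting shrinks the eight irrelevant coefficients toward zero while leaving the two informative ones essentially untouched, illustrating the extreme sparsity the abstract refers to.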