A Theory for Multiresolution Signal Decomposition: The Wavelet Representation
IEEE Transactions on Pattern Analysis and Machine Intelligence
An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods
Machine Learning
Adaptive Sparseness for Supervised Learning
IEEE Transactions on Pattern Analysis and Machine Intelligence
Sparse Bayesian learning and the relevance vector machine
The Journal of Machine Learning Research
Information Theory, Inference & Learning Algorithms
Predictive automatic relevance determination by expectation propagation
ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning
On Bayesian classification with Laplace priors
Pattern Recognition Letters
Surrogate maximization/minimization algorithms and extensions
Machine Learning
Sparse Bayesian nonparametric regression
Proceedings of the 25th International Conference on Machine Learning
Probabilistic classification vector machines
IEEE Transactions on Neural Networks
Non-sparse multiple kernel Fisher discriminant analysis
The Journal of Machine Learning Research
Probit classifiers with a generalized Gaussian scale mixture prior
IJCAI'11: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, Volume Two
Most existing probabilistic classifiers rely on sparsity-inducing modeling. However, we show that sparsity is not always desirable in practice, and that only an appropriate degree of sparsity is profitable. In this work, we propose a flexible probabilistic model using a generalized Gaussian scale mixture (GGSM) prior that can provide an appropriate degree of sparsity for its model parameters, yielding either sparse or non-sparse estimates according to the intrinsic sparsity of the features in a dataset. Model learning is carried out by an efficient modified maximum a posteriori (MAP) estimate. We also show the relationships of the proposed model to existing probabilistic classifiers, as well as to iteratively re-weighted ℓ1 and ℓ2 minimizations. We then study different types of likelihoods used with the GGSM prior in a kernel-based setup, based on which an improved kernel-based GGIG is presented. Experiments demonstrate that the proposed method achieves better or comparable performance in both linear and kernel-based classification.
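As a rough illustration of the re-weighted-minimization connection mentioned in the abstract, the following Python sketch fits a linear probit classifier by MAP estimation under an ℓ_p penalty (a special case of a generalized-Gaussian-type prior), using the standard iteratively re-weighted ℓ2 scheme. This is not the authors' implementation; the function name probit_map_irls and the parameters lam, p, eps are illustrative choices, and NumPy/SciPy are assumed.

# Minimal sketch (not the paper's algorithm): MAP probit fit with an l_p penalty
# handled by iteratively re-weighted l2 minimization.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def probit_map_irls(X, y, p=1.0, lam=1.0, n_outer=15, eps=1e-3):
    """X: (n, d) features, y: labels in {-1, +1}, p: generalized-Gaussian exponent."""
    n, d = X.shape
    w = np.zeros(d)
    scales = lam * np.ones(d)          # first pass is an ordinary ridge-penalized fit

    def neg_log_posterior(w, scales):
        z = y * (X @ w)
        # probit negative log-likelihood plus the quadratic surrogate of the l_p prior
        return -norm.logcdf(z).sum() + 0.5 * np.dot(scales, w * w)

    def grad(w, scales):
        z = y * (X @ w)
        ratio = np.exp(norm.logpdf(z) - norm.logcdf(z))   # phi(z)/Phi(z), numerically stable
        return -X.T @ (y * ratio) + scales * w

    for _ in range(n_outer):
        res = minimize(neg_log_posterior, w, args=(scales,), jac=grad, method="L-BFGS-B")
        w = res.x
        # Re-weighting step: |w|^(p-2) turns the l_p penalty into a per-weight l2 penalty,
        # so small weights are penalized harder on the next pass (sparsity-inducing for p <= 1).
        scales = lam * (np.abs(w) + eps) ** (p - 2)
    return w

# Hypothetical usage on synthetic data with intrinsically sparse true weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
w_true = np.zeros(12); w_true[:3] = [2.0, -1.5, 1.0]
y = np.sign(X @ w_true + 0.2 * rng.normal(size=300))
print(np.round(probit_map_irls(X, y, p=1.0), 2))

With p close to 2 the same loop behaves like a plain Gaussian (ridge) prior and produces non-sparse estimates, which is the flexibility the GGSM prior is intended to capture.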