This paper adopts a Bayesian approach to simultaneously learn both an optimal nonlinear classifier and a subset of predictor variables (or features) that are most relevant to the classification task. The approach uses heavy-tailed priors to promote sparsity in the use of both basis functions and features; these priors act as regularizers for the likelihood function that rewards good classification on the training data. We derive an expectation-maximization (EM) algorithm to efficiently compute a maximum a posteriori (MAP) point estimate of the various parameters. The algorithm is an extension of recent state-of-the-art sparse Bayesian classifiers, which in turn can be seen as Bayesian counterparts of support vector machines. Experimental comparisons using kernel classifiers demonstrate both parsimonious feature selection and excellent classification accuracy on a range of synthetic and benchmark data sets.
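The core mechanism described above — a heavy-tailed sparsity prior whose MAP estimate is computed by EM — can be illustrated with a minimal sketch. The code below is not the paper's algorithm (which additionally learns feature weights inside the kernel); it shows only the generic idea, assuming a Laplace prior written as a Gaussian scale mixture: the E-step yields per-weight expected inverse prior variances, and the M-step is an adaptively weighted ridge logistic regression solved by Newton iterations. All function and parameter names here are illustrative.

```python
import numpy as np

def em_sparse_logistic(X, y, lam=1.0, n_em=20, n_newton=15, eps=1e-8):
    """Sketch: MAP weights for logistic regression under a Laplace
    (heavy-tailed) prior, via EM on its Gaussian scale-mixture form.

    X: (n, d) design matrix, y: (n,) labels in {0, 1}, lam: prior scale.
    """
    n, d = X.shape
    w = np.zeros(d)
    u = np.full(d, lam)  # initial penalty: plain ridge on the first M-step
    for _ in range(n_em):
        # M-step: Newton iterations for logistic regression with a
        # per-weight quadratic penalty 0.5 * u_i * w_i^2.
        for _ in range(n_newton):
            p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
            g = X.T @ (p - y) + u * w                 # gradient of -log posterior
            s = p * (1.0 - p)                         # logistic curvature terms
            H = X.T @ (X * s[:, None]) + np.diag(u)   # Hessian (positive definite)
            w = w - np.linalg.solve(H, g)
        # E-step: expected inverse prior variances. Small weights get a huge
        # penalty next round, driving them to (numerically) zero -- sparsity.
        u = lam / (np.abs(w) + eps)
    return w
```

In a quick synthetic check where only the first of three features determines the label, the recovered weight vector concentrates on that feature while the irrelevant weights are shrunk essentially to zero, mirroring the parsimonious selection behavior the abstract reports.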