We propose a Bayesian-network classifier with inverse-tree structure (BNCIT) for joint classification and variable selection. The problem domain of voxelwise magnetic-resonance image analysis often involves millions of variables but only dozens of samples. Judicious variable selection may render classification tractable, avoid over-fitting, and improve classifier performance. BNCIT embeds the variable-selection process within the classifier-training process, which makes the algorithm scalable. BNCIT is based on a Bayesian-network model with inverse-tree structure, i.e., the class variable C is a leaf node and the predictive variables are parents of C; thus, the classifier-training process returns a parent set for C, which is a subset of the Markov blanket of C. BNCIT uses the voxels in the parent set, together with voxels that are probabilistically equivalent to them, as variables for classifying new image data. Because the data set has a limited number of samples, we use the jackknife method to determine whether the classifier generated by BNCIT is a statistical artifact. To enhance stability and improve classification accuracy, we model the state of the probabilistically equivalent voxels with a latent variable, and we employ an efficient method for determining the states of hidden variables, dramatically reducing the computational cost of model generation. Experimental results confirm the accuracy and efficiency of BNCIT.
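The core classification step can be illustrated with a minimal sketch. In an inverse-tree Bayesian network, the class variable C is a leaf whose parents are the selected predictive variables, so prediction reduces to looking up the (smoothed) conditional distribution P(C | parent configuration). The function names, the toy data, and the Laplace smoothing below are illustrative assumptions, not the authors' implementation; BNCIT's structure search, equivalence grouping, and latent-variable handling are not shown.

```python
from collections import Counter, defaultdict

def train_inverse_tree_cpt(samples, labels, parent_idx, alpha=1.0):
    """Hypothetical sketch: estimate P(C | parents of C) from discrete data.

    samples    : list of tuples of discrete variable values (e.g. voxel states)
    labels     : class label for each sample
    parent_idx : indices of the variables selected as parents of C
    alpha      : Laplace smoothing pseudo-count (assumption, not from BNCIT)
    """
    counts = defaultdict(Counter)                # parent config -> class counts
    classes = sorted(set(labels))
    for x, c in zip(samples, labels):
        cfg = tuple(x[i] for i in parent_idx)    # project sample onto parent set
        counts[cfg][c] += 1

    def predict(x):
        cfg = tuple(x[i] for i in parent_idx)
        cnt = counts.get(cfg, Counter())
        # smoothed class scores proportional to P(C = c | cfg)
        scores = {c: cnt[c] + alpha for c in classes}
        return max(scores, key=scores.get)

    return predict

# Toy example: variable 1 determines the label; variable 0 is noise.
X = [(0, 0), (1, 0), (0, 1), (1, 1)]
y = ['a', 'a', 'b', 'b']
clf = train_inverse_tree_cpt(X, y, parent_idx=[1])
print(clf((0, 1)))  # 'b'
```

In this sketch the parent set plays the role of the selected voxels: classification depends only on their joint configuration, which is why returning a small parent set for C simultaneously yields the variable selection.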