Many of today's best classification results are obtained by combining the responses of a set of base classifiers to produce an answer for each query. This paper explores a novel "query-specific" combination rule: after learning a set of simple belief network classifiers, we answer each query by combining their individual responses, weighting each response inversely by its variance. These variances are derived from the uncertainty of the network parameters, which in turn depends on the training data sample; in essence, each variance quantifies a base classifier's confidence in its response to the given query. Our experimental results show that these "mixture-using-variance belief net classifiers" (MUVs) work effectively, especially when the base classifiers are learned from balanced bootstrap samples and their results are combined using James-Stein shrinkage. We also found that our variance-based combination rule outperformed both bagging and AdaBoost, even on the set of base classifiers produced by AdaBoost itself. Finally, the framework is extremely efficient, as both the learning and the classification components require only straight-line code.
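The core combination step can be sketched as follows. This is a minimal illustration of per-query inverse-variance weighting only, not the paper's full method: the function name, the small epsilon guard, and the example numbers are assumptions, and the belief-net machinery that would actually produce each response and its variance is omitted.

```python
import numpy as np

def inverse_variance_combine(responses, variances, eps=1e-12):
    """Combine base-classifier responses for a single query,
    weighting each response inversely by its estimated variance."""
    responses = np.asarray(responses, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / (variances + eps)   # low variance -> high confidence -> large weight
    weights /= weights.sum()            # normalize weights to sum to 1
    return float(weights @ responses)   # variance-weighted average response

# Three hypothetical base classifiers answering one query:
# the low-variance (confident) one dominates the combined answer.
responses = [0.9, 0.4, 0.5]
variances = [0.01, 0.5, 0.5]
combined = inverse_variance_combine(responses, variances)
```

Here the first classifier's weight is 1/0.01 = 100 versus 2 for each of the others, so the combined response lands near its confident 0.9 estimate rather than near the simple average of 0.6, which is the intended "query-specific" behavior.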