In this paper, we introduce and evaluate a data-driven staged mixture modeling technique for building density, regression, and classification models. Our basic approach is to sequentially add components to a finite mixture model using the structural expectation-maximization (SEM) algorithm. We show that our technique is qualitatively similar to boosting; this correspondence is a natural byproduct of using the SEM algorithm to fit the mixture model sequentially. Finally, in our experimental evaluation, we demonstrate the effectiveness of our approach on a variety of prediction and density estimation tasks using real-world data.
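The staged idea can be illustrated with a minimal sketch. The following is not the paper's SEM algorithm (which searches over model structure as well as parameters); it is a simplified, hypothetical 1-D Gaussian analogue in which each stage adds one new component and fits, by EM, only that component's parameters and a single mixing weight against the frozen existing model. All function names and the worst-fit initialization heuristic are illustrative assumptions.

```python
import numpy as np


def normal_pdf(x, mu, sigma):
    """Density of a univariate Gaussian at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))


def staged_mixture(data, n_stages=3, em_iters=50):
    """Grow a 1-D Gaussian mixture one component per stage.

    Each stage fits only the new component (mu, sigma) and one mixing
    weight between the existing model and the new component; previously
    added components are held fixed. This is a toy analogue of staged
    mixture modeling, not the SEM algorithm itself.
    """
    # Stage 0: a single Gaussian fit to all of the data.
    comps = [(data.mean(), data.std() + 1e-6)]
    weights = np.array([1.0])

    def density(x):
        # Density under the current (frozen) mixture.
        return sum(w * normal_pdf(x, m, s) for w, (m, s) in zip(weights, comps))

    for _ in range(n_stages - 1):
        # Heuristic: start the new component where the model fits worst.
        mu = data[np.argmin(density(data))]
        sigma, lam = data.std() + 1e-6, 0.5
        for _ in range(em_iters):
            # E-step: responsibility of the new component vs. the old model.
            old = (1.0 - lam) * density(data)
            new = lam * normal_pdf(data, mu, sigma)
            r = new / (old + new)
            # M-step: update only the new component and the mixing weight.
            lam = r.mean()
            mu = (r * data).sum() / r.sum()
            sigma = np.sqrt((r * (data - mu) ** 2).sum() / r.sum()) + 1e-6
        # Fold the new component in; old weights are scaled by (1 - lam).
        weights = np.append(weights * (1.0 - lam), lam)
        comps.append((mu, sigma))
    return weights, comps
```

The boosting-like flavor is visible in the M-step: earlier components are never refit, and each stage concentrates the new component on the data the current model explains poorly.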