The Mixture of Experts (ME) is one of the most popular ensemble methods used in pattern recognition and machine learning. The algorithm stochastically partitions the input space of the problem into a number of subspaces, so that each expert becomes specialized on one subspace. The ME manages this partitioning with a separate network, called the gating network, which is trained together with the experts. In this paper, we propose a modified version of the ME algorithm that first partitions the original problem into centralized regions and then uses a simple distance-based gating function to specialize the expert networks. Each expert contributes to the classification of an input sample according to the distance between the input and a prototype embedded in that expert. As a result, an accurate classifier with shorter training time and a smaller number of parameters is achieved. Experimental results on a binary toy problem and on selected datasets from the UCI machine learning repository show the robustness of the proposed method compared to the standard ME model.
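To make the distance-based gating idea concrete, the snippet below is a minimal, hypothetical sketch: it assumes the gate is a softmax over negative squared Euclidean distances between the input and each expert's prototype, and that each expert returns a class-probability vector. The function names, the temperature parameter, and the toy experts are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def distance_gate(x, prototypes, temperature=1.0):
    """Gating weights from distances between input x and expert prototypes.

    Assumption: softmax over negative squared Euclidean distances; the
    paper's exact gating function may differ.
    """
    d2 = np.sum((prototypes - x) ** 2, axis=1)   # squared distance to each prototype
    logits = -d2 / temperature
    logits -= logits.max()                       # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def mixture_predict(x, prototypes, experts):
    """Combine expert outputs weighted by the distance-based gate.

    `experts` is a list of callables mapping an input vector to class
    probabilities (their architecture is an illustrative assumption).
    """
    gates = distance_gate(x, prototypes)
    outputs = np.stack([expert(x) for expert in experts])  # (n_experts, n_classes)
    return gates @ outputs                                  # gated average of expert outputs

# Toy usage: two hypothetical "experts" on a 2-class problem.
if __name__ == "__main__":
    prototypes = np.array([[0.0, 0.0], [3.0, 3.0]])
    experts = [lambda x: np.array([0.9, 0.1]),
               lambda x: np.array([0.2, 0.8])]
    print(mixture_predict(np.array([2.5, 2.8]), prototypes, experts))
```

In this sketch the gate needs no training of its own once the prototypes are fixed (e.g., by clustering the training data), which is consistent with the abstract's claim of shorter training time and fewer parameters than the standard ME, where the gating network is trained jointly with the experts.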