In this paper we explicitly identify the probabilistic model underlying LCS by linking it to a generalisation of the common Mixture-of-Experts model. Having an explicit representation of the model not only puts LCS on a firm statistical foundation and makes explicit the assumptions the model makes about the data, but also allows us to train it with off-the-shelf methods. We show how to exploit this advantage by embedding the LCS model into a fully Bayesian framework, which yields an objective function for a set of classifiers and thereby turns LCS training into a principled optimisation task. A set of preliminary experiments demonstrates the feasibility of this approach.
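To make the Mixture-of-Experts connection concrete, the following is a minimal sketch (not the paper's actual method) of EM training for a mixture of linear experts. It simplifies the gate to input-independent mixing weights, so it is a mixture of linear regressions rather than a full gated MoE; the data, the choice of `K = 2` experts, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with two linear regimes (hypothetical example, not from the paper)
n = 200
x = rng.uniform(-1.0, 1.0, size=n)
y = np.where(x < 0, -2.0 * x, 3.0 * x) + 0.1 * rng.standard_normal(n)

K = 2                          # number of experts (assumed)
w = rng.standard_normal(K)     # per-expert slope
sigma2 = np.ones(K)            # per-expert noise variance
pi = np.full(K, 1.0 / K)       # mixing weights (simplified, input-independent gate)

def component_densities(w, sigma2, pi):
    # pi_k * N(y | w_k * x, sigma2_k) for each data point and expert
    return (pi / np.sqrt(2.0 * np.pi * sigma2)
            * np.exp(-(y[:, None] - x[:, None] * w) ** 2 / (2.0 * sigma2)))

def log_likelihood(w, sigma2, pi):
    return np.log(component_densities(w, sigma2, pi).sum(axis=1)).sum()

ll_before = log_likelihood(w, sigma2, pi)
for _ in range(50):
    # E-step: responsibility r[i, k] of expert k for data point i
    comp = component_densities(w, sigma2, pi)
    r = comp / comp.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted least squares per expert,
    # then update noise variances and mixing weights
    for k in range(K):
        w[k] = (r[:, k] * x * y).sum() / (r[:, k] * x * x).sum()
        sigma2[k] = (r[:, k] * (y - w[k] * x) ** 2).sum() / r[:, k].sum()
    pi = r.mean(axis=0)
ll_after = log_likelihood(w, sigma2, pi)
```

By EM's monotonicity guarantee, `ll_after` should not be below `ll_before`; in the full MoE model the constant `pi` would be replaced by a gating network conditioned on the input, which is the generalisation the paper builds on.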