Ensemble pruning using reinforcement learning
SETN'06 Proceedings of the 4th Hellenic conference on Advances in Artificial Intelligence
Ensemble algorithms can improve the performance of a given learning algorithm by combining multiple base classifiers into an ensemble. In this paper, we train and combine the base classifiers using an adaptive policy, learned with a Q-learning-inspired technique. Experimental results on several UCI benchmark datasets demonstrate its effectiveness on an essentially supervised task.
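To make the idea concrete, the following is a minimal sketch of how a Q-learning-style policy could select which base classifiers to include in a majority-vote ensemble. It is an illustrative assumption, not the paper's actual formulation: the state is the index of the classifier under consideration, the actions are include/skip, and the reward is the marginal change in ensemble accuracy. The function name and reward design are hypothetical.

```python
import random

def q_learning_ensemble(base_preds, y_true, episodes=200, alpha=0.1,
                        gamma=0.9, epsilon=0.2, seed=0):
    """Learn a binary include/skip policy over base classifiers.

    base_preds: list of prediction lists, one per base classifier.
    y_true: true labels for the validation set.
    Returns the indices of classifiers the learned policy includes.
    """
    rng = random.Random(seed)
    n = len(base_preds)
    # Q[state][action]: state = classifier index, action = 0 (skip) / 1 (include)
    Q = [[0.0, 0.0] for _ in range(n)]

    def ensemble_accuracy(subset):
        # Majority vote of the selected classifiers on the validation set.
        if not subset:
            return 0.0
        correct = 0
        for i, y in enumerate(y_true):
            votes = [base_preds[j][i] for j in subset]
            majority = max(set(votes), key=votes.count)
            correct += majority == y
        return correct / len(y_true)

    for _ in range(episodes):
        subset = []
        prev_acc = 0.0
        for s in range(n):
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] >= Q[s][1] else 1
            if a == 1:
                subset.append(s)
            acc = ensemble_accuracy(subset)
            r = acc - prev_acc  # reward: marginal accuracy gain
            prev_acc = acc
            # Standard tabular Q-learning update with bootstrapped next state.
            next_max = max(Q[s + 1]) if s + 1 < n else 0.0
            Q[s][a] += alpha * (r + gamma * next_max - Q[s][a])

    # Greedy rollout of the learned policy.
    return [s for s in range(n) if Q[s][1] > Q[s][0]]
```

A richer state (e.g. the current subset or its validation error) would make the policy adaptive to the ensemble built so far, at the cost of a much larger Q-table; the classifier-index state above keeps the sketch tabular and compact.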