Technical Note: Q-Learning
Machine Learning
Original Contribution: Stacked generalization
Neural Networks
Hierarchical mixtures of experts and the EM algorithm
Neural Computation
Combination of Multiple Classifiers Using Local Accuracy Estimates
IEEE Transactions on Pattern Analysis and Machine Intelligence
On the issue of obtaining OWA operator weights
Fuzzy Sets and Systems
Soft combination of neural classifiers: a comparative study
Pattern Recognition Letters
A reinforcement learning model of selective visual attention
Proceedings of the fifth international conference on Autonomous agents
Multisensor Decision and Estimation Fusion
Multisensor Decision and Estimation Fusion
Q-learning of sequential attention for visual object recognition from informative local descriptors
ICML '05 Proceedings of the 22nd international conference on Machine learning
Journal of Cognitive Neuroscience
Machine learning: a review of classification and combining techniques
Artificial Intelligence Review
Choosing where to look next in a mutation sequence space
Bioinformatics
Optimal Local Basis: A Reinforcement Learning Approach for Face Recognition
International Journal of Computer Vision
Boosting k-nearest neighbor classifier by means of input space projection
Expert Systems with Applications: An International Journal
Constructing ensembles of classifiers by means of weighted instance selection
IEEE Transactions on Neural Networks
Online learning of task-driven object-based visual attention control
Image and Vision Computing
Budgeted learning of naive-Bayes classifiers
UAI'03 Proceedings of the Nineteenth conference on Uncertainty in Artificial Intelligence
Learning attentive fusion of multiple bayesian network classifiers
ICONIP'12 Proceedings of the 19th international conference on Neural Information Processing - Volume Part III
In this letter, we propose a learning system, active decision fusion learning (ADFL), for the active fusion of decisions. Each decision maker, referred to as a local decision maker, provides its suggestion in the form of a probability distribution over all possible decisions. The goal of the system is to learn both the active sequential selection of which local decision makers to consult and the final decision based on those consultations. These two learning tasks are formulated as a single sequential decision-making problem in the form of a Markov decision process (MDP), and a continuous reinforcement learning method is employed to solve it. The states of this MDP are the decisions of the attended local decision makers, and the actions are either attending to a local decision maker or declaring a final decision. The learning system is punished for each consultation and for each wrong final decision and rewarded for correct final decisions. Consultation and decision-making costs are thus minimized by learning a sequential consultation policy in which the most informative local decision makers are consulted and the least informative, misleading, and redundant ones are left unattended. An important property of this policy is that it acts locally: the system handles any nonuniformity in the local decision makers' expertise over the state space. This property has been exploited in the design of the local experts. ADFL is tested on a set of classification tasks, where it outperforms two well-known classification methods, AdaBoost and bagging, as well as three benchmark fusion algorithms: OWA, Borda count, and majority voting. In addition, the effect of the local expert design strategy on the performance of ADFL is studied, and some guidelines for the design of local experts are provided.
Moreover, evaluation of ADFL in some special cases shows that it derives the maximum benefit from the informative local decision makers while minimizing attention to redundant ones.