Standard numerical learning methods are primarily concerned with finding a good numerical fit to the data, and often make predictions that violate qualitative laws of the modelled domain or expert intuition. In contrast, the idea of Q2 learning is to induce qualitative constraints from the training data and to use these constraints to guide numerical regression. The resulting numerical predictions are consistent with a learned qualitative model, which aids the explanation of phenomena in the modelled domain and can also improve numerical accuracy. This paper proposes a method for combining the learning of qualitative constraints with an arbitrary numerical learner, and explores the accuracy and explanation benefits of learning monotonic qualitative constraints in a number of domains. We show that Q2 learning can correct errors caused by the bias of the learning algorithm, and we discuss the potential of similar hierarchical learning schemes.
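The paper's actual Q2 procedure is not reproduced here, but the core idea of making an arbitrary numerical learner's predictions respect an induced monotonic constraint can be sketched as a post-processing step: project the raw predictions onto the nearest non-decreasing sequence using the pool-adjacent-violators algorithm. This is a minimal illustrative sketch, assuming the induced constraint is "y is monotonically non-decreasing in x"; the names `pav_increasing` and `raw` are hypothetical and not from the paper.

```python
def pav_increasing(values):
    """Pool Adjacent Violators: least-squares projection of a sequence
    onto the set of non-decreasing sequences (illustrative helper,
    not the paper's method)."""
    blocks = []  # each block: [mean, total_weight, count]
    for v in values:
        blocks.append([float(v), 1.0, 1])
        # Merge backwards while adjacent blocks violate monotonicity.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            w = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / w, w, c1 + c2])
    out = []
    for m, _, c in blocks:
        out.extend([m] * c)
    return out

# Hypothetical raw predictions of a numerical learner at increasing x;
# the dip at the third point violates the induced qualitative constraint.
raw = [1.0, 3.0, 2.0, 4.0]
corrected = pav_increasing(raw)  # -> [1.0, 2.5, 2.5, 4.0]
```

A projection like this only enforces a single monotonic constraint on ordered predictions; the Q2 scheme described in the abstract is more general, using the induced qualitative model to guide the regression itself rather than merely to repair its output.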