In this paper, we define the generalization problem, summarize various approaches to generalization, identify the credit-assignment problem, and present the problem of measuring generalizability along with some solutions. We discuss anomalies in the ordering of hypotheses within a subdomain when performance is normalized and averaged, and show conditions under which these anomalies can be eliminated. To generalize performance across subdomains, we present a measure called probability of win, which estimates the probability that one hypothesis is better than another. Finally, we discuss some limitations of probabilities of win and illustrate their application in finding new parameter values for TimberWolf, a package for VLSI cell placement and routing.
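The probability-of-win measure described above can be sketched as follows. This is a minimal illustration only: it assumes each hypothesis is characterized by a sample of normalized performance values from a subdomain and uses a normal approximation to the difference of sample means; the function name and estimator details are hypothetical and not taken from the paper.

```python
import math

def prob_win(perf_a, perf_b):
    """Estimate the probability that hypothesis A outperforms hypothesis B,
    given samples of normalized performance values (higher is better).

    Hypothetical sketch: normal approximation to the difference of the
    two sample means; the paper's exact estimator may differ.
    """
    n_a, n_b = len(perf_a), len(perf_b)
    mean_a = sum(perf_a) / n_a
    mean_b = sum(perf_b) / n_b
    # Unbiased sample variances of the two performance samples.
    var_a = sum((x - mean_a) ** 2 for x in perf_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in perf_b) / (n_b - 1)
    # Standard error of the difference of the two sample means.
    se = math.sqrt(var_a / n_a + var_b / n_b)
    if se == 0.0:
        # Degenerate case: no variance in either sample.
        return 1.0 if mean_a > mean_b else (0.5 if mean_a == mean_b else 0.0)
    z = (mean_a - mean_b) / se
    # Standard normal CDF, computed via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

A value near 1 indicates strong evidence that the first hypothesis is better on that subdomain, 0.5 indicates no evidence either way, and the measure is symmetric: `prob_win(a, b) + prob_win(b, a) == 1`, which is what allows performance to be compared across subdomains with different performance scales.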