Policies for the selection of bias in inductive machine learning
Decisions made in setting up and running a search program bias the search it performs. Search bias comprises both the definition of the search space and the definition of the program that navigates that space. This paper addresses the problem of using knowledge of the computational complexity of various syntactic search biases to form a policy for selecting among them. In particular, it shows that a simple policy, iterative weakening, is optimal or nearly optimal when the biases can be ordered by computational complexity and certain relationships hold among the complexities of the various biases. The results are obtained by viewing bias selection as a (higher-level) search problem: iterative weakening evaluates the states of this higher-level space in order of increasing complexity. An offshoot of this work is a near-optimal policy for selecting both breadth and depth bounds for depth-first search with very large (possibly unbounded) breadth and depth.
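The core idea of iterative weakening — run the search under each bias in order of increasing computational cost, stopping at the first bias that yields a solution — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the names `iterative_weakening` and `make_search` and the toy divisibility task are assumptions introduced here for clarity.

```python
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def iterative_weakening(
    biases: Iterable[Callable[[], Optional[T]]],
) -> Optional[T]:
    """Run hypothesis-space searches under successively weaker
    (and more expensive) biases, ordered by increasing complexity.

    Each element of `biases` is a search procedure that returns a
    solution, or None if no solution exists under that bias.
    """
    for search in biases:
        result = search()
        if result is not None:
            return result  # first (cheapest) bias that succeeds
    return None

# Toy stand-in for a family of biases of increasing complexity:
# search ranges of increasing size for an integer divisible by 7.
def make_search(bound: int) -> Callable[[], Optional[int]]:
    def search() -> Optional[int]:
        for n in range(1, bound):
            if n % 7 == 0:
                return n
        return None
    return search

print(iterative_weakening([make_search(b) for b in (5, 10, 50)]))  # -> 7
```

The cheapest bias (bound 5) fails, so the policy falls through to the next one (bound 10), which succeeds; the most expensive search is never run. The optimality results in the paper concern precisely when this restart-from-cheapest schedule wastes at most a bounded factor of work relative to an omniscient choice of bias.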