SOAR: an architecture for general intelligence
Artificial Intelligence
Explanation-based generalisation = partial evaluation
Artificial Intelligence
Explanation-based learning: a problem solving perspective
Artificial Intelligence
A proof procedure using connection graphs
Journal of the ACM (JACM)
Learning effective search control knowledge: an explanation-based approach
Towards a general framework for composing disjunctive and iterative macro-operators
IJCAI'89: Proceedings of the 11th International Joint Conference on Artificial Intelligence, Volume 1
Incorporating redundant learned rules: a preliminary formal analysis of EBL
IJCAI'89: Proceedings of the 11th International Joint Conference on Artificial Intelligence, Volume 1
Multiple dimensions of generalization in model-based troubleshooting
AAAI'93: Proceedings of the Eleventh National Conference on Artificial Intelligence
Explanation-Based Learning (EBL) fails to accelerate problem solving in some problem spaces. How do these problem spaces differ from the ones in Minton's experiments [1988b]? Can minute modifications to a problem space's encoding drastically alter EBL's performance? Will PRODIGY/EBL's success scale to real-world domains? This paper presents a formal theory of problem space structure that answers these questions. The central observation is that PRODIGY/EBL relies on finding nonrecursive explanations of PRODIGY's problem-solving behavior. The theory explains and predicts PRODIGY/EBL's performance in a wide range of problem spaces. It also predicts that a static program transformer, called STATIC, can match PRODIGY/EBL's performance in some cases. The paper reports on an array of experiments that confirm this prediction: STATIC matches PRODIGY/EBL's performance in each of Minton's problem spaces.
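The nonrecursive-explanation observation can be illustrated with a toy sketch. Everything below is an illustrative assumption, not PRODIGY's actual encoding: a learned nonrecursive control rule compiles into a fixed-size conjunctive test over the state, while a recursive relation such as "above" in a blocks world must unwind once per intermediate block, so no single fixed-size, nonrecursive rule covers it.

```python
# Toy illustration (not PRODIGY code): a state is a set of ground facts,
# each fact a tuple (predicate, arg1, arg2, ...).

def blocks(state):
    """All objects mentioned anywhere in the state."""
    return {arg for fact in state for arg in fact[1:]}

def nonrecursive_rule(state, block):
    """A learned control rule as a fixed-size conjunctive test:
    its matching cost does not grow with the size of the state."""
    return ("clear", block) in state and ("on", block, "table") not in state

def above(state, x, y):
    """Recursive relation: x sits somewhere above y. Its explanation
    unwinds once per intermediate block, so a fixed nonrecursive
    explanation cannot capture it for towers of arbitrary height."""
    if ("on", x, y) in state:
        return True
    return any(("on", x, z) in state and above(state, z, y)
               for z in blocks(state))

state = {("on", "A", "B"), ("on", "B", "C"),
         ("on", "C", "table"), ("clear", "A")}
print(nonrecursive_rule(state, "A"))  # True: one flat match
print(above(state, "A", "C"))         # True: needs two unwindings
print(above(state, "C", "A"))         # False
```

The contrast is the structural point: the flat rule's match cost is bounded regardless of tower height, whereas covering `above` for all states would require a different (larger) nonrecursive rule for every height.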