Many "learning from experience" systems use information extracted from problem-solving experiences to modify a performance element PE, forming a new element PE' that can solve these and similar problems more efficiently. However, as transformations that improve performance on one set of problems can degrade performance on other sets, the new PE' is not always better than the original PE; this depends on the distribution of problems. We therefore seek the performance element whose expected performance, over this distribution, is optimal. Unfortunately, the actual distribution, which is needed to determine which element is optimal, is usually not known. Moreover, the task of finding the optimal element, even knowing the distribution, is intractable for most interesting spaces of elements. This paper presents a method, PALO, that side-steps these problems by using a set of samples to estimate the unknown distribution, and by using a set of transformations to hill-climb to a local optimum. This process is based on a mathematically rigorous form of utility analysis: in particular, it uses statistical techniques to determine whether the result of a proposed transformation will be better than the original system. We also present an efficient way of implementing this learning system in the context of a general class of performance elements, and include empirical evidence that this approach can work effectively.
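The sample-based hill-climbing idea in the abstract can be sketched as below. This is an illustrative reconstruction, not the paper's exact algorithm: the function names, the paired-sample design, and the use of a Hoeffding-style confidence bound as the statistical acceptance test are all assumptions made for the sketch.

```python
import math
import random

def hoeffding_radius(n, delta, diff_range):
    # Half-width of a Hoeffding-style confidence interval after n samples,
    # where each paired cost difference lies in an interval of width diff_range.
    return diff_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def palo_step(pe, transforms, sample_problem, cost, delta=0.05, diff_range=1.0, n=200):
    # One hill-climbing step: draw problems from the (unknown) distribution and
    # return a transformed element whose estimated expected cost is lower than
    # pe's with confidence at least 1 - delta, or None if no transform passes.
    problems = [sample_problem() for _ in range(n)]
    for t in transforms:
        pe2 = t(pe)
        # Paired differences on the same sampled problems reduce variance.
        diffs = [cost(pe, p) - cost(pe2, p) for p in problems]
        mean_gain = sum(diffs) / n
        if mean_gain > hoeffding_radius(n, delta, diff_range):
            return pe2  # statistically significant improvement: accept
    return None  # no proposed transformation is provably better

def palo(pe, transforms, sample_problem, cost, max_steps=10):
    # Climb until no transformation passes the statistical test,
    # i.e. pe is a local optimum with high confidence.
    for _ in range(max_steps):
        nxt = palo_step(pe, transforms, sample_problem, cost)
        if nxt is None:
            break
        pe = nxt
    return pe
```

A toy usage: if the "performance element" is just a number whose cost equals its value, and the single transform subtracts 0.3 (floored at 0), `palo(0.9, ...)` climbs down until the residual gain is statistically indistinguishable from zero and stops near 0.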