Recent work on representing action and change has introduced high-level action languages that describe the effects of actions as causal laws in a declarative way. In this paper, we propose an algorithm that induces the effects of actions from an incomplete domain description and from observations made after executing action sequences, all of which are represented in the action language $\mathcal{A}$. Our induction algorithm generates effect propositions in $\mathcal{A}$ based on regular inference, i.e., algorithms for learning finite automata. In contrast to previous work on learning automata from scratch, we are concerned with explanatory induction, which accounts for observations by combining background knowledge with induced hypotheses. Compared with previous ILP approaches, an observation input to our induction algorithm is not restricted to a narrative but can be any fact observed after executing a sequence of actions. As a result, the induction of causal laws can be formally characterized within action languages.
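The flavor of this setting can be illustrated with a small sketch. This is not the paper's algorithm: the Yale-shooting-style fluents and actions (`loaded`, `alive`, `load`, `shoot`), the background law, and the brute-force candidate search below are all illustrative assumptions. The sketch encodes effect propositions in the style of language $\mathcal{A}$ ("a causes f if preconditions"), runs the resulting transition semantics on an action sequence, and keeps exactly those candidate effect propositions that, together with the background knowledge, explain an observation made after executing the sequence.

```python
# Toy sketch (illustrative only, not the authors' algorithm):
# A-style effect propositions, a transition function, and explanatory
# induction by brute-force search over candidate propositions.
from itertools import product

FLUENTS = ["loaded", "alive"]   # hypothetical example domain
ACTIONS = ["load", "shoot"]

# An effect proposition is (action, (fluent, value), preconditions),
# read as "action causes fluent=value if all preconditions hold".

def apply_action(state, action, props):
    """Apply every effect proposition of `action` whose
    preconditions hold in the current state (simultaneously)."""
    new = dict(state)
    for (a, (f, v), pre) in props:
        if a == action and all(state[g] == w for (g, w) in pre):
            new[f] = v
    return new

def run(state, actions, props):
    """Execute an action sequence from an initial state."""
    for a in actions:
        state = apply_action(state, a, props)
    return state

# Background knowledge (incomplete domain description):
# "load causes loaded".
background = [("load", ("loaded", True), [])]

# Observation after an action sequence:
# "not alive after load; shoot", from this initial state.
init = {"loaded": False, "alive": True}
obs_actions, (obs_fluent, obs_value) = ["load", "shoot"], ("alive", False)

# Explanatory induction: enumerate candidate effect propositions for
# "shoot" and keep those that, added to the background knowledge,
# account for the observation.
candidates = []
for f, v in product(FLUENTS, [True, False]):
    for pre in ([], [("loaded", True)], [("loaded", False)]):
        prop = ("shoot", (f, v), pre)
        final = run(init, obs_actions, background + [prop])
        if final[obs_fluent] == obs_value:
            candidates.append(prop)

print(candidates)
# Both "shoot causes not alive" and the more specific
# "shoot causes not alive if loaded" explain the observation.
```

A real inducer would of course restrict this search — the paper's approach does so via regular inference over the automaton induced by the transition semantics — but the sketch shows the explanatory criterion: a hypothesis is acceptable only if background knowledge plus the hypothesis reproduces the observed fact at the end of the action sequence.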