Design by example: An application of Armstrong relations
Journal of Computer and System Sciences
Making believers out of computers
Artificial Intelligence
A theory of diagnosis from first principles
Artificial Intelligence
Probabilistic reasoning in intelligent systems: networks of plausible inference
Model-preference default theories
Artificial Intelligence
Hard problems for simple default logics
Artificial Intelligence - Special issue on knowledge representation
On selecting a satisfying truth assignment (extended abstract)
SFCS '91 Proceedings of the 32nd Annual Symposium on Foundations of Computer Science
Tractable default reasoning
AAAI '94 Proceedings of the Twelfth National Conference on Artificial Intelligence (vol. 1)
Horn approximations of empirical data
Artificial Intelligence
Learning to reason with a restricted view
COLT '95 Proceedings of the Eighth Annual Conference on Computational Learning Theory
On the hardness of approximate reasoning
Artificial Intelligence
On the Structure of Armstrong Relations for Functional Dependencies
Journal of the ACM (JACM)
Exact learning via the Monotone theory
SFCS '93 Proceedings of the 1993 IEEE 34th Annual Symposium on Foundations of Computer Science
Learning to reason: the non-monotonic case
IJCAI '95 Proceedings of the 14th International Joint Conference on Artificial Intelligence - Volume 2
Translating between Horn representations and their characteristic models
Journal of Artificial Intelligence Research
A connectionist framework for reasoning: reasoning with examples
AAAI '96 Proceedings of the Thirteenth National Conference on Artificial Intelligence - Volume 2
Substitutional definition of satisfiability in classical propositional logic
SAT '05 Proceedings of the 8th International Conference on Theory and Applications of Satisfiability Testing
Reasoning with model-based representations is an intuitive paradigm that has been shown to be theoretically sound and to offer computational advantages over reasoning with formula-based representations of knowledge. In this paper we present further evidence of the utility of such representations. In real-life situations, a reasoner normally fills in a good deal of missing "context" information when answering queries. We model this situation by augmenting the available knowledge about the world with context-specific information, and we show that reasoning with model-based representations can be done efficiently in the presence of varying context information. We then consider the task of default reasoning. We show that default reasoning is a generalization of reasoning within context, in which the reasoner has many "context" rules that may conflict with one another. We characterize the cases in which model-based reasoning supports efficient default reasoning and develop algorithms that efficiently handle fragments of Reiter's default logic. In particular, this includes cases in which performing the default reasoning task with the traditional formula-based representation is intractable. Finally, we argue that these results support an incremental view of reasoning in a natural way.
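The core ideas of the abstract can be illustrated with a minimal sketch (an assumed formulation for illustration, not the paper's actual algorithm): a knowledge base is stored as a set of models (truth assignments) rather than as formulas, a query is entailed when it holds in every stored model, and context-specific reasoning restricts attention to the models consistent with the current context. The variable names and the toy knowledge base below are hypothetical.

```python
def holds(cnf, model):
    """Evaluate a CNF query against one model.

    A query is a list of clauses; each clause is a list of signed
    literals written as (variable, sign) pairs, e.g. ('a', True).
    """
    return all(any(model[v] == sign for v, sign in clause) for clause in cnf)

def entails(models, cnf):
    """Model-based deduction: the KB entails the query iff the query
    holds in every model of the model-based representation."""
    return all(holds(cnf, m) for m in models)

def with_context(models, context):
    """Context-specific reasoning: keep only the models consistent
    with the fixed 'context' assignments, then query as usual."""
    return [m for m in models if all(m[v] == val for v, val in context.items())]

# Hypothetical toy knowledge base over variables a, b, c,
# represented directly by its set of models.
models = [
    {'a': True,  'b': True,  'c': False},
    {'a': True,  'b': False, 'c': True},
    {'a': False, 'b': True,  'c': True},
]

print(entails(models, [[('a', True), ('b', True)]]))   # a or b -> True
print(entails(models, [[('c', True)]]))                # c      -> False

# Under the context a=True, only the first two models remain,
# and "b or c" holds in both of them.
print(entails(with_context(models, {'a': True}),
              [[('b', True), ('c', True)]]))           # True
```

Note that each query costs one pass over the stored models, and a change of context only filters the model set; no formula manipulation is needed. For this to stay efficient in general one needs a small representative model set (e.g. characteristic models for Horn theories, as in the works cited above), which is the kind of representation the paper builds on.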