Transforming between propositions and features: bridging the gap
AAAI'05 Proceedings of the 20th national conference on Artificial intelligence - Volume 2
A key issue in artificial intelligence is determining how much input detail is needed for successful learning. Too much detail adds overhead and makes learning prone to over-fitting; too little, and it may not be possible to learn anything at all. The issue is particularly relevant when the inputs are relational case descriptions, where a highly expressive vocabulary can also lead to inconsistent representations. For example, in the Whodunit Problem, the task is to form hypotheses about the identity of the perpetrator of an event described using relational propositions, with training data consisting of arbitrary relational descriptions of many similar cases. In this paper, we examine the possibility of translating case descriptions into an alternative vocabulary with a reduced number of predicates, which therefore produces more consistent case descriptions. We compare how the reduced vocabulary affects three learning algorithms: exemplar-based analogy, prototype-based analogy, and association rule learning. We find that it helps some algorithms and hurts others, which gives us insight into all three and indicates when reduced vocabularies are appropriate.
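The vocabulary translation described above can be sketched in a few lines. This is an illustrative assumption, not the authors' implementation: the predicate names, the example case, and the `REDUCED_VOCAB` mapping table are all hypothetical, chosen only to show how many specific predicates could collapse onto fewer general ones.

```python
# A case is a set of relational propositions: (predicate, *arguments).
# All predicate and entity names below are invented for illustration.
case = {
    ("bombing-of", "event1", "embassy3"),
    ("shooting-at", "event2", "convoy7"),
    ("perpetrator-of", "event1", "groupA"),
}

# Hypothetical mapping from specific predicates to a reduced vocabulary.
# Collapsing near-synonymous predicates yields more consistent
# descriptions across cases, at the cost of some detail.
REDUCED_VOCAB = {
    "bombing-of": "attack-on",
    "shooting-at": "attack-on",
    "perpetrator-of": "agent-of",
}

def translate(case):
    """Rewrite each proposition's predicate via the reduced vocabulary;
    predicates with no mapping are kept unchanged."""
    return {(REDUCED_VOCAB.get(pred, pred), *args)
            for (pred, *args) in case}

reduced = translate(case)
```

After translation, the two attack propositions share the predicate `attack-on`, so learners that match cases by predicate overlap see the cases as more alike, which is precisely the trade-off the paper evaluates across the three algorithms.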