Designing the representation languages for the input and output of a learning algorithm is the hardest task in machine learning applications. Transforming the given representation of observations into a well-suited language LE can ease learning, so that a simple and efficient algorithm solves the learning problem. Learnability is defined with respect to LH, the representation of the learning output. If predictive accuracy is the only criterion for the success of learning, choosing LH means finding the hypothesis space with the most easily learnable concepts that still contains the solution. Additional success criteria, such as comprehensibility and embeddedness, may call for transformations of LH so that users can easily interpret, and other systems can easily exploit, the learning results. Designing a single language LH that is optimal with respect to all criteria is too hard a task. Instead, we design families of representations, in which each member is well suited to a particular set of requirements, and we implement transformations between the representations. In this paper, we discuss a representation family within Horn logic. Work on tailoring representations is illustrated by a robot application.
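As an illustrative sketch only (not taken from the paper), one familiar transformation between members of a representation family is propositionalization: relational observations, given as ground facts, are mapped into a propositional feature table that a simple attribute-value learner can handle. All names and predicates below are hypothetical.

```python
# Hypothetical example: transforming a relational representation of
# observations (ground facts) into a propositional feature vector.
# Predicates, constants, and feature names are illustrative only.

facts = {
    ("parent", "ann", "bob"),
    ("parent", "bob", "cal"),
    ("female", "ann"),
}

constants = {"ann", "bob", "cal"}

# Each feature plays the role of the body of a Horn clause, evaluated
# as an existential query against the observed facts.
features = {
    "is_female":  lambda x: ("female", x) in facts,
    "has_child":  lambda x: any(("parent", x, y) in facts for y in constants),
    "has_parent": lambda x: any(("parent", y, x) in facts for y in constants),
}

def propositionalize(x):
    """Map one object to a row of Boolean attribute values."""
    return {name: f(x) for name, f in features.items()}

# The resulting table is a representation a simple learner can use.
rows = {c: propositionalize(c) for c in sorted(constants)}
```

The choice of queries fixes the transformed language: a richer feature set makes more concepts expressible propositionally, at the cost of a larger hypothesis space.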