Systems that learn: an introduction to learning theory for cognitive and computer scientists
Learning from good and bad data
LIME: A System for Learning Relations
ALT '98 Proceedings of the 9th International Conference on Algorithmic Learning Theory
Learning from good data and bad
Induction in first order logic from noisy training examples and fixed example set sizes
ILP with noise and fixed example size: a Bayesian approach
IJCAI'97 Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence - Volume 2
The approach used to assess a learning algorithm should reflect the type of environment in which the algorithm is placed. Learners are often given examples that both contain noise and are governed by a particular distribution, so probabilistic identification in the limit is an appropriate tool for assessing such learners. In this paper we introduce an exact notion of probabilistic identification in the limit based on Laird's thesis. The framework incorporates a variety of learning situations, including noise-free positive examples, noisy independently generated examples, and noise-free positive and negative examples. This yields a useful technique for assessing the effectiveness of a learner when the training data is governed by a distribution and is possibly noisy.

We also give a preliminary theoretical evaluation of the Q-heuristic. We show that a learner using the Q-heuristic stochastically learns in the limit any finite class of concepts, even when noise is present in the training examples. This result is encouraging: with enough data, the learner can be expected to induce a correct hypothesis. The proof extends to show that a restricted infinite class of concepts can also be stochastically learnt in the limit, where the restriction requires the hypothesis space to be g-sparse.
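The finite-class result above can be illustrated with a small simulation. The sketch below uses a generic minimum-disagreement rule over a toy finite concept class, not the Q-heuristic itself (whose details are not given here); the domain, concept class, and noise rate are all illustrative assumptions. With a label-noise rate below 1/2, the hypothesis with the fewest disagreements on a growing sample converges to the target concept with high probability, which is the intuition behind stochastic learning in the limit.

```python
import random

# Illustrative setup (assumed, not from the paper): a finite concept class
# over the domain {0..7}, where each concept is a subset of the domain.
random.seed(0)

DOMAIN = list(range(8))
# Finite concept class: the empty set plus all singletons (9 concepts).
CONCEPTS = [frozenset()] + [frozenset([x]) for x in DOMAIN]
TARGET = frozenset([3])
NOISE = 0.1  # each label is flipped independently with probability 0.1


def draw_example():
    """Draw one example: uniform over the domain, with a noisy label."""
    x = random.choice(DOMAIN)
    label = x in TARGET
    if random.random() < NOISE:
        label = not label
    return x, label


def min_disagreement(sample):
    """Return the concept that disagrees with the fewest sample labels.

    A generic consistency-based rule, standing in for the Q-heuristic.
    """
    return min(
        CONCEPTS,
        key=lambda c: sum((x in c) != y for x, y in sample),
    )


# With 500 noisy examples, the target's expected disagreement count (~50,
# only the flipped labels) is well below any rival concept's (>= ~100),
# so the minimum-disagreement hypothesis converges to the target.
sample = [draw_example() for _ in range(500)]
hypothesis = min_disagreement(sample)
print(hypothesis == TARGET)
```

The margin driving convergence is the gap between the noise rate and the error rate of every incorrect concept; because the class is finite, that gap is bounded away from zero, which is why the finite-class result holds even under noise.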