Indirect negative evidence is clearly an important way for learners to constrain over-generalisation, yet it has so far lacked a satisfactory learning-theoretic analysis, whether in the PAC framework or in a probabilistic identification-in-the-limit framework. In this paper we propose a theoretical analysis of indirect negative evidence that allows for the presence of ungrammatical strings in the input and also accounts for the relationship between grammaticality/acceptability and probability. Given independently justified assumptions about lower bounds on the probabilities of grammatical strings, we show that a limited number of membership queries can be probabilistically simulated.
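To see the flavour of the final claim, consider the following minimal Python sketch. It is not the paper's construction: it assumes, as an illustration, that every grammatical string has probability at least p_min under the input distribution and that any ungrammatical string leaking into the input has probability below p_min/2; under those assumptions a membership query on a string w can be simulated, with failure probability at most delta, by thresholding w's empirical frequency in a polynomial number of positive samples. The function names, the parameters p_min and delta, and the toy distribution are all hypothetical.

    import math
    import random

    def simulated_membership_query(w, draw, p_min, delta, rng):
        # Illustrative assumption: grammatical strings have true probability
        # >= p_min; ungrammatical strings in the input have probability
        # < p_min / 2.  We threshold the empirical frequency of w at the
        # midpoint 3 * p_min / 4.  By Hoeffding's inequality,
        # n >= ln(1/delta) / (2 * (p_min/4)**2) samples keep the failure
        # probability of the simulated answer below delta.
        margin = p_min / 4.0
        n = math.ceil(math.log(1.0 / delta) / (2.0 * margin ** 2))
        hits = sum(1 for _ in range(n) if draw(rng) == w)
        return hits / n >= 3.0 * p_min / 4.0   # True ~ "w is grammatical"

    if __name__ == "__main__":
        # Toy positive-sample distribution (hypothetical): the grammatical
        # strings "ab" and "aabb" each have probability 0.49 >= p_min, and
        # the ungrammatical string "ba" leaks in with probability
        # 0.02 < p_min / 2, as the assumptions require.
        def draw(rng):
            r = rng.random()
            if r < 0.49:
                return "ab"
            if r < 0.98:
                return "aabb"
            return "ba"

        rng = random.Random(0)
        for w in ["ab", "aabb", "ba", "abab"]:
            verdict = simulated_membership_query(w, draw, p_min=0.2,
                                                 delta=0.05, rng=rng)
            print(w, "->", "grammatical" if verdict else "ungrammatical")

With p_min = 0.2 and delta = 0.05 the sketch draws 600 samples per query; strings never (or rarely) observed are answered "ungrammatical", which is exactly the indirect-negative-evidence reading of their absence.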