Tradeoffs in machine inductive inference
The presence of an "infinitely-often correct teacher" in scientific inference and language acquisition is motivated and studied. The treatment is abstract. In the practice of science, a scientist performs experiments to gather data about some phenomenon and then tries to construct an explanation (or theory) for it. A standard model of this practice is an inductive inference machine (the scientist) learning a program (an explanation) from the graph (the set of experiments) of a recursive function (the phenomenon). It is argued that this model of science is not adequate: scientists, in addition to performing experiments, make use of an approximate explanation of the phenomenon under investigation, based on the "state of the art". This approximate explanation is modeled as additional information in the scientific process, and it is shown that the inference power of machines improves in its presence. The quality of this approximate information is modeled using certain "density" notions, and it is shown that additional information about a better-quality approximate explanation enhances the inference power of learning machines more than a "not so good" one.

Inadequacies in Gold's paradigm of language learning are also investigated. It is argued that Gold's model fails to incorporate the additional information that children receive from their environment. Children are sometimes told a grammatical rule that enumerates elements of the language; such rules are a form of additional information. Children are also given information about what is not in the language: sometimes they are rebuked for making incorrect utterances, or are told a rule that enumerates certain non-elements of the language. Gold's model is extended to incorporate both kinds of additional information, and it is shown that either type enhances the learning power of formal language learning devices.
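The basic model described above — a machine that sees the graph of a recursive function point by point and eventually converges on a correct program — can be illustrated with a minimal sketch. This is not from the paper itself; it is an identification-by-enumeration learner over a hypothetical hypothesis class (small-integer-coefficient polynomials standing in for "programs"), shown only to make the learning-in-the-limit idea concrete.

```python
# Illustrative sketch (not the paper's construction): Gold-style
# identification in the limit via enumeration. The "scientist" conjectures
# the first hypothesis, in a fixed enumeration, consistent with all data
# (the part of the function's graph) seen so far.
from itertools import product

def enumerate_hypotheses(max_degree=2, coeff_range=range(-3, 4)):
    """Yield (coeffs, function) pairs in a fixed, repeatable order."""
    for degree in range(max_degree + 1):
        for coeffs in product(coeff_range, repeat=degree + 1):
            # Bind coeffs at definition time so each closure is independent.
            yield coeffs, (lambda c: lambda x: sum(a * x**i for i, a in enumerate(c)))(coeffs)

def learner(data):
    """Return the first enumerated hypothesis consistent with the data."""
    for coeffs, h in enumerate_hypotheses():
        if all(h(x) == y for x, y in data):
            return coeffs
    return None

# Target "phenomenon": f(x) = x**2 + 1. Feed its graph incrementally; the
# conjectures stabilize once the data rules out all earlier hypotheses.
target = lambda x: x**2 + 1
data, conjectures = [], []
for x in range(6):
    data.append((x, target(x)))
    conjectures.append(learner(data))
print(conjectures[-1])  # → (1, 0, 1), i.e. 1 + 0*x + 1*x**2
```

On this reading, the paper's "additional information" (an approximate explanation, or rules enumerating elements and non-elements of a language) amounts to constraining or pruning the enumeration before any data arrive, which is why it can only enlarge the class of learnable phenomena.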