Learning power and language expressiveness

  • Authors:
  • Eric Martin, Arun Sharma, Frank Stephan

  • Affiliations:
  • Eric Martin and Arun Sharma: School of Computer Science and Engineering, UNSW Sydney, NSW 2052, Australia; Frank Stephan: Mathematisches Institut, Universität Heidelberg, Im Neuenheimer Feld 294, 69120 Heidelberg, Germany

  • Venue:
  • Theoretical Computer Science - Australasian Computer Science
  • Year:
  • 2003

Abstract

This work studies the relationship between the power of learning algorithms on the one hand and the expressive power of the logical language used to represent the problems to be learned on the other. The central question is whether enriching the language results in more learning power. To make the question relevant and nontrivial, it is required that both texts (sequences of data) and hypotheses (guesses) be translatable from the "rich" language into the "poor" one. The issue is considered for several logical languages suitable for describing structures whose domain is the set of natural numbers. It is shown that enriching the language gives no advantage for those languages which define a monadic second-order language that is decidable in the following sense: there is a fixed interpretation in the structure of the natural numbers such that the set of sentences of the extended language true in that structure is decidable. But enriching the original language by even a single constant gives an advantage if the language contains a binary function symbol (interpreted as addition). Furthermore, it is shown that behaviourally correct learning has exactly the same power as learning in the limit for languages which define a monadic second-order language with the above property, but has more power for languages containing a binary function symbol. Under the natural additional requirement that the set of all structures to be learned is recursively enumerable, it is shown that it pays off to enrich the language of arithmetic for both finite learning and learning in the limit, but not for behaviourally correct learning.
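
The abstract uses the notions of finite learning, learning in the limit, and behaviourally correct learning without defining them. The following success criteria are the standard Gold-style ones from inductive inference, given here only as background (they are not quoted from the paper); M denotes a learner, T a text enumerating the data, and T[n] its first n elements.

```latex
% Background definitions (standard inductive-inference criteria, not taken from the paper).
% "correct" means the hypothesis describes the target structure presented by the text T.

% Learning in the limit (Ex): the learner converges syntactically to one correct hypothesis.
\exists n \,\big[\, M(T[n]) \text{ is correct} \;\wedge\; \forall m \ge n:\ M(T[m]) = M(T[n]) \,\big]

% Behaviourally correct learning (BC): from some point on every conjecture is correct,
% although the conjectures may keep changing syntactically.
\exists n \,\forall m \ge n:\ M(T[m]) \text{ is correct}

% Finite learning: the learner commits to a single hypothesis after finitely many data
% items, and that hypothesis must be correct.
```

Every Ex-learner is trivially a BC-learner; the paper's results concern when the converse holds for learnable classes of structures, depending on the expressiveness of the underlying logical language.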