Experience, generations, and limits in machine learning

  • Authors:
  • Mark Burgin; Allen Klinger

  • Affiliations:
  • Department of Computer Science, University of California, Los Angeles, 405 Hilgard Ave., Los Angeles, CA (both authors)

  • Venue:
  • Theoretical Computer Science - Super-recursive algorithms and hypercomputation
  • Year:
  • 2004

Abstract

This paper extends traditional models of machine learning beyond their one-level structure by introducing previously obtained problem knowledge into the algorithm or automaton involved. Some authors have studied models more advanced than the traditional ones, which utilize some kind of predetermined knowledge and therefore have a two-level structure. However, even in this case, the models did not reflect the source of the predetermined knowledge or its inherited properties. In society, knowledge is often transmitted from previous generations. The aim of this paper is to construct and study algorithmic models of learning processes that utilize predetermined or prior knowledge. The models use recursive, subrecursive, and super-recursive algorithms. Predetermined knowledge takes three forms: a text description, activity rules (e.g., for cognition), and specific structured personal or social memory. Algorithmic models represent these three forms as separate structured processing systems: automata with (1) advice, (2) a structured program, and (3) structured memory. This yields three basic models for learning systems: polynomially bounded Turing machines, Turing machines, and inductive Turing machines of the first order.
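To make the idea of a learner equipped with predetermined knowledge concrete, the following is a minimal, hedged sketch in Python. It is not the paper's formal construction (which is stated in terms of automata and Turing machines with advice, structured programs, or structured memory); the class name `AdvisedLearner`, the per-length advice table, and the toy labeling task are illustrative assumptions chosen only to show how inherited knowledge and individually acquired experience can coexist in one learning system.

```python
# Illustrative sketch (not from the paper): a toy learner that combines
# predetermined knowledge (an "advice" table, analogous to advice strings
# given to an automaton) with experience accumulated during learning.

from typing import Dict


class AdvisedLearner:
    """Predicts a bit for each binary string, consulting per-length advice first."""

    def __init__(self, advice: Dict[int, int]):
        # advice[n] is a hint that applies to all inputs of length n;
        # it plays the role of knowledge inherited from "previous generations".
        self.advice = advice
        self.memory: Dict[str, int] = {}  # experience gathered during learning

    def learn(self, example: str, label: int) -> None:
        # Store observed examples: the learner's own accumulated experience.
        self.memory[example] = label

    def predict(self, x: str) -> int:
        if x in self.memory:           # direct experience takes precedence
            return self.memory[x]
        if len(x) in self.advice:      # otherwise fall back on inherited advice
            return self.advice[len(x)]
        return 0                       # default guess with no knowledge at all


if __name__ == "__main__":
    # Hypothetical advice encoding "strings of even length tend to be labeled 1".
    learner = AdvisedLearner(advice={2: 1, 4: 1})
    learner.learn("01", 0)             # a specific experience overrides the advice
    print(learner.predict("01"))       # -> 0 (from experience)
    print(learner.predict("10"))       # -> 1 (from advice)
    print(learner.predict("111"))      # -> 0 (no applicable knowledge)
```

The two-level structure discussed in the abstract corresponds here to the separation between the fixed advice table (first level, supplied before learning begins) and the memory of examples (second level, built up during learning); the paper's point is that the formal power of the resulting learning system depends on how that first level is represented, as advice, as a structured program, or as structured memory.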