Mind change optimal learning: theory and applications

  • Authors:
  • Wei Luo

  • Affiliations:
  • Simon Fraser University (Canada)

  • Year:
  • 2007

Abstract

Learning theories play as significant a role in machine learning as computability and complexity theories do in software engineering. Gold's language learning paradigm is a cornerstone of modern learning theory. The aim of this thesis is to establish an inductive principle within Gold's paradigm that can guide the design of machine learning algorithms. We follow the common practice of measuring the complexity of Gold's language learning problems by the number of mind changes, and we study learning that is efficient with respect to mind changes. Our starting point is the idea that an efficient learner minimizes mind changes not only globally, over the entire learning problem, but also locally, in the subproblems that remain after some evidence has been received. Formalizing this idea leads to the notion of mind change optimality. We characterize the mind change complexity of language collections with Cantor's classic concept of accumulation order, and we show that the characteristic property of mind change optimal learners is that they output conjectures (languages) of maximal accumulation order. We thereby obtain an inductive principle in Gold's language learning paradigm based on the simple topological concept of accumulation order.

The new inductive principle enables the analysis of the practical problem of learning Bayes net structure within the rich theoretical framework of Gold's learning paradigm. The Bayes net is one of the most prominent formalisms for knowledge representation and for probabilistic and causal reasoning. Applying the inductive principle of mind change optimality leads to a unique fastest mind change optimal Bayes net learner: it conjectures a graph when that graph is the unique minimal "independence map" of the evidence seen so far, and outputs "no guess" otherwise. Because an exact implementation of this fast, mind change optimal learner is NP-hard, mind change optimality can be approximated with a hybrid criterion for learning Bayes net structure that combines search based on a scoring function with information from statistical tests. We show how to adapt local search algorithms to incorporate the new criterion, and simulation studies provide evidence that one such algorithm yields substantially improved structures on small to medium samples.

Keywords: learning theory, inductive inference, mind change, accumulation order, inductive principle, Bayes net, conditional independence, constraint-based learning, score-based learning
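
The characterization by accumulation order can be illustrated at toy scale. Below is a Python sketch, not the thesis's construction, that computes accumulation orders for a finite collection of finite languages. It rests on an assumption about the positive-information topology: a finite language L is an accumulation point of a subcollection exactly when some other member of that subcollection contains it, since the hardest finite sample drawn from L is L itself.

    # Toy sketch (assumption: for finite languages, "accumulation point
    # of a subcollection" reduces to "has a proper superset in it").
    # The order of L is then 1 + the maximal order among its proper
    # supersets in the collection, or 0 if it has none.
    def accumulation_orders(collection):
        orders = {L: 0 for L in collection}
        changed = True
        while changed:                      # iterate to a fixed point
            changed = False
            for L in collection:
                sups = [orders[M] for M in collection if L < M]
                new = 1 + max(sups) if sups else 0
                if new != orders[L]:
                    orders[L], changed = new, True
        return orders

    # A chain L1 < L2 < L3 forces up to two mind changes, and the bottom
    # language indeed gets accumulation order 2:
    chain = [frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})]
    print(accumulation_orders(chain))       # -> orders 2, 1, 0

This is consistent with the abstract's characterization in the finite case: the maximal accumulation order of the collection bounds the number of mind changes an optimal learner needs.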
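
The fastest mind change optimal Bayes net learner described in the abstract can also be sketched at toy scale. Since implementing it exactly is NP-hard, the sketch below simply enumerates an explicitly given hypothesis space, and the independence-map test is a deliberately simplified stand-in: undirected skeletons with marginal independencies only, where a faithful implementation would use DAG patterns and d-separation.

    # Toy sketch of the decision rule: conjecture a graph only when it
    # is the unique minimal independence map (I-map) for the
    # independencies observed so far; otherwise output "no guess".
    from itertools import combinations

    def entailed(edges, variables):
        # Simplification: a non-adjacent pair counts as an entailed
        # (marginal) independence of the skeleton.
        adjacent = {frozenset(e) for e in edges}
        return {frozenset(p) for p in combinations(variables, 2)
                if frozenset(p) not in adjacent}

    def learn(candidates, observed, variables):
        # I-maps: every independence the graph entails was observed.
        imaps = [g for g in candidates
                 if entailed(g, variables) <= observed]
        # Minimal I-maps: no I-map uses a strict subset of the edges.
        minimal = [g for g in imaps if not any(h < g for h in imaps)]
        return minimal[0] if len(minimal) == 1 else "no guess"

    variables = ("X", "Y", "Z")
    candidates = [frozenset(),
                  frozenset({("X", "Y")}),
                  frozenset({("X", "Y"), ("Y", "Z")})]
    # Only X and Z have been found independent; the X-Y-Z chain is the
    # unique minimal I-map, so the learner commits to it:
    print(learn(candidates, {frozenset({"X", "Z"})}, variables))

Withholding a conjecture until a unique minimal I-map exists is what makes the rule cautious: committing earlier would risk avoidable mind changes later.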
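
Finally, the hybrid criterion lends itself to a short local search sketch. The concrete score, test interface, and penalty weight below are illustrative assumptions rather than the thesis's criterion: the point is only that test information can be folded into the objective that a standard score-based search optimizes.

    # Toy sketch: hill climbing over single-edge flips, where the
    # objective combines a model selection score (e.g. BIC, supplied by
    # the caller) with a penalty for each disagreement with a
    # statistical independence test.  `score`, `test_violations`, and
    # `penalty` are illustrative stand-ins, not the thesis's criterion.
    from itertools import combinations

    def hill_climb(variables, score, test_violations, penalty=10.0):
        def objective(edges):
            return score(edges) - penalty * test_violations(edges)
        edges = frozenset()                 # start from the empty graph
        best = objective(edges)
        improved = True
        while improved:
            improved = False
            for e in combinations(variables, 2):
                trial = edges ^ {e}         # add or delete one edge
                value = objective(trial)
                if value > best:
                    edges, best, improved = trial, value, True
        return edges

Treating the tests as a soft penalty rather than hard constraints leaves the score-based search machinery unchanged while still steering it toward structures that respect the detected independencies, which is one plausible reading of how the two sources of information are combined.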