Learning probabilistic automata with variable memory length

  • Authors:
  • Dana Ron; Yoram Singer; Naftali Tishby

  • Affiliations:
  • Institute of Computer Science and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel (all authors)

  • Venue:
  • COLT '94: Proceedings of the Seventh Annual Conference on Computational Learning Theory
  • Year:
  • 1994

Abstract

We propose and analyze a distribution learning algorithm for variable memory length Markov processes. These processes can be described by a subclass of probabilistic finite automata which we name Probabilistic Finite Suffix Automata. The learning algorithm is motivated by real applications in man-machine interaction such as handwriting and speech recognition. Conventionally used fixed memory Markov and hidden Markov models have either severe practical or theoretical drawbacks. Though general hardness results are known for learning distributions generated by sources with similar structure, we prove that our algorithm can indeed efficiently learn distributions generated by our more restricted sources. In particular, we show that the KL-divergence between the distribution generated by the target source and the distribution generated by our hypothesis can be made small with high confidence, in polynomial time and sample complexity. We demonstrate the applicability of our algorithm by learning the structure of natural English text and using our hypothesis to correct corrupted text.
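To make the variable-memory idea concrete, here is a minimal Python sketch of learning such a model as a prediction suffix tree: a context is extended one symbol further into the past only when it occurs often enough and its next-symbol distribution differs noticeably from its parent's. This is an illustrative reconstruction under simplified assumptions, not the paper's exact algorithm or its PSA hypothesis class; the thresholds `min_count` and `divergence`, the add-half smoothing, and the L-infinity gap test (a stand-in for the paper's ratio test) are all placeholders.

```python
from collections import Counter

def next_symbol_counts(text, context):
    """Count the symbols that follow occurrences of `context` in `text`."""
    counts = Counter()
    k = len(context)
    for i in range(k, len(text)):
        if text[i - k:i] == context:
            counts[text[i]] += 1
    return counts

def normalize(counts, alphabet, smoothing=0.5):
    """Smoothed next-symbol distribution (placeholder add-half smoothing)."""
    total = sum(counts.values()) + smoothing * len(alphabet)
    return {s: (counts[s] + smoothing) / total for s in alphabet}

def learn_pst(text, alphabet, max_depth=3, min_count=5, divergence=0.1):
    """Grow a prediction suffix tree: a context survives only if it is
    frequent enough and predicts differently from its parent context."""
    tree = {"": normalize(next_symbol_counts(text, ""), alphabet)}
    frontier = [""]
    while frontier:
        parent = frontier.pop()
        if len(parent) >= max_depth:
            continue
        for a in alphabet:
            child = a + parent  # one more symbol of memory
            counts = next_symbol_counts(text, child)
            if sum(counts.values()) < min_count:
                continue
            dist = normalize(counts, alphabet)
            # keep the child only if it predicts noticeably differently
            if max(abs(dist[s] - tree[parent][s]) for s in alphabet) >= divergence:
                tree[child] = dist
                frontier.append(child)
    return tree

def predict(tree, history):
    """Predict with the longest suffix of `history` stored in the tree."""
    for start in range(len(history) + 1):
        suffix = history[start:]
        if suffix in tree:
            return tree[suffix]
    return tree[""]
```

For instance, `learn_pst("abracadabra" * 50, set("abrcd"))` keeps the context "br" because it is nearly always followed by "a", which an order-0 model cannot capture, while contexts that add no predictive power are never stored. Growing memory only where it pays off is exactly the economy over fixed-order Markov models that the abstract describes.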