Design of a linguistic postprocessor using variable memory length Markov models

  • Authors:
  • I. Guyon; F. Pereira

  • Venue:
  • ICDAR '95: Proceedings of the Third International Conference on Document Analysis and Recognition, Volume 1
  • Year:
  • 1995

Abstract

We describe a linguistic postprocessor for character recognizers. The central module of our system is a trainable variable memory length Markov model (VLMM) that predicts the next character given a variable length window of past characters. The overall system is composed of several finite state automata, including the main VLMM and a proper noun VLMM. The best model reported in the literature (Brown et al., 1992) achieves 1.75 bits per character on the Brown corpus. On that same corpus, our model, trained on 10 times less data, reaches 2.19 bits per character and is 200 times smaller (≈160,000 parameters). The model was designed for handwriting recognition applications but could also be used for other OCR problems and speech recognition.
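
To illustrate the core idea, predicting the next character from a variable-length suffix of the preceding text and scoring a model in bits per character, the sketch below shows a minimal character-level VLMM in Python. It is not the authors' construction: context selection here is a simple count threshold and smoothing is a basic back-off mixture, and all names (CharVLMM, max_order, min_count, alpha) are illustrative assumptions rather than parameters from the paper.

```python
# Minimal sketch of a variable memory length Markov model (VLMM) for
# next-character prediction. Illustrative only; the paper's model uses
# a more refined context-selection and smoothing scheme.
from collections import defaultdict
import math


class CharVLMM:
    def __init__(self, max_order=5, min_count=2):
        self.max_order = max_order    # longest context length, in characters
        self.min_count = min_count    # prune contexts seen fewer times than this
        # context string -> {next character -> count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        # Count next-character occurrences for every context length 0..max_order.
        for i in range(len(text)):
            for k in range(self.max_order + 1):
                if i - k < 0:
                    break
                self.counts[text[i - k:i]][text[i]] += 1
        # Variable memory length: keep only contexts observed often enough.
        self.counts = {c: dict(d) for c, d in self.counts.items()
                       if sum(d.values()) >= self.min_count or c == ""}

    def prob(self, context, char, alpha=0.1):
        # Start from a uniform byte distribution, then refine with progressively
        # longer stored suffixes of the context (simple back-off smoothing).
        p = 1.0 / 256
        for k in range(self.max_order + 1):
            if k > len(context):
                break
            ctx = context[len(context) - k:]
            if ctx not in self.counts:
                continue
            d = self.counts[ctx]
            total = sum(d.values())
            p = (d.get(char, 0) + alpha * p * total) / (total * (1 + alpha))
        return p

    def bits_per_char(self, text):
        # Average negative log2 probability of each character given its context.
        bits = 0.0
        for i, ch in enumerate(text):
            ctx = text[max(0, i - self.max_order):i]
            bits -= math.log2(self.prob(ctx, ch))
        return bits / len(text)


if __name__ == "__main__":
    train_text = "the quick brown fox jumps over the lazy dog " * 200
    test_text = "the lazy dog jumps over the quick brown fox "
    model = CharVLMM(max_order=4)
    model.train(train_text)
    print(f"bits/char on held-out text: {model.bits_per_char(test_text):.2f}")
```

In a postprocessing setting, probabilities of this kind would be combined with the character recognizer's scores to re-rank candidate readings; the bits-per-character figure above is the same evaluation measure the abstract reports on the Brown corpus.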