On the Influence of Vocabulary Size and Language Models in Unconstrained Handwritten Text Recognition

  • Authors:
  • Affiliations:
  • Venue: ICDAR '01 Proceedings of the Sixth International Conference on Document Analysis and Recognition
  • Year: 2001


Abstract

In this paper we present a system for unconstrained handwritten text recognition. The system consists of three components: preprocessing, feature extraction, and recognition. In the preprocessing phase, a page of handwritten text is divided into its lines, and the writing is normalized by means of skew and slant correction, positioning, and scaling. From a normalized text line image, features are extracted using a sliding window technique; at each window position, nine geometrical features are computed. The core of the system, the recognizer, is based on hidden Markov models. A model is provided for each individual character. The character models are concatenated into words using a vocabulary, and the word models are in turn concatenated into models that represent full lines of text. In this way, the difficult problem of segmenting a line of text into individual words is overcome. To enhance the recognition capabilities of the system, a statistical language model is integrated into the hidden Markov model framework. Perplexity is used to preselect useful language models and to compare them; both perplexity as originally proposed and normalized perplexity are considered. In our experiments, several system configurations with different vocabulary sizes were tested. While perplexity increases with a growing vocabulary, we observed that normalized perplexity decreases. This leads to the conclusion that language models become more powerful in recognition tasks as the vocabulary size grows, a conclusion confirmed in a number of experiments. For a system based on a vocabulary of 412 words, a word recognition rate of 78.53% was measured when no language model was used; with a bigram language model, the recognition rate increased to 81.27%. For a 7,719-word vocabulary, 40.47% of the words were recognized correctly without a language model, and 60.05% with bigram information.
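The abstract compares language models via perplexity and a normalized variant. As a minimal illustrative sketch (not the paper's actual model), the snippet below computes bigram perplexity for a word sequence; the `bigram_logprob` interface and the uniform toy model are assumptions made here for demonstration, and dividing by the vocabulary size is only one plausible reading of "normalized perplexity", since the abstract does not give the exact formula.

```python
import math

def bigram_perplexity(sentence, bigram_logprob):
    """Perplexity of a word sequence under a bigram language model.

    `bigram_logprob(prev, word)` must return log2 P(word | prev).
    Perplexity is 2 raised to the negative average log-probability
    per word: PP = 2^(-(1/N) * sum_i log2 P(w_i | w_{i-1})).
    """
    words = ["<s>"] + sentence          # sentence-start symbol as first context
    total_logprob = sum(bigram_logprob(p, w) for p, w in zip(words, words[1:]))
    n = len(sentence)
    return 2.0 ** (-total_logprob / n)

# Toy model: every word in a 3-word vocabulary is equally likely,
# regardless of context (an assumption for illustration only).
VOCAB_SIZE = 3

def uniform_logprob(prev, word):
    return math.log2(1.0 / VOCAB_SIZE)

pp = bigram_perplexity(["a", "b", "c"], uniform_logprob)
# For a uniform model, perplexity equals the vocabulary size (3 here).

# One possible normalization: divide perplexity by the vocabulary size,
# so models over different vocabularies become comparable.
pp_normalized = pp / VOCAB_SIZE
```

Under this reading, a large vocabulary can raise raw perplexity simply because there are more alternatives, while the normalized value reflects how much the language model constrains the choice relative to guessing uniformly, which matches the trend the abstract reports.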