Abstract: In this paper we present a system for unconstrained handwritten text recognition. The system consists of three components: preprocessing, feature extraction, and recognition. In the preprocessing phase, a page of handwritten text is divided into its lines, and the writing is normalized by means of skew and slant correction, positioning, and scaling. From a normalized text line image, features are extracted using a sliding window technique; at each window position, nine geometrical features are computed. The core of the system, the recognizer, is based on hidden Markov models. A model is provided for each individual character. The character models are concatenated into word models using a vocabulary, and the word models are in turn concatenated into models that represent full lines of text. Thus the difficult problem of segmenting a line of text into individual words is avoided. To enhance the recognition capabilities of the system, a statistical language model is integrated into the hidden Markov model framework. Perplexity is used to preselect useful language models and to compare them; both perplexity as originally proposed and normalized perplexity are considered. In our experiments, several system configurations with different vocabulary sizes were tested. While perplexity increases with a growing vocabulary, we observed that normalized perplexity decreases. This leads to the conclusion that language models become more powerful in recognition tasks with larger vocabularies, and the conclusion was confirmed in a number of experiments. For a system based on a vocabulary of 412 words, a word recognition rate of 78.53% was measured when no language model was engaged; using a bigram language model, the recognition rate increased to 81.27%. For a 7719-word vocabulary, 40.47% of the words were recognized correctly without a language model, and 60.05% with bigram information.
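The abstract selects and compares language models by perplexity. As a minimal sketch of the standard bigram perplexity computation (the function name, the toy probabilities, and the `<s>` start token below are illustrative assumptions, not the paper's data or definitions):

```python
import math

def bigram_perplexity(words, bigram_prob):
    """Perplexity of a word sequence under a bigram model:
    PP = 2 ** (-(1/n) * sum_i log2 P(w_i | w_{i-1})).
    A lower perplexity means the model predicts the sequence better."""
    log_sum = 0.0
    prev = "<s>"  # assumed sentence-start token
    for w in words:
        log_sum += math.log2(bigram_prob(prev, w))
        prev = w
    return 2 ** (-log_sum / len(words))

# Toy bigram probabilities, purely for illustration.
probs = {("<s>", "the"): 0.5, ("the", "cat"): 0.25, ("cat", "sat"): 0.5}
pp = bigram_perplexity(["the", "cat", "sat"], lambda p, w: probs[(p, w)])
# log2 terms: -1, -2, -1 -> PP = 2 ** (4/3), roughly 2.52
```

Normalized perplexity additionally factors out the vocabulary size, which is what allows the abstract's comparison across 412-word and 7719-word lexicons; the exact normalization used is defined in the paper itself.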