Leave-One-Out-Training and Leave-One-Out-Testing Hidden Markov Models for a Handwritten Numeral Recognizer: The Implications of a Single Classifier and Multiple Classifications

  • Authors:
  • Albert Hung-Ren Ko; Paulo Cavalin; Robert Sabourin; Alceu de Souza Britto, Jr.

  • Affiliations:
  • University of Toronto, Toronto; Génie de la Production Automatisée (GPA), Montréal; École de Technologie Supérieure, Montréal; Informática Aplicada (PPGIa-PUCPR), Curitiba

  • Venue:
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Year:
  • 2009

Abstract

Hidden Markov Models (HMMs) have been shown to be useful in handwritten pattern recognition. However, owing to their fundamental structure, they have little resistance to unexpected noise among observation sequences. In other words, unexpected noise in a sequence might "break" the normal transitions between states for that sequence, making it unrecognizable to the trained models. To resolve this problem, we propose a leave-one-out-training strategy, which makes the models more robust, and a leave-one-out-testing method, which compensates for some of the negative effects of this noise. The latter is an example of a system with a single classifier and multiple classifications. Compared with the 98.00 percent accuracy of the benchmark HMMs, the new system achieves a 98.88 percent accuracy rate on handwritten digits.
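
To make the leave-one-out-testing idea concrete, the sketch below scores a test sequence under each class HMM once for the full sequence and once for every variant with a single observation removed, then combines the resulting classifications. This is only a minimal illustration of the abstract's description, not the paper's exact formulation: the discrete `(pi, A, B)` model format, the function names, and the majority-vote combination rule are all assumptions introduced here for illustration.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the standard forward algorithm with per-step scaling.
    pi: (n_states,) initial probs; A: (n_states, n_states) transitions;
    B: (n_states, n_symbols) emission probs; obs: list of symbol indices."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    loglik = np.log(c)
    alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        loglik += np.log(c)  # -inf if the sequence is impossible under the model
        alpha /= c
    return loglik

def leave_one_out_testing(obs, models):
    """Single classifier, multiple classifications: classify the full
    sequence and each leave-one-observation-out variant, then take a
    majority vote (an assumed combination rule)."""
    variants = [obs] + [obs[:i] + obs[i + 1:]
                        for i in range(len(obs)) if len(obs) > 1]
    votes = []
    for v in variants:
        scores = {label: forward_loglik(v, *m) for label, m in models.items()}
        votes.append(max(scores, key=scores.get))
    return max(set(votes), key=votes.count)

if __name__ == "__main__":
    # Toy two-class example: 2 states, 3 discrete symbols per model.
    rng = np.random.default_rng(0)
    def random_model():
        pi = rng.dirichlet(np.ones(2))
        A = rng.dirichlet(np.ones(2), size=2)
        B = rng.dirichlet(np.ones(3), size=2)
        return pi, A, B
    models = {0: random_model(), 1: random_model()}
    print(leave_one_out_testing([0, 2, 1, 1, 0], models))
```

The point of the multiple classifications is that if a single noisy observation would "break" the state transitions for the full sequence, the variant with that observation removed can still match a trained model, and the vote can recover the correct label.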