The Convergence of Iterated Classification

  • Authors:
  • Chang An; Henry S. Baird

  • Venue:
  • DAS '08: Proceedings of the Eighth IAPR International Workshop on Document Analysis Systems
  • Year:
  • 2008

Abstract

We report an improved methodology for training a sequence of classifiers for document image content extraction, that is, the location and segmentation of regions containing handwriting, machine-printed text, photographs, blank space, etc. The resulting segmentation is pixel-accurate, and so accommodates a wide range of zone shapes (not merely rectangles). We have systematically explored the best scale (spatial extent) of features. We have found that the methodology is sensitive to ground-truthing policy, and especially to the precision of ground-truth boundaries. Experiments on a diverse test set of 83 document images show that tighter ground-truth reduces per-pixel classification errors by 45% (from 38.9% to 21.4%). Strong evidence, from both experiments and simulation, suggests that iterated classification converges region boundaries to the ground-truth (i.e., they do not drift). Experiments show that four-stage iterated classifiers reduce the error rates by 24%. We also present an analysis of special cases suggesting reasons why boundaries converge to the ground-truth.
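The iterated-classification idea described above can be illustrated with a minimal one-dimensional toy, not the authors' method: each "stage" re-labels every pixel using the previous stage's labels in a local window (here a simple majority vote stands in for a trained per-stage classifier). The data, window size, and noise rate are all illustrative assumptions; the point is only that iterating the stages smooths out isolated errors without the class boundary drifting.

```python
import numpy as np

def iterate_labels(labels, window=2):
    """One 'stage' of iterated classification: re-label each pixel by a
    majority vote over the previous stage's labels in a local window.
    (A stand-in for a trained per-stage classifier.)"""
    n = len(labels)
    out = np.empty_like(labels)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        # Assign class 1 if more than half the window is labeled 1.
        out[i] = 1 if labels[lo:hi].sum() * 2 > (hi - lo) else 0
    return out

rng = np.random.default_rng(0)
truth = np.array([0] * 50 + [1] * 50)            # ground-truth: one boundary at 50
noisy = truth ^ (rng.random(100) < 0.2).astype(truth.dtype)  # ~20% pixel errors

stage = noisy
errors = [(stage != truth).sum()]
for _ in range(4):                               # four-stage iterated classifier
    stage = iterate_labels(stage)
    errors.append((stage != truth).sum())
```

With this toy, `errors` decreases across stages while the 0/1 boundary stays near its ground-truth location, mirroring the convergence (no drift) behavior the abstract reports for real document images.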