Sparsity-based super-resolution for offline handwriting recognition

  • Authors:
  • Shiv Vitaladevuni, Huaigu Cao, David Belanger, Krishna Subramanian, Rohit Prasad, Prem Natarajan

  • Affiliations:
  • Raytheon BBN Technologies, Cambridge, MA (all authors)

  • Venue:
  • Proceedings of the 2011 Joint Workshop on Multilingual OCR and Analytics for Noisy Unstructured Text Data
  • Year:
  • 2011

Abstract

We present a sparsity-based approach to super-resolution for handwritten document images, and demonstrate that it improves handwriting recognition accuracy. Given high-resolution training images, low- and high-resolution dictionaries are constructed by extracting patches. The low-resolution patches are adapted to the distortions expected in the out-of-domain test data using image filters. The intuition is that the low-resolution patches should match artifacts in the test images, so that the pristine high-resolution patches can be back-projected to produce a high-resolution version of the test image. Patches from the test images are projected onto the low-resolution dictionary under sparsity constraints, and the projection coefficients are used to back-project high-resolution dictionary elements for super-resolution. Our experiments indicate that this super-resolution yields substantial improvements in handwriting recognition accuracy over bicubic interpolation. An important feature of the approach is the use of a separate, out-of-domain high-resolution dataset for learning and adapting the dictionary, necessitated by the unavailability of high-resolution versions of the test data.
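The core mechanism in the abstract — sparse-code a low-resolution test patch over a low-resolution dictionary, then reuse the coefficients on the paired high-resolution dictionary — can be sketched as follows. This is a minimal illustration, not the authors' implementation: greedy orthogonal matching pursuit stands in for the paper's unspecified sparse solver, and the random dictionaries are placeholders for patch dictionaries extracted from training images.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: sparse-code y over dictionary D
    (columns are atoms) using at most k atoms."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit of the coefficients on the selected atoms
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = y - D @ coef
    return coef

rng = np.random.default_rng(0)
# toy paired dictionaries: 50 atoms of low-res (16-dim) and
# high-res (64-dim) patches; in the paper these come from training images
D_lo = rng.standard_normal((16, 50))
D_lo /= np.linalg.norm(D_lo, axis=0)   # unit-norm low-res atoms
D_hi = rng.standard_normal((64, 50))

# a synthetic low-res test patch that is 3-sparse in D_lo
true_coef = np.zeros(50)
true_coef[[3, 17, 42]] = [1.0, -0.5, 0.8]
y_lo = D_lo @ true_coef

# sparse projection onto the low-res dictionary ...
coef = omp(D_lo, y_lo, k=3)
# ... then back-project the same coefficients through the high-res dictionary
y_hi = D_hi @ coef
```

In the full pipeline, this per-patch reconstruction would be repeated over overlapping patches of the test image and the high-resolution estimates blended together before recognition.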