Toward automatic generation of image-text document surrogates to optimize cognition

  • Authors:
  • Eunyee Koh; Andruid Kerne; Jon Moeller

  • Affiliations:
  • Adobe Systems Inc, San Jose, CA, USA; Texas A&M University, College Station, TX, USA; Texas A&M University, College Station, TX, USA

  • Venue:
  • Proceedings of the 9th ACM/IEEE-CS joint conference on Digital libraries
  • Year:
  • 2009


Abstract

The representation of information collections needs to be optimized for human cognition. Growing information collections play a crucial role in human experiences. While documents often include rich visual components, collections, including personal collections and those generated by search engines, are typically represented as lists of text-only surrogates. By concurrently invoking complementary components of human cognition, combined image-text surrogates help people more effectively see, understand, think about, and remember information collections. This research develops algorithmic methods that use the structural context of images in HTML documents to associate meaningful text with each image and thus derive combined image-text surrogates.
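The paper's approach of using an image's structural context in HTML to find associated text can be illustrated with a minimal sketch. The following Python code is an assumption-laden approximation, not the authors' actual algorithm: it pairs each `<img>` with its `alt` text when present, and otherwise falls back to the text of the nearest enclosing block element (e.g., a `<figure>` caption or surrounding paragraph).

```python
from html.parser import HTMLParser

# Illustrative sketch only: approximates the idea of deriving image-text
# surrogates from an image's structural context in an HTML document.
class SurrogateExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.surrogates = []   # (image src, associated text) pairs
        self._pending = []     # images awaiting context text
        self._text = []        # text fragments collected in the current block

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "img" and "src" in a:
            # alt text is the most direct structural association
            self._pending.append((a["src"], a.get("alt", "").strip()))

    def handle_data(self, data):
        if data.strip():
            self._text.append(data.strip())

    def handle_endtag(self, tag):
        # When a block element closes, attach its text to any pending images
        if tag in ("p", "div", "figure", "td", "li") and self._pending:
            context = " ".join(self._text)
            for src, alt in self._pending:
                self.surrogates.append((src, alt or context))
            self._pending, self._text = [], []

parser = SurrogateExtractor()
parser.feed('<figure><img src="cat.jpg" alt="">'
            '<figcaption>A cat</figcaption></figure>')
print(parser.surrogates)  # image paired with its caption text
```

A real system would need to weigh multiple structural cues (captions, headings, link text, DOM distance) rather than the single fallback used here.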