JellyLens: content-aware adaptive lenses

  • Authors:
  • Cyprien Pindat; Emmanuel Pietriga; Olivier Chapuis; Claude Puech

  • Affiliations:
  • University of Paris-Sud & INRIA, Orsay, France; INRIA & INRIA Chile & University of Paris-Sud, Orsay, France; University of Paris-Sud & INRIA, Orsay, France; INRIA & INRIA Chile & University of Paris-Sud, Orsay, France

  • Venue:
  • Proceedings of the 25th annual ACM symposium on User interface software and technology
  • Year:
  • 2012

Abstract

Focus+context lens-based techniques smoothly integrate two levels of detail, using spatial distortion to connect the magnified region to its context. Distortion guarantees visual continuity, but causes problems of interpretation and focus targeting, partly because most techniques rely on statically defined, regular lens shapes that yield far-from-optimal magnification and distortion. JellyLenses dynamically adapt to the shape of the objects of interest, providing detail-in-context visualizations of higher relevance by optimizing which regions fall into the focus, the context, and the spatially distorted transition between them. This both improves the visibility of content in the focus region and preserves a larger part of the context. We describe the approach and its implementation, and report on a controlled experiment comparing the usability of JellyLenses to that of regular fisheye lenses, showing clear performance improvements with the new technique on a multi-scale visual search task.
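The regular fisheye lenses used as the experimental baseline distort space radially around a focus point. As a rough illustration of that baseline only (not of JellyLens's content-aware deformation, which adapts the lens boundary to object shapes), here is a minimal sketch of a radial graphical fisheye using the classic Sarkar-Brown distortion function; the function name and parameters are illustrative, not taken from the paper.

```python
import math

def fisheye(px, py, fx, fy, radius, d):
    """Radial graphical fisheye (Sarkar-Brown style distortion).

    Points within `radius` of the focus (fx, fy) are displaced
    outward, magnifying the center of the lens; points outside the
    lens are left untouched, preserving the context. `d` is the
    distortion factor (d = 0 means no distortion).
    """
    dx, dy = px - fx, py - fy
    r = math.hypot(dx, dy)
    if r >= radius or r == 0:
        return px, py
    # Normalized distance in [0, 1], remapped by the distortion function
    x = r / radius
    g = ((d + 1) * x) / (d * x + 1)
    # Rescale the displacement so the point lands at the distorted radius
    scale = (g * radius) / r
    return fx + dx * scale, fy + dy * scale
```

For example, with a lens of radius 100 and distortion factor 3 centered at the origin, a point halfway to the lens boundary is pushed out to 80% of the radius, while points on or beyond the boundary stay fixed, which is exactly the hard focus/transition/context split that JellyLens replaces with a shape-adaptive one.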