View Interpolation for Medical Images on Autostereoscopic Displays

  • Authors:
  • Svitlana Zinger; Daniel Ruijters; Luat Do; Peter H. N. de With

  • Affiliations:
  • Eindhoven University of Technology, Eindhoven, The Netherlands; Interventional X-Ray Innovation Department, Philips Healthcare, Best, The Netherlands; Eindhoven University of Technology, Eindhoven, The Netherlands; Eindhoven University of Technology, Eindhoven, The Netherlands

  • Venue:
  • IEEE Transactions on Circuits and Systems for Video Technology
  • Year:
  • 2012

Abstract

We present an approach for efficiently rendering and transmitting views to a high-resolution autostereoscopic display for medical purposes. Displaying biomedical images on an autostereoscopic display poses different requirements than consumer applications do. For medical usage, it is essential that the perceived image represents the actual clinical data and offers sufficiently high quality for diagnosis or understanding. Autostereoscopic display of multiple views introduces two hurdles: transmission of multi-view data through a bandwidth-limited channel and the computation time of the volume rendering algorithm. We address both issues by generating and transmitting a limited set of views, each enhanced with a depth signal. We propose an efficient view interpolation and rendering algorithm at the receiver side, based on the texture+depth data representation, which can operate with a limited number of views. We study the main artifacts that occur during rendering, namely disocclusions, and quantify them first for a synthetic model and then for real-world biomedical data. The experimental results allow us to quantify the peak signal-to-noise ratio (PSNR) for the rendered texture and depth, as well as the number of disoccluded pixels, as a function of the angle between the surrounding cameras.
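
The abstract gives no implementation details, but the core idea can be illustrated with a small sketch. The Python fragment below is a minimal, hypothetical illustration of texture+depth (depth-image-based) view rendering and of the two reported metrics: it forward-warps one reference view to a nearby viewpoint under a simplified rectified, parallel-camera model, flags disoccluded pixels, and computes their fraction together with the PSNR against a ground-truth view. All function names, parameters, and the camera model are assumptions made for illustration and do not reproduce the paper's actual algorithm.

    import numpy as np

    def warp_view(texture, depth, baseline, focal):
        """Forward-warp one texture+depth view to a neighbouring viewpoint.

        Assumes rectified, parallel cameras, so the shift is a purely
        horizontal disparity d = focal * baseline / depth (a common
        simplification; the paper's projection model may differ).
        """
        h, w = depth.shape
        warped = np.zeros_like(texture)
        warped_depth = np.full((h, w), np.inf)
        hole_mask = np.ones((h, w), dtype=bool)  # True where no source pixel lands

        disparity = np.round(focal * baseline / np.maximum(depth, 1e-6)).astype(int)
        for y in range(h):
            for x in range(w):
                xt = x + disparity[y, x]
                if 0 <= xt < w and depth[y, x] < warped_depth[y, xt]:
                    # z-buffering: keep the pixel closest to the camera
                    warped[y, xt] = texture[y, x]
                    warped_depth[y, xt] = depth[y, x]
                    hole_mask[y, xt] = False
        return warped, hole_mask

    def disocclusion_fraction(hole_mask):
        """Fraction of target pixels not covered by any warped source pixel."""
        return hole_mask.mean()

    def psnr(reference, rendered, peak=255.0):
        """Peak signal-to-noise ratio between a ground-truth and a rendered view."""
        mse = np.mean((reference.astype(np.float64) - rendered.astype(np.float64)) ** 2)
        return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

Since the paper interpolates between surrounding cameras, a full implementation would presumably warp the two neighbouring texture+depth views toward the target viewpoint, blend the results, and fill the remaining disoccluded pixels; this sketch only shows the single-view warp and the evaluation metrics.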