Just Noticeable Difference for 3D Images with Depth Saliency

  • Authors:
  • Rui Zhong;Ruimin Hu;Yi Shi;Zhongyuan Wang;Zhen Han;Lu Liu;Jinhui Hu

  • Affiliations:
  • National Engineering Research Center for Multimedia Software, School of Computer, Wuhan University, Wuhan, China (all authors)

  • Venue:
  • PCM'12 Proceedings of the 13th Pacific-Rim conference on Advances in Multimedia Information Processing
  • Year:
  • 2012

Abstract

The just noticeable difference (JND) threshold of an image essentially depends on the varying sensitivity of the human visual system to different stimuli. As the key difference between 2D and 3D visual perception, depth saliency significantly adjusts the eyes' sensitivity to image content. This paper proposes a 3D image JND model that integrates depth saliency as the main influencing factor in order to simulate human vision more accurately. The depth saliency is first computed by fusing multiple depth perceptual stimuli, such as depth intensity and depth contrast. The final JND values are then computed for different 3D image regions according to the influence of their depth saliency. Experimental results demonstrate that the proposed model can tolerate more additional noise in the original image while maintaining subjective quality comparable to that of existing models.
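
The sketch below illustrates the general idea described in the abstract: build a depth saliency map from depth-intensity and depth-contrast cues, then use it to modulate a base JND map so that thresholds shrink in depth-salient regions and grow elsewhere. The cue definitions, the equal fusion weights, the luminance-only base JND, the function names, and the `alpha` parameter are all illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy import ndimage


def depth_saliency(depth):
    """Fuse depth-intensity and local depth-contrast cues into a saliency map in [0, 1].

    NOTE: the cue definitions and the equal 0.5/0.5 weighting are illustrative
    assumptions, not the paper's formulation.
    """
    d = depth.astype(np.float64)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)

    # Depth-intensity cue: pixels closer to the viewer (larger normalized depth
    # value here) are assumed to attract more attention.
    intensity_cue = d

    # Depth-contrast cue: deviation of each pixel's depth from its local mean.
    local_mean = ndimage.uniform_filter(d, size=9)
    contrast_cue = np.abs(d - local_mean)
    contrast_cue /= contrast_cue.max() + 1e-8

    return 0.5 * intensity_cue + 0.5 * contrast_cue


def luminance_jnd(image):
    """Background-luminance JND map (a common 2D approximation, used as a stand-in base)."""
    bg = ndimage.uniform_filter(image.astype(np.float64), size=5)
    return np.where(
        bg <= 127,
        17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
        3.0 / 128.0 * (bg - 127.0) + 3.0,
    )


def jnd_3d(image, depth, alpha=0.5):
    """Modulate the base JND by depth saliency.

    Salient regions (saliency > 0.5) get smaller thresholds, non-salient regions
    larger ones. `alpha` is a hypothetical strength parameter.
    """
    s = depth_saliency(depth)
    return luminance_jnd(image) * (1.0 + alpha * (1.0 - 2.0 * s))


if __name__ == "__main__":
    # Toy usage: inject JND-bounded noise into a synthetic image/depth pair.
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
    depth = rng.integers(0, 256, size=(64, 64)).astype(np.float64)

    jnd = jnd_3d(image, depth)
    noisy = np.clip(image + rng.uniform(-1.0, 1.0, image.shape) * jnd, 0, 255)
    print("mean JND:", jnd.mean(), "max |noise|:", np.abs(noisy - image).max())
```

The usage block shows the intended application of such a model: noise (or quantization distortion) bounded by the per-pixel JND map should remain subjectively invisible, with larger budgets allowed in regions of low depth saliency.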