Design of a quality-of-experience (QoE) optimized mobile video system should consider not only the video content and display specifications but also the fact that mobile devices are exposed to many different environments and viewing scenarios. For the same device and the same content, the viewer perceives different visual quality as the viewing environment changes. Current perceptual quality estimation approaches, including the widely adopted just-noticeable-distortion (JND) based models, neglect the significant influence of the surroundings on perception, even though environmental effects on perception have long been supported by psychophysical experiments. This paper proposes a novel viewing-scenario-adapted model that exploits the influence of various viewing conditions, including display size, viewing distance, ambient luminance, and body movement, and applies the proposed model to H.264 video encoding. With the help of the multiple sensors widely equipped on handheld devices today, the mobile device can dynamically estimate the surrounding conditions. The estimated environment parameters are fed back to the video encoder to generate an encoded video stream that best matches the current scenario, improving bandwidth efficiency and enhancing visual quality for that particular environment. Our subjective experiments demonstrate a significant 30% saving in bit-rate without perceivable quality loss, or clear improvement in visual quality under the same bandwidth constraint.
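The feedback loop described above could be sketched as follows. This is a minimal illustrative sketch, not the paper's actual model: the function names (`jnd_scale`, `qp_offset`), the specific weighting terms, and all constants are assumptions chosen only to show how sensed viewing conditions might be mapped to an H.264 quantization-parameter adjustment.

```python
import math

def jnd_scale(viewing_distance_m, display_height_m, ambient_lux, motion_level):
    """Hypothetical JND scaling factor: values above 1.0 mean coarser
    quantization should be tolerable under the current viewing conditions.
    All weights below are illustrative assumptions, not measured data."""
    # Greater viewing distance relative to display size lowers the
    # effective spatial resolution seen by the eye, raising tolerance.
    distance_term = min(2.0, (viewing_distance_m / display_height_m) / 3.0)
    # Bright surroundings reduce perceived on-screen contrast.
    if ambient_lux > 100.0:
        luminance_term = 1.0 + 0.3 * math.log10(ambient_lux / 100.0)
    else:
        luminance_term = 1.0
    # Device shake (e.g., viewing while walking) masks distortion further.
    motion_term = 1.0 + 0.2 * min(motion_level, 2.0)
    return distance_term * luminance_term * motion_term

def qp_offset(scale, step=6):
    """Map the JND scale to an H.264 QP offset to send back to the encoder.
    A +6 QP change roughly doubles the quantization step size."""
    return round(step * math.log2(max(scale, 1.0)))
```

For example, a handset held at arm's length in bright sunlight while the user walks would yield a scale well above 1.0, and the encoder would receive a positive QP offset, spending fewer bits where the distortion is imperceptible anyway.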