Balancing Attended and Global Stimuli in Perceived Video Quality Assessment

  • Authors:
  • Junyong You, Jari Korhonen, Andrew Perkis, Touradj Ebrahimi

  • Affiliations:
  • Junyong You, Andrew Perkis, Touradj Ebrahimi: Centre for Quantifiable Quality of Service in Communication Systems, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
  • Jari Korhonen: Department of Photonics Engineering, Technical University of Denmark, Lyngby, Denmark

  • Venue:
  • IEEE Transactions on Multimedia
  • Year:
  • 2011

Abstract

The visual attention mechanism plays a key role in the human perception system and has a significant impact on our assessment of perceived video quality. Even though they receive less attention from viewers, unattended stimuli can still contribute to the understanding of visual content. This paper proposes a quality model based on late attention selection theory, assuming that video quality is perceived via two mechanisms: global and local quality assessment. First, we model several visual features that influence visual attention in quality assessment scenarios and derive an attention map using appropriate fusion techniques. Global quality assessment, which is based on the assumption that viewers allocate their attention equally over the entire visual scene, is modeled by four carefully designed quality features. Employing the same quality features, the local quality model, tuned by the attention map, considers degradations of the significantly attended stimuli. To generate the overall video quality score, the global and local quality features are combined by a content-adaptive linear fusion method and pooled over time, taking temporal quality variation into consideration. The experimental results have been compared against results from appropriate eye-tracking and video quality assessment experiments, demonstrating promising performance.
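The overall pipeline described in the abstract (attention-weighted local quality, uniformly pooled global quality, linear fusion, and temporal pooling) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the fixed fusion weight `alpha` stands in for the paper's content-adaptive weight, the per-pixel quality maps stand in for the four designed quality features, and Minkowski temporal pooling is one common choice, all assumptions for the sake of the example.

```python
import numpy as np

def frame_quality(global_map, local_map, attention_map, alpha=0.5):
    """Fuse global and attention-weighted local quality for one frame.

    global_map / local_map: per-pixel quality maps (higher = better),
    standing in for the paper's quality features. attention_map holds
    non-negative saliency weights. `alpha` is a hypothetical fixed
    fusion weight; the paper uses a content-adaptive one.
    """
    q_global = global_map.mean()                        # equal attention everywhere
    w = attention_map / (attention_map.sum() + 1e-12)   # normalize attention weights
    q_local = (local_map * w).sum()                     # quality of attended stimuli
    return alpha * q_global + (1.0 - alpha) * q_local

def temporal_pool(frame_scores, p=2.0):
    """Minkowski pooling over time (p = 1 reduces to the plain mean)."""
    s = np.asarray(frame_scores, dtype=float)
    return (np.mean(s ** p)) ** (1.0 / p)

# Toy example: 3 frames of 4x4 quality maps with a central attention peak.
rng = np.random.default_rng(0)
att = np.zeros((4, 4))
att[1:3, 1:3] = 1.0                                     # attended central region
scores = [frame_quality(rng.uniform(0.6, 1.0, (4, 4)),
                        rng.uniform(0.4, 1.0, (4, 4)), att)
          for _ in range(3)]
video_score = temporal_pool(scores)
```

With per-pixel quality values in [0.4, 1.0], each per-frame score and the pooled video score remain in that range; the attention map simply redistributes how much each region contributes to the local term.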