Sequence-kernel based sparse representation for amateur video summarization

  • Authors:
  • Zheshen Wang, Arizona State University, Tempe, AZ, USA
  • Mrityunjay Kumar, Eastman Kodak Company, Rochester, NY, USA
  • Jiebo Luo, Eastman Kodak Company, Rochester, NY, USA
  • Baoxin Li, Arizona State University, Tempe, AZ, USA

  • Venue:
  • J-MRE '11: Proceedings of the 2011 Joint ACM Workshop on Modeling and Representing Events
  • Year:
  • 2011

Abstract

Automatic video summarization is critical for fast browsing and efficient management of multimedia data. Existing methods focus on well-edited videos with predefined structures (e.g., movies) or constrained content (e.g., news or sports videos). In contrast, the main challenges in summarizing unconstrained amateur or consumer videos are handling extremely diverse content without any pre-imposed structure and coping with typically mediocre video quality. To address these challenges, we explore a signal-reconstruction-based approach that relies only on visual content. Specifically, we propose a sequence-kernel-based sparse representation approach for directly summarizing consumer videos. A dictionary of subsequences is first constructed from clustered frames, with importance-ranking scores derived from extracted high-level semantics. Video summarization is then formulated as seeking an optimal combination of dictionary elements that robustly represents the original video. A weighted-sequence distance is used to compute the approximation error, and a kernel-based feature-sign algorithm estimates the sparse coefficients. The linear combination of dictionary elements with the resulting optimal sparse coefficients is output as the final summary video. Extensive experiments were performed on 18 videos with subjective ratings from 7 evaluators. Results obtained by the proposed approach compare favorably with those of two existing methods, both visually and quantitatively, validating its effectiveness.
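The core idea of the abstract, choosing a sparse combination of dictionary subsequences that best reconstructs the original video, can be illustrated with a toy sketch. This is only a minimal illustration under simplifying assumptions: the "video" is reduced to a single feature vector, the dictionary atoms are toy feature vectors for candidate subsequences, and a simple greedy matching-pursuit solver stands in for the paper's weighted-sequence distance and kernel-based feature-sign algorithm. All names and values below are illustrative, not the authors' implementation.

```python
# Toy sketch of sparse-representation summarization (illustrative only).
# A "video" is a feature vector; each dictionary atom is the feature
# vector of a candidate summary subsequence. We greedily select a sparse
# set of atoms whose weighted combination approximates the video; the
# selected atoms play the role of the summary.

def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy sparse approximation: pick n_atoms atoms and coefficients
    that reduce the reconstruction residual (a stand-in for the paper's
    kernel feature-sign solver)."""
    residual = list(signal)
    coeffs = {}
    for _ in range(n_atoms):
        # pick the atom most correlated with the current residual
        best_i = max(range(len(dictionary)),
                     key=lambda i: abs(dot(residual, dictionary[i])))
        atom = dictionary[best_i]
        c = dot(residual, atom) / dot(atom, atom)
        coeffs[best_i] = coeffs.get(best_i, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atom)]
    return coeffs, residual

# toy "video" feature vector and four candidate subsequences
video = [1.0, 0.9, 0.1, 0.0]
dictionary = [
    [1.0, 1.0, 0.0, 0.0],  # subsequence A
    [0.0, 0.0, 1.0, 1.0],  # subsequence B
    [1.0, 0.0, 0.0, 0.0],  # subsequence C
    [0.0, 1.0, 1.0, 0.0],  # subsequence D
]
coeffs, residual = matching_pursuit(video, dictionary, n_atoms=2)
summary = sorted(coeffs)  # indices of the selected subsequences
```

In this toy run the solver picks the two subsequences whose weighted combination leaves the smallest residual; in the paper, the analogous sparse coefficients over the subsequence dictionary directly define the output summary video.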