MediaDiver: viewing and annotating multi-view video

  • Authors:
  • Gregor Miller, Sidney Fels, Abir Al Hajri, Michael Ilich, Zoltan Foley-Fisher, Manuel Fernandez, Daesik Jang

  • Affiliations:
  • University of British Columbia, Vancouver, BC, Canada (Miller, Fels, Al Hajri, Ilich, Foley-Fisher, Fernandez); Kunsan National University, Gunsan, Republic of Korea (Jang)

  • Venue:
  • CHI '11 Extended Abstracts on Human Factors in Computing Systems
  • Year:
  • 2011

Abstract

We propose to bring MediaDiver, our novel rich media interface, to demonstrate our new interaction techniques for viewing and annotating multi-view video. The demonstration allows attendees to experience novel moving-target selection methods (called Hold and Chase), new multi-view selection techniques, automated quality-of-view analysis that switches viewpoints to follow targets, integrated annotation methods for viewing and authoring meta-content, and advanced context-sensitive transport and timeline functions. As users become increasingly sophisticated at navigating and viewing hyper-documents, they transfer these expectations to new media. Our proposal demonstrates the technology required to meet these expectations for video: users can click directly on objects in the video to link to more information or other videos, easily change camera views, and mark up the video with their own content. Applications of this technology range from home video management to broadcast-quality media production, consumed on both desktop and mobile platforms.
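
The abstract does not describe how the automated quality-of-view analysis is computed. As a minimal illustrative sketch (not the authors' method), assume each camera reports whether a tracked target is visible, how large it appears, and how far it is from the frame centre; a simple weighted score plus a switching margin then decides when to cut to a better view without flickering between similar cameras. All names and weights below are hypothetical.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class ViewObservation:
        """Per-camera observation of a tracked target for one frame."""
        camera_id: str
        visible: bool          # target detected in this view
        target_area: float     # normalised target size (fraction of frame area)
        centre_offset: float   # normalised distance from frame centre (0..1)

    def view_quality(obs: ViewObservation) -> float:
        # Toy score: prefer views where the target is visible, large and central.
        if not obs.visible:
            return 0.0
        return 0.7 * obs.target_area + 0.3 * (1.0 - obs.centre_offset)

    def choose_view(current: str, frame: List[ViewObservation],
                    switch_margin: float = 0.15) -> str:
        # Only cut to another camera if it beats the current one by a margin,
        # so playback does not oscillate between similar views.
        scores: Dict[str, float] = {o.camera_id: view_quality(o) for o in frame}
        best = max(scores, key=scores.get)
        if scores[best] > scores.get(current, 0.0) + switch_margin:
            return best
        return current

    # The target is larger and more central in camera "B",
    # so playback cuts from "A" to "B".
    frame = [
        ViewObservation("A", True, target_area=0.05, centre_offset=0.6),
        ViewObservation("B", True, target_area=0.20, centre_offset=0.1),
        ViewObservation("C", False, target_area=0.0, centre_offset=1.0),
    ]
    print(choose_view("A", frame))  # -> B

The switching margin is the key design choice in this sketch: without it, any small fluctuation in the per-view score would cause rapid camera changes while following a target.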