FOCUS: clustering crowdsourced videos by line-of-sight

  • Authors:
  • Puneet Jain; Justin Manweiler; Arup Acharya; Kirk Beaty

  • Affiliations:
  • Duke University; IBM T. J. Watson; IBM T. J. Watson; IBM T. J. Watson

  • Venue:
  • Proceedings of the 11th ACM Conference on Embedded Networked Sensor Systems
  • Year:
  • 2013

Abstract

We present a demonstration of FOCUS [1], a system appearing in the SenSys 2013 main conference. FOCUS is a video-clustering service for live user video streams, indexed automatically and in real time by shared content. FOCUS uniquely combines visual 3D model reconstruction with multimodal sensing to decipher and continuously track a video's line of sight. Through spatial reasoning on the relative geometry of multiple video streams, FOCUS recognizes shared content even when it is viewed from diverse angles and distances. We believe FOCUS can enable a new family of applications, such as instant replay, augmented reality, citizen journalism, security-breach detection, and disaster assessment. In the demonstration, we will show 325 video clips taken at Duke University's Wallace Wade Stadium being processed in real time by the FOCUS pipeline. Each recorded clip contains one of three spots in the stadium: the East Stand, the Scoreboard, or the West Stand. The demo will be presented through a web interface that first shows the video clips randomly clustered. On a button click, FOCUS then processes the displayed videos in real time and outputs clusters of videos, each sharing a common subject. For each successfully processed clip in a cluster, we will further show similar clips from near, medium, and wide angles. To demonstrate the performance and accuracy of FOCUS in indoor environments, a similar demonstration will be shown for an office space. FOCUS runs on a multi-node Hadoop cluster built on top of the IBM SmartCloud platform.
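FOCUS's actual pipeline relies on 3D model reconstruction and multimodal sensing; the details are in the paper. Purely to illustrate the geometric idea of "spatial reasoning on the relative geometry of multiple video streams", here is a minimal sketch, assuming each video is reduced to an estimated camera position and a unit viewing direction (both hypothetical inputs, not the paper's representation): two videos are grouped when their viewing rays pass close to a common point, i.e. they plausibly frame a shared subject.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def ray_closest_points(p1, d1, p2, d2):
    """Closest approach of two viewing rays (camera position p, unit direction d).
    Parameters are clamped to t >= 0 so only points in front of each camera count
    (a simplification; the exact constrained minimum can differ slightly)."""
    w = tuple(a - b for a, b in zip(p1, p2))
    b = dot(d1, d2)
    denom = 1.0 - b * b
    if abs(denom) < 1e-9:                      # near-parallel lines of sight
        t1, t2 = 0.0, dot(w, d2)
    else:
        t1 = (b * dot(w, d2) - dot(w, d1)) / denom
        t2 = (dot(w, d2) - b * dot(w, d1)) / denom
    t1, t2 = max(t1, 0.0), max(t2, 0.0)        # rays, not infinite lines
    q1 = tuple(p + t1 * d for p, d in zip(p1, d1))
    q2 = tuple(p + t2 * d for p, d in zip(p2, d2))
    return q1, q2

def cluster_by_line_of_sight(rays, thresh=2.0):
    """Union-find clustering: merge two videos whenever their viewing rays
    pass within `thresh` meters of each other."""
    n = len(rays)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            q1, q2 = ray_closest_points(*rays[i], *rays[j])
            if math.dist(q1, q2) < thresh:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

For example, a camera at the origin looking along +x and a camera at (10, 10, 0) looking along -y both point at (10, 0, 0), so they cluster together, while a camera far away looking elsewhere stays in its own cluster. The real system must additionally handle noisy pose estimates and moving cameras, which this toy pairwise test ignores.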