Distributed visual processing for augmented reality

  • Authors:
  • Winston Yii; Wai Ho Li; Tom Drummond

  • Affiliations:
  • Monash University, Australia; Monash University, Australia; Monash University, Australia

  • Venue:
  • ISMAR '12: Proceedings of the 2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
  • Year:
  • 2012

Abstract

Recent advances have made augmented reality on smartphones possible, but these applications are still constrained by the limited computational power available on the device. This paper presents a system that combines smartphones with networked infrastructure and fixed sensors to deliver real-time augmented reality. A key feature of this framework is the asymmetric nature of the distributed computing environment: smartphones have high-bandwidth video cameras but limited computational ability. Our system connects multiple smartphones through relatively low-bandwidth network links to a server with large computational resources that is connected to fixed sensors observing the environment. In contrast to systems that rely on preprocessed static models or markers, our system can rapidly build dynamic models of the environment on the fly at frame rate. We achieve this by processing data from a Microsoft Kinect to build a trackable point cloud model of each frame. The smartphones process their video camera data on-board to extract compact and efficient feature descriptors, which are sent via WiFi to the server. The server runs the computationally intensive algorithms, including feature matching, pose estimation and occlusion testing, for each smartphone. Our system demonstrates real-time performance for two smartphones.
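The abstract describes a client-server split: phones extract compact descriptors and ship them over WiFi, while the server matches them against a per-frame Kinect point-cloud model before running pose estimation and occlusion testing. The paper's own descriptor, wire format, and matcher are not given here, so the sketch below is only an illustration of that split, assuming a hypothetical 32-byte binary descriptor (BRIEF-like), a simple length-prefixed payload, and brute-force Hamming matching; `pack_frame`, `unpack_frame`, and `hamming_match` are invented names.

```python
import struct
import numpy as np

DESC_BYTES = 32  # hypothetical 32-byte binary descriptor (BRIEF-like)

# --- Phone side: pack keypoints + descriptors compactly for the WiFi link ---
def pack_frame(keypoints, descriptors):
    """Serialize N (x, y) keypoints (float32) and N binary descriptors."""
    n = len(keypoints)
    return (struct.pack("<I", n)
            + keypoints.astype(np.float32).tobytes()
            + descriptors.astype(np.uint8).tobytes())

# --- Server side: unpack, then match against the frame's point-cloud model ---
def unpack_frame(payload):
    (n,) = struct.unpack_from("<I", payload, 0)
    kp = np.frombuffer(payload, np.float32, n * 2, 4).reshape(n, 2)
    desc = np.frombuffer(payload, np.uint8, n * DESC_BYTES,
                         4 + n * 8).reshape(n, DESC_BYTES)
    return kp, desc

def hamming_match(query, model, max_dist=40):
    """Brute-force Hamming matching of binary descriptors (uint8 rows)."""
    xor = query[:, None, :] ^ model[None, :, :]    # (Nq, Nm, DESC_BYTES)
    dist = np.unpackbits(xor, axis=2).sum(axis=2)  # popcount per pair
    best = dist.argmin(axis=1)
    d = dist[np.arange(len(query)), best]
    # Matched 2D phone keypoints would pair with 3D Kinect points for the
    # server's pose estimation and occlusion testing (not shown here).
    return [(i, int(j)) for i, j in enumerate(best) if d[i] <= max_dist]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    kp = rng.uniform(0, 640, (100, 2)).astype(np.float32)
    desc = rng.integers(0, 256, (100, DESC_BYTES), dtype=np.uint8)
    payload = pack_frame(kp, desc)           # what the phone sends over WiFi
    kp2, desc2 = unpack_frame(payload)       # what the server receives
    model = np.concatenate([desc, rng.integers(0, 256, (50, DESC_BYTES),
                                               dtype=np.uint8)])
    print(len(hamming_match(desc2, model)))  # the 100 planted matches survive
```

Under the sketch's assumptions, a frame of 100 features costs roughly 4 KB on the uplink, far below streaming the camera video itself, which illustrates why the asymmetric split keeps the low-bandwidth network link feasible while the heavy matching and pose work stays on the server.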