CAME: cloud-assisted motion estimation for mobile video compression and transmission

  • Authors:
  • Yuan Zhao; Lei Zhang; Xiaoqiang Ma; Jiangchuan Liu; Hongbo Jiang

  • Affiliations:
  • Simon Fraser University, Burnaby, BC, Canada (Yuan Zhao, Lei Zhang, Xiaoqiang Ma, Jiangchuan Liu); Huazhong University of Science and Technology, Wuhan, China (Hongbo Jiang)

  • Venue:
  • Proceedings of the 22nd international workshop on Network and Operating System Support for Digital Audio and Video
  • Year:
  • 2012

Abstract

Video streaming has become one of the most popular networked applications, and with the increased bandwidth and computational power of mobile devices, anywhere, anytime streaming has become a reality. Unfortunately, compressing high-quality video in real time on such devices remains challenging given the excessive computation and energy demands of compression. On the other hand, transmitting the raw video is simply unaffordable from both energy and bandwidth perspectives. In this paper, we propose CAME, a novel cloud-assisted video compression method for mobile devices. CAME leverages the abundant resources of cloud servers for motion estimation, which is known to be the most computation-intensive step in video compression, accounting for over 90% of the computation time. With CAME, a mobile device selects and uploads only the key information of each picture frame to cloud servers for mesh-based motion estimation, eliminating most of the local computation. We develop smart algorithms that identify the key mesh nodes so as to minimize both distortion and the volume of data uploaded. Our simulation results demonstrate that CAME saves almost 30% of the energy for video compression and transmission.
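The abstract describes selecting a small set of key mesh nodes per frame so that only those nodes, rather than the raw frame, are uploaded for cloud-side motion estimation. The paper's actual selection algorithm is not given here; as a rough, hypothetical illustration of the idea, the sketch below places a uniform mesh over a frame, scores each node by local intensity variation (a stand-in for the paper's distortion-aware criterion), and keeps only the highest-scoring nodes within an upload budget. All names (`node_score`, `select_key_nodes`) and parameters (`spacing`, `budget`, window radius `r`) are illustrative assumptions, not from the paper.

```python
def node_score(frame, x, y, r=2):
    """Score a mesh node by the intensity range (max - min) in a small
    window around it; nodes near object boundaries score highest.
    This criterion is an assumption for illustration only."""
    h, w = len(frame), len(frame[0])
    vals = [frame[j][i]
            for j in range(max(0, y - r), min(h, y + r + 1))
            for i in range(max(0, x - r), min(w, x + r + 1))]
    return max(vals) - min(vals)

def select_key_nodes(frame, spacing=4, budget=8):
    """Place a uniform mesh with the given spacing and return up to
    `budget` (x, y, pixel) triples for the strongest nodes -- the only
    data that would be uploaded to the cloud in this sketch."""
    h, w = len(frame), len(frame[0])
    nodes = [(x, y) for y in range(0, h, spacing) for x in range(0, w, spacing)]
    nodes.sort(key=lambda p: node_score(frame, p[0], p[1]), reverse=True)
    return [(x, y, frame[y][x]) for x, y in nodes[:budget]]

# Toy 16x16 frame: flat background with a bright 6x6 square (a moving object).
frame = [[0] * 16 for _ in range(16)]
for y in range(6, 12):
    for x in range(6, 12):
        frame[y][x] = 200

key_nodes = select_key_nodes(frame, spacing=4, budget=8)
print(key_nodes)
```

In this toy frame the selected nodes ring the object's boundary, while flat background and interior nodes (where motion estimation gains little) are skipped, which is the intuition behind uploading only key mesh nodes to trade a small amount of distortion for far less uplink data.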