Acceleration of bulk memory operations in a heterogeneous multicore architecture

  • Authors:
  • JongHyuk Lee; Ziyi Liu; Xiaonan Tian; Dong Hyuk Woo; Weidong Shi; Dainis Boumber; Yonghong Yan; Kyeong-An Kwon

  • Affiliations:
  • University of Houston, Houston, TX, USA (all authors except Dong Hyuk Woo); Intel Labs, Santa Clara, CA, USA (Dong Hyuk Woo)

  • Venue:
  • Proceedings of the 21st International Conference on Parallel Architectures and Compilation Techniques (PACT '12)
  • Year:
  • 2012

Abstract

In this paper, we present a novel approach that uses the integrated GPU to accelerate bulk memory operations, such as memcpy and memset, that are conventionally performed by the CPU. Offloading bulk memory operations to the GPU has several advantages: i) the throughput-driven GPU outperforms the CPU on bulk memory operations; ii) on a die where the GPU and the CPU share a unified cache, the GPU's private caches can be leveraged by the CPU for storing moved data, reducing the CPU cache bottleneck; iii) with additional lightweight hardware, asynchronous offload can be supported as well; and iv) unlike prior art that relies on dedicated hardware copy engines (e.g., DMA), our approach reuses the existing GPU hardware resources as much as possible. Performance results based on our solution show that offloaded bulk memory operations outperform the CPU by up to 4.3 times on microbenchmarks while using fewer resources. Using eight real-world applications and a cycle-based full-system simulation environment, the results show a 30% speedup for five of the eight applications and a speedup of more than 20% for two more.
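
For illustration only (not taken from the paper): the sketch below shows, in CUDA, how a bulk memcpy can be expressed as a throughput-oriented GPU kernel, with each thread copying words in a grid-stride loop and the host returning without synchronizing to mimic an asynchronous offload. The kernel name, wrapper function, and launch dimensions (bulk_memcpy, gpu_memcpy_async, 128 blocks of 256 threads) are assumptions for exposition; the paper's actual mechanism offloads the operation to the integrated GPU through lightweight hardware support rather than an explicit user-level kernel launch.

    #include <cuda_runtime.h>
    #include <stddef.h>
    #include <stdint.h>

    // Grid-stride copy kernel: each thread copies 8-byte words at a stride of
    // gridDim.x * blockDim.x, so the launch size need not match the buffer size.
    __global__ void bulk_memcpy(uint64_t *dst, const uint64_t *src, size_t n_words)
    {
        size_t stride = (size_t)gridDim.x * blockDim.x;
        for (size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
             i < n_words; i += stride)
            dst[i] = src[i];
    }

    // Host-side wrapper: launch enough blocks to keep memory bandwidth busy and
    // return without synchronizing, so the CPU can overlap other work.
    void gpu_memcpy_async(void *dst, const void *src, size_t bytes, cudaStream_t s)
    {
        size_t n_words = bytes / sizeof(uint64_t);
        int threads = 256;
        int blocks  = 128;   // assumed launch size; tune for the target GPU
        bulk_memcpy<<<blocks, threads, 0, s>>>((uint64_t *)dst,
                                               (const uint64_t *)src, n_words);
        // Any tail bytes (bytes % 8) would need separate handling in a real path.
    }

A caller would enqueue gpu_memcpy_async on a stream and synchronize only when the copied data is actually needed, which is the same overlap of copy and computation that the asynchronous offload in the paper aims to provide.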