Using data compression in an MPSoC architecture for improving performance

  • Authors:
  • O. Ozturk; M. Kandemir; M. J. Irwin

  • Affiliations:
  • The Pennsylvania State University, University Park, PA (all authors)

  • Venue:
  • GLSVLSI '05 Proceedings of the 15th ACM Great Lakes symposium on VLSI
  • Year:
  • 2005


Abstract

Multiprocessor System-on-Chip (MPSoC) performance and power consumption are greatly affected by the application's data access characteristics. While the way the application is written is critical in shaping the data access pattern, the compiler optimizations employed can also make a significant difference. Considering that the cost of off-chip memory accesses (in terms of CPU cycles) continues to rise, minimizing the number and volume of off-chip data transfers in MPSoCs can be very important. This paper addresses this problem by proposing data compression to increase the effective on-chip storage space in an MPSoC-based environment. A critical issue is to schedule compressions and decompressions intelligently so that they do not conflict with application execution. In particular, one must decide which processors should participate in compression (and decompression) activity at any given point during execution. We propose both "static" and "dynamic" algorithms for this purpose. In the static scheme, the processors are divided into two groups (those performing compression/decompression and those executing the application), and this grouping is maintained throughout the execution of the application. In the dynamic scheme, on the other hand, execution starts with some grouping, but the grouping can change during the course of execution, depending on dynamic variations in the data access pattern.
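The static/dynamic grouping distinction described in the abstract can be illustrated with a minimal sketch. Note this is a hypothetical Python illustration of the general idea only: the function names, the pressure metric, and the thresholds below are invented for exposition and do not reproduce the paper's actual algorithms.

```python
# Hypothetical sketch: partitioning MPSoC processors between
# compression/decompression duty and application execution.
# All names and thresholds are illustrative, not from the paper.

def static_grouping(num_procs, num_compressors):
    """Static scheme: fix the split once. The first `num_compressors`
    processors handle (de)compression; the rest run the application,
    and this grouping holds for the whole execution."""
    compressors = list(range(num_compressors))
    workers = list(range(num_compressors, num_procs))
    return compressors, workers

def dynamic_grouping(num_procs, pressure_trace, hi=0.7, lo=0.3):
    """Dynamic scheme: re-evaluate the split at each epoch based on an
    observed measure of off-chip data pressure (here an arbitrary value
    in [0, 1]). High pressure dedicates more processors to compression;
    low pressure returns them to the application."""
    groupings = []
    num_compressors = 1  # start with some initial grouping
    for pressure in pressure_trace:
        if pressure > hi and num_compressors < num_procs - 1:
            num_compressors += 1   # recruit a processor for compression
        elif pressure < lo and num_compressors > 0:
            num_compressors -= 1   # return a processor to the application
        groupings.append(static_grouping(num_procs, num_compressors))
    return groupings
```

For example, on a 4-processor system with a pressure trace of `[0.9, 0.9, 0.1]`, the dynamic scheme above grows the compression group during the two high-pressure epochs and shrinks it again when pressure drops, whereas the static scheme would keep its initial split throughout.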