Improving disk bandwidth-bound applications through main memory compression

  • Authors:
  • Vicenç Beltran; Jordi Torres; Eduard Ayguadé

  • Affiliations:
  • Barcelona Supercomputing Center, Barcelona, Spain; Technical University of Catalunya, Barcelona, Spain; Technical University of Catalunya, Barcelona, Spain

  • Venue:
  • MEDEA '07 Proceedings of the 2007 workshop on MEmory performance: DEaling with Applications, systems and architecture
  • Year:
  • 2007

Abstract

The objective of main memory compression techniques is to reduce the in-memory data size and thus virtually enlarge the memory available on the system. The main benefit of this technique is the reduction of slow disk I/O operations, which improves data access latency and saves disk I/O bandwidth. Its main drawback, on the other hand, is the large amount of CPU power required by the computationally expensive compression algorithms, which makes the technique unsuitable for medium to large CPU-intensive applications. With the proliferation of multicore processors and multiprocessor systems, the amount of available CPU power is growing at a fast rate. In this scenario, the range of applications that can transparently benefit from main memory compression broadens: not only single-threaded applications bound by disk latency, but also multithreaded ones bound by disk bandwidth, can now profit from main memory compression techniques. In this paper we implement and evaluate, in the Linux OS, a fully SMP-capable main memory compression subsystem that exploits current multicore and multiprocessor systems to increase the performance of bandwidth-sensitive applications such as the SPECweb2005 benchmark, with promising results.
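
To make the underlying trade-off concrete, the following is a minimal user-space sketch (not the authors' SMP kernel subsystem) of the idea the abstract describes: spend CPU cycles compressing a memory page with zlib so that the eventual disk write is smaller, saving disk bandwidth. The compression level, buffer contents, and output file name are illustrative assumptions only.

    /*
     * Sketch: compress a page-sized buffer before writing it to disk,
     * trading CPU time for a smaller write and lower disk bandwidth use.
     * Build with: cc sketch.c -lz
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        /* Page-sized block of (highly compressible) example data. */
        unsigned char page[4096];
        memset(page, 'A', sizeof(page));

        /* zlib requires a slightly larger worst-case output buffer. */
        uLongf clen = compressBound(sizeof(page));
        unsigned char *cbuf = malloc(clen);
        if (!cbuf)
            return 1;

        /* Level 1 favors speed over ratio, roughly the balance a
         * compressed-memory layer must strike to stay off the
         * application's critical path. */
        if (compress2(cbuf, &clen, page, sizeof(page), 1) != Z_OK) {
            free(cbuf);
            return 1;
        }

        printf("writing %lu bytes instead of %zu (%.1f%% of original)\n",
               (unsigned long)clen, sizeof(page),
               100.0 * clen / sizeof(page));

        FILE *f = fopen("page.z", "wb");   /* illustrative output file */
        if (f) {
            fwrite(cbuf, 1, clen, f);
            fclose(f);
        }
        free(cbuf);
        return 0;
    }

The same reasoning applies per memory page inside a compressed-memory subsystem: with enough spare cores, the compression cost can be hidden while every evicted page costs a fraction of its uncompressed size in disk bandwidth.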