Hiding message delivery and reducing memory access latency by providing direct-to-cache transfer during receive operations in a message passing environment

  • Authors:
  • Farshad Khunjush; Nikitas J. Dimopoulos

  • Affiliations:
  • University of Victoria, Victoria, B.C., Canada; University of Victoria, Victoria, B.C., Canada

  • Venue:
  • MEDEA '05: Proceedings of the 2005 workshop on MEmory performance: DEaling with Applications, systems and architecture
  • Year:
  • 2005

Abstract

The focus of this work is on techniques that promise to reduce message delivery latency in message passing environments. The main contributors to this latency are the copying operations needed to transfer and bind a received message to the consuming process/thread. To reduce this copying overhead and achieve finer granularity, we introduced architectural extensions comprising a specialized network cache and instructions to manage its operation. In this work we study the caching environment. Our simulations show that messages can be bound and transferred into the data cache, where they persist long enough to be consumed. We also study the structure of the required network cache and show that a cache of small capacity is sufficient.
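The copying overhead the abstract refers to can be pictured with a conventional receive path, in which an eagerly arriving message is first staged in an intermediate system buffer and only later copied into the application's buffer when the matching receive is posted. The C sketch below is a simplified, hypothetical software model of that two-copy path versus an idealized direct placement; the names, buffer sizes, and structure are illustrative assumptions and do not represent the authors' simulated network cache hardware.

```c
/* Simplified model of the extra copy that direct-to-cache transfer aims to
 * eliminate. All names and structure here are illustrative assumptions,
 * not the authors' simulated architecture. */
#include <stdio.h>
#include <string.h>

#define MSG_SIZE 256

/* Conventional path: the message arrives before the matching receive is
 * posted, so it is staged in a system buffer and copied a second time
 * into the application buffer once the receive operation executes. */
static void receive_with_intermediate_copy(const char *wire, char *app_buf)
{
    char system_buf[MSG_SIZE];              /* unexpected-message staging slot */
    memcpy(system_buf, wire, MSG_SIZE);     /* copy 1: network -> system buffer */
    memcpy(app_buf, system_buf, MSG_SIZE);  /* copy 2: system buffer -> application */
}

/* Idealized direct path: the payload is bound straight to the consuming
 * buffer (in the paper's scheme, placed into the data cache via the
 * network cache), avoiding the intermediate copy. */
static void receive_direct(const char *wire, char *app_buf)
{
    memcpy(app_buf, wire, MSG_SIZE);        /* single transfer */
}

int main(void)
{
    char wire[MSG_SIZE] = "payload arriving from the network";
    char app_buf[MSG_SIZE];

    receive_with_intermediate_copy(wire, app_buf);
    printf("two-copy path: %s\n", app_buf);

    receive_direct(wire, app_buf);
    printf("direct path:   %s\n", app_buf);
    return 0;
}
```

The contrast between the two functions is only meant to locate where the latency the paper targets comes from: the extra memcpy in the staged path stands in for the host-side copy that the proposed network cache and its management instructions are designed to remove.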