Distributed Anemone: transparent low-latency access to remote memory

  • Authors:
  • Michael R. Hines; Jian Wang; Kartik Gopalan

  • Affiliations:
  • Computer Science Department, Binghamton University (all authors)

  • Venue:
  • HiPC '06: Proceedings of the 13th International Conference on High Performance Computing
  • Year:
  • 2006

Abstract

Performance of large-memory applications degrades rapidly once the system hits the physical memory limit and starts paging to local disk. We present the design, implementation, and evaluation of Distributed Anemone (Adaptive Network Memory Engine) – a lightweight, distributed system that pools together the collective memory resources of multiple machines across a gigabit Ethernet LAN. Anemone treats remote memory as another level in the memory hierarchy, between very fast local memory and very slow local disk. It enables applications to access potentially "unlimited" network memory without any application or operating system modifications. Our kernel-level prototype features fully distributed resource management, low-latency paging, resource discovery, load balancing, soft-state refresh, and support for "jumbo" Ethernet frames. Anemone achieves average page-fault latencies of 160 μs and application speedups of up to 4 times for a single process and up to 14 times for multiple concurrent processes, compared against disk-based paging.
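
The remote-memory tier described in the abstract can be pictured as a page-out/page-in exchange between a client and a memory server. The following is a minimal user-space sketch of that idea in C, assuming a hypothetical UDP message format, opcodes, port, and server address; it illustrates the paging interaction only and is not Anemone's actual kernel-level implementation. Carrying each 4 KB page in a single jumbo Ethernet frame avoids IP fragmentation, which is one way such a design keeps per-page latency low.

    /*
     * Minimal user-space sketch of the remote-paging idea: evicted pages are
     * pushed to a memory server over the network (PAGE_OUT) and fetched back
     * on a page fault (PAGE_IN).  The message format, opcodes, server address,
     * and port are hypothetical; byte-order handling and error checking are
     * omitted for brevity.
     */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define PAGE_SIZE   4096
    #define OP_PAGE_OUT 1            /* store an evicted page on the server  */
    #define OP_PAGE_IN  2            /* fetch a stored page back on a fault  */

    struct page_msg {                /* hypothetical wire format: a 4 KB     */
        uint32_t op;                 /* page plus header fits in one         */
        uint64_t page_id;            /* 9000-byte "jumbo" Ethernet frame     */
        char     data[PAGE_SIZE];
    };

    static void send_request(int sock, const struct sockaddr_in *srv,
                             uint32_t op, uint64_t page_id, const char *page)
    {
        struct page_msg msg = { .op = op, .page_id = page_id };
        if (page)
            memcpy(msg.data, page, PAGE_SIZE);
        sendto(sock, &msg, sizeof(msg), 0,
               (const struct sockaddr *)srv, sizeof(*srv));
    }

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in srv = { .sin_family = AF_INET,
                                   .sin_port   = htons(9999) };
        inet_pton(AF_INET, "192.168.1.10", &srv.sin_addr);  /* memory server */

        char page[PAGE_SIZE];
        memset(page, 0xAB, PAGE_SIZE);

        /* "Page out": push the page's contents to remote memory. */
        send_request(sock, &srv, OP_PAGE_OUT, 42, page);

        /* "Page in": on a later fault, request the page and wait for it. */
        send_request(sock, &srv, OP_PAGE_IN, 42, NULL);
        struct page_msg reply;
        recv(sock, &reply, sizeof(reply), 0);  /* reply.data holds the page */

        close(sock);
        return 0;
    }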