The performance of large-memory applications degrades rapidly once the system exhausts physical memory and starts paging to local disk. We present the design, implementation, and evaluation of Distributed Anemone (Adaptive Network Memory Engine), a lightweight distributed system that pools the collective memory resources of multiple machines across a gigabit Ethernet LAN. Anemone treats remote memory as another level in the memory hierarchy, between very fast local memory and very slow local disk, and enables applications to access potentially "unlimited" network memory without any application or operating system modifications. Our kernel-level prototype features fully distributed resource management, low-latency paging, resource discovery, load balancing, soft-state refresh, and support for jumbo Ethernet frames. Compared against disk-based paging, Anemone achieves average page-fault latencies of 160 μs and application speedups of up to 4x for a single process and up to 14x for multiple concurrent processes.
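The core idea, remote memory as a paging tier between local RAM and disk, can be sketched in a few lines. The following is a toy model only (the class and method names are illustrative, not Anemone's actual kernel interfaces): a fixed number of local page frames, with evicted pages sent to a dictionary that stands in for the pooled memory of remote machines rather than a slow local disk.

```python
from collections import OrderedDict

class RemoteMemoryPager:
    """Toy sketch of remote-memory paging: evicted pages go to a
    stand-in for pooled remote memory instead of local disk."""

    def __init__(self, local_frames):
        self.local = OrderedDict()   # page number -> data, in LRU order
        self.remote = {}             # stand-in for remote memory servers
        self.capacity = local_frames
        self.remote_faults = 0       # page-ins served from the remote pool

    def write(self, pageno, data):
        if pageno not in self.local:
            self._make_room()
        self.local[pageno] = data
        self.local.move_to_end(pageno)

    def read(self, pageno):
        if pageno not in self.local:
            # Page fault: fetch the page back from the remote pool.
            self.remote_faults += 1
            self._make_room()
            self.local[pageno] = self.remote.pop(pageno)
        self.local.move_to_end(pageno)
        return self.local[pageno]

    def _make_room(self):
        # Evict the least-recently-used page to remote memory, not disk.
        if len(self.local) >= self.capacity:
            victim, data = self.local.popitem(last=False)
            self.remote[victim] = data
```

In the real system the `remote` dictionary corresponds to memory servers reached over gigabit Ethernet, which is why a remote fault costs on the order of 160 μs rather than the milliseconds of a disk seek.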