Serverless network file systems. ACM Transactions on Computer Systems (TOCS), special issue on operating system principles.
Requirements of I/O systems for parallel machines: an application-driven study.
Implementing cooperative prefetching and caching in a globally-managed memory system. SIGMETRICS '98/PERFORMANCE '98: Proceedings of the 1998 ACM SIGMETRICS Joint International Conference on Measurement and Modeling of Computer Systems.
ISCA '90: Proceedings of the 17th Annual International Symposium on Computer Architecture.
A low-bandwidth network file system. SOSP '01: Proceedings of the Eighteenth ACM Symposium on Operating Systems Principles.
Experiences with VI communication for database storage. ISCA '02: Proceedings of the 29th Annual International Symposium on Computer Architecture.
My Cache or Yours? Making Storage More Exclusive. ATEC '02: Proceedings of the General Track of the 2002 USENIX Annual Technical Conference.
The Multi-Queue Replacement Algorithm for Second Level Buffer Caches. Proceedings of the General Track: 2002 USENIX Annual Technical Conference.
Simulation study of cached RAID5 designs. HPCA '95: Proceedings of the 1st IEEE Symposium on High-Performance Computer Architecture.
A Log-Based Write-Back Mechanism for Cooperative Caching. IPDPS '03: Proceedings of the 17th International Symposium on Parallel and Distributed Processing.
Cooperative Caching Middleware for Cluster-Based Servers. HPDC '01: Proceedings of the 10th IEEE International Symposium on High Performance Distributed Computing.
PVFS: a parallel file system for Linux clusters. ALS '00: Proceedings of the 4th Annual Linux Showcase & Conference, Volume 4.
On multi-level exclusive caching: offline optimality and why promotions are better than demotions. FAST '08: Proceedings of the 6th USENIX Conference on File and Storage Technologies.
Multi-level buffer cache architectures are widely deployed in today's multiple-tier computing environments. However, caches at different levels are typically inclusive. To make better use of these caches and to achieve performance commensurate with the aggregate cache size, exclusive caching has been proposed. Demotion-based exclusive caching [1] introduces a DEMOTE operation that transfers blocks discarded by an upper-level cache to a lower-level cache. In this paper, we propose a DEMOTE buffering mechanism over storage networks that reduces the visible cost of DEMOTE operations and provides more flexibility for optimizations. We evaluate the performance of DEMOTE buffering using simulations across both synthetic and real-life workloads on three different networks and protocol layers (TCP/IP on Fast Ethernet, IBNice on InfiniBand, and VAPI on InfiniBand). Our results show that DEMOTE buffering can effectively hide demotion costs. A maximum speedup of 1.4x over the original DEMOTE approach is achieved for some workloads, and speedups of 1.08--1.15x are achieved for two real-life workloads. The performance gains result from overlapping demotions with other activities, fewer communication operations, and higher utilization of network bandwidth.
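The core idea can be illustrated with a minimal sketch: instead of issuing one synchronous DEMOTE per evicted block, the upper-level cache places victims into a buffer that is flushed to the lower level in batches, amortizing network round trips. This is an illustrative simplification, not the paper's implementation; the names `DemoteBuffer`, `UpperCache`, and `flush_fn` are hypothetical, and a real system would transmit batches asynchronously over the storage network to overlap with foreground I/O.

```python
from collections import OrderedDict

class DemoteBuffer:
    """Hypothetical sketch: collects blocks evicted from the upper-level
    cache so the lower level receives them in one batched message rather
    than one synchronous DEMOTE per block."""

    def __init__(self, capacity, flush_fn):
        self.capacity = capacity      # blocks buffered before a flush
        self.flush_fn = flush_fn      # delivers a batch to the lower level
        self.pending = []

    def demote(self, block_id, data):
        self.pending.append((block_id, data))
        if len(self.pending) >= self.capacity:
            self.flush()

    def flush(self):
        if self.pending:
            self.flush_fn(self.pending)   # one message for the whole batch
            self.pending = []

class UpperCache:
    """LRU upper-level cache whose evictions go through the demote buffer."""

    def __init__(self, capacity, demote_buffer):
        self.capacity = capacity
        self.buf = demote_buffer
        self.blocks = OrderedDict()

    def put(self, block_id, data):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            victim, vdata = self.blocks.popitem(last=False)  # LRU victim
            self.buf.demote(victim, vdata)  # buffered, not synchronous

# Example: 10 puts into a 2-block cache cause 8 evictions; with a
# 4-block demote buffer they reach the lower level in 2 batched messages.
batches = []
buf = DemoteBuffer(4, batches.append)
cache = UpperCache(2, buf)
for i in range(10):
    cache.put(i, b"data")
buf.flush()  # drain any remainder
```

In the synchronous scheme each of the 8 evictions would pay a full network round trip; here the same 8 blocks travel in 2 messages, which is the cost reduction the buffering mechanism targets.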