Using Eager Strategies to Improve NFS I/O Performance

  • Authors:
  • Stephen Rago, Aniruddha Bohra, Cristian Ungureanu


  • Venue:
  • NAS '11: Proceedings of the 2011 IEEE Sixth International Conference on Networking, Architecture, and Storage
  • Year:
  • 2011

Abstract

Typical NFS clients write lazily: they leave dirty pages in the page cache and defer writing them to the server until later. This reduces network traffic when applications repeatedly modify the same set of pages. However, it can also lead to memory pressure, a state in which so few free pages remain on the client that the system must work harder to reclaim dirty pages, and system performance suffers. We show examples of this problem and present two mechanisms to solve it: eager write-back and eager page laundering. These mechanisms change the client's data management policy from lazy to eager, resulting in higher throughput for sequential writes. In addition, we show that NFS servers suffer from out-of-order file operations, which further reduce performance. We introduce request ordering, a server mechanism that processes operations, as far as possible, in the order they were sent by the client, which substantially improves read performance. We have implemented these techniques in the Linux operating system. I/O performance is improved, with the most pronounced gains for sequential access to large files: streaming write throughput improves by about 33%, and streaming read throughput more than triples. We evaluate several nonsequential workloads and show that these techniques do not degrade performance and can sometimes improve it. We also design and evaluate an adversarial workload to show that the eager policies can perform worse in some pathological cases.
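
The eager write-back idea can be illustrated from user space. The sketch below is not the paper's in-kernel NFS client mechanism; it is a minimal analogue that uses Linux's sync_file_range() to start writeback of each chunk as soon as it is written, instead of letting dirty pages accumulate until memory pressure forces reclaim. The file path, chunk size, and fill pattern are illustrative assumptions.

```c
/* Userspace analogue of an eager write-back policy (illustrative only):
 * kick off asynchronous writeback for each chunk right after writing it,
 * rather than leaving dirty pages in the cache until a later flush. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (1 << 20)          /* write and flush in 1 MiB units */

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <file> <MiB>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc(CHUNK);
    memset(buf, 'x', CHUNK);

    long mib = atol(argv[2]);
    for (long i = 0; i < mib; i++) {
        if (write(fd, buf, CHUNK) != CHUNK) { perror("write"); return 1; }
        /* Eager policy: start writeback of this chunk immediately instead of
         * waiting for memory pressure or a later flush/close. */
        sync_file_range(fd, (off_t)i * CHUNK, CHUNK, SYNC_FILE_RANGE_WRITE);
    }
    /* Wait for all previously initiated writeback before exiting. */
    sync_file_range(fd, 0, 0,
                    SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE |
                    SYNC_FILE_RANGE_WAIT_AFTER);
    free(buf);
    close(fd);
    return 0;
}
```

Because SYNC_FILE_RANGE_WRITE only initiates writeback and does not block, the loop overlaps network transfer with further writes, which is the spirit of the eager policy: the client never builds up a large backlog of dirty pages that must be laundered under memory pressure.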