As Linux runs an increasing variety of workloads, its in-kernel readahead algorithm has been challenged by many unexpected and subtle problems. To name a few: readahead thrashing arises when readahead pages are evicted prematurely under memory pressure; readahead attempts on already-cached pages waste work; and interrupted-then-retried reads, as well as locally disordered NFS reads, can easily fool the sequential detection logic. In this paper, we present a new Linux readahead framework with flexible and robust heuristics that cover varied sequential I/O patterns. It also remains simple by handling most abnormal cases implicitly. We demonstrate its advantages through a host of case studies: network throughput is 3 times higher under readahead thrashing and 1.8 times higher for large NFS files, and when serving large files with lighttpd, disk utilization drops by 26% while network throughput increases by 17%.