I/O prefetching serves to hide the latency of slow peripheral devices. Traditional OS-level prefetching strategies have tended to be conservative, fetching only those data that are very likely to be needed according to some simple heuristic, and only just in time for them to arrive before the first access. More aggressive policies, which might speculate more about which data to fetch, or fetch them earlier in time, have typically not been considered a prudent use of computational, memory, or bandwidth resources. We argue, however, that technological trends and emerging system design goals have dramatically reduced the potential costs and dramatically increased the potential benefits of highly aggressive prefetching policies. We propose that memory management be redesigned to embrace such policies.
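To make the contrast between conservative and aggressive policies concrete, the following Python sketch (illustrative only, not the paper's mechanism; the window sizes and block trace are assumptions) shows a simple sequential readahead heuristic whose conservative and aggressive variants differ only in how far ahead they speculate once a sequential pattern is detected.

# A minimal, hypothetical sketch (not from the paper): a sequential
# readahead heuristic parameterized by how far ahead it speculates.
def readahead(prev_block, cur_block, window):
    # Prefetch only when the access looks sequential; otherwise fetch nothing.
    if prev_block is not None and cur_block == prev_block + 1:
        return list(range(cur_block + 1, cur_block + 1 + window))
    return []

CONSERVATIVE_WINDOW = 2   # fetch little, just in time for the next access
AGGRESSIVE_WINDOW = 32    # speculate far ahead, assuming memory and bandwidth are cheap

trace = [10, 11, 12, 40, 41]          # hypothetical block-access trace
prev = None
for blk in trace:
    conservative = readahead(prev, blk, CONSERVATIVE_WINDOW)
    aggressive = readahead(prev, blk, AGGRESSIVE_WINDOW)
    print(f"block {blk}: conservative fetches {len(conservative)}, "
          f"aggressive fetches {len(aggressive)}")
    prev = blk

In this toy setting the conservative window fetches just enough to stay ahead of the next access, while the aggressive window bets that spare memory and disk bandwidth make far-ahead speculation cheap relative to the latency it hides.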