Data access prediction has been proposed as a mechanism to overcome latency lag and, more recently, as a means of conserving energy in mobile systems. We present a fully adaptive predictor that can optimize itself for any arbitrary workload while offering simple adjustment of its goals between energy conservation and latency reduction. Our algorithm, STEP, achieves power savings on mobile computers by eliminating data fetches that would otherwise consume excess energy accessing local storage devices or fetching remote data over the wireless interface. We have demonstrated that our algorithm performs as well as some of the best access predictors, while incurring almost none of the increase in I/O workload typical of their use. Our algorithm reduced average response times by approximately 50% compared to an LRU cache, while requiring less than half the I/O operations that traditional predictors would need to achieve the same performance, thereby incurring no energy penalty.
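The abstract does not spell out STEP's internals, but the trade-off it describes — suppressing low-confidence prefetches to avoid wasted, energy-costly I/O while keeping high-confidence ones for latency reduction — can be illustrated with a minimal sketch. All names here are hypothetical, not the authors' implementation: a last-successor predictor gated by a tunable accuracy threshold, where a higher threshold favors energy conservation and a lower one favors latency reduction.

```python
from collections import defaultdict

class ThresholdedSuccessorPredictor:
    """Hypothetical sketch (not STEP itself): a last-successor file
    access predictor that only issues a prefetch when its observed
    per-file accuracy clears a tunable threshold."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold      # goal knob: raise to save energy
        self.successor = {}             # last observed successor per file
        self.hits = defaultdict(int)    # correct predictions per file
        self.trials = defaultdict(int)  # total predictions per file
        self.prev = None                # previously accessed file

    def access(self, name):
        """Record an access; return a file worth prefetching, or None."""
        if self.prev is not None:
            # Score the prediction we implicitly made last time.
            guess = self.successor.get(self.prev)
            if guess is not None:
                self.trials[self.prev] += 1
                if guess == name:
                    self.hits[self.prev] += 1
            self.successor[self.prev] = name
        self.prev = name

        # Only prefetch when this file's track record clears the bar.
        nxt = self.successor.get(name)
        if nxt is None:
            return None
        t = self.trials[name]
        accuracy = self.hits[name] / t if t else 0.0
        return nxt if accuracy >= self.threshold else None
```

With an alternating trace `a, b, a, b, a`, the predictor stays silent until each successor link has proven itself, then recommends prefetching `b` after `a`; setting `threshold=1.0` would keep it silent even longer, mimicking an energy-biased goal setting.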