Markov chains, computer proofs, and average-case analysis of best fit bin packing
STOC '93 Proceedings of the twenty-fifth annual ACM symposium on Theory of computing
Choosing a reliable hypothesis
COLT '93 Proceedings of the sixth annual conference on Computational learning theory
Practical prefetching via data compression
SIGMOD '93 Proceedings of the 1993 ACM SIGMOD international conference on Management of data
Analysis of branch prediction via data compression
Proceedings of the seventh international conference on Architectural support for programming languages and operating systems
Minimizing stall time in single and parallel disk systems
STOC '98 Proceedings of the thirtieth annual ACM symposium on Theory of computing
Branch prediction based on universal data compression algorithms
Proceedings of the 25th annual international symposium on Computer architecture
Investigation of a prefetch model for low bandwidth networks
WOWMOM '98 Proceedings of the 1st ACM international workshop on Wireless mobile multimedia
Web prefetching between low-bandwidth clients and proxies: potential and performance
SIGMETRICS '99 Proceedings of the 1999 ACM SIGMETRICS international conference on Measurement and modeling of computer systems
LeZi-update: an information-theoretic approach to track mobile users in PCS networks
MobiCom '99 Proceedings of the 5th annual ACM/IEEE international conference on Mobile computing and networking
Optimal prediction for prefetching in the worst case
SODA '94 Proceedings of the fifth annual ACM-SIAM symposium on Discrete algorithms
A cost-benefit scheme for high performance predictive prefetching
SC '99 Proceedings of the 1999 ACM/IEEE conference on Supercomputing
Minimizing stall time in single and parallel disk systems
Journal of the ACM (JACM)
Performance Optimization Problem in Speculative Prefetching
IEEE Transactions on Parallel and Distributed Systems
Effect of Speculative Prefetching on Network Load in Distributed Systems
IPDPS '01 Proceedings of the 15th International Parallel & Distributed Processing Symposium
Program Modelling via Inter-Reference Gaps and Applications
MASCOTS '95 Proceedings of the 3rd International Workshop on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems
A Decoupled Architecture for Application-Specific File Prefetching
Proceedings of the FREENIX Track: 2002 USENIX Annual Technical Conference
On-line Decision Making for a Class of Loss Functions via Lempel-Ziv Parsing
DCC '00 Proceedings of the Conference on Data Compression
Using Multiple Predictors to Improve the Accuracy of File Access Predictions
MSS '03 Proceedings of the 20th IEEE/11th NASA Goddard Conference on Mass Storage Systems and Technologies (MSS'03)
Fast pattern matching for entropy bounded text
DCC '95 Proceedings of the Conference on Data Compression
Mining block correlations to improve storage performance
ACM Transactions on Storage (TOS)
C-Miner: Mining Block Correlations in Storage Systems
FAST '04 Proceedings of the 3rd USENIX Conference on File and Storage Technologies
Predicting file system actions from prior events
ATEC '96 Proceedings of the 1996 annual conference on USENIX Annual Technical Conference
A form of the competitive philosophy is applied to the problem of prefetching to develop an optimal universal prefetcher in terms of fault rate, with particular applications to large-scale databases and hypertext systems. The algorithms are novel in that they are based on data compression techniques that are both theoretically optimal and good in practice. Intuitively, to compress data effectively one must be able to predict future data well, so a good data compressor should also serve as a good predictor for the purposes of prefetching. It is shown for powerful models such as Markov sources and mth-order Markov sources that the page fault rates incurred by the prefetching algorithms presented are optimal in the limit for almost all sequences of page accesses.
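The idea that a good predictor yields a good prefetcher can be illustrated with a much simpler scheme than the compression-based algorithms the abstract refers to. The sketch below is an illustrative first-order Markov predictor, not the paper's Lempel-Ziv-based method: after each page access it prefetches the k pages that most often followed the current page so far, and counts a fault whenever the next access was not among them.

```python
from collections import Counter, defaultdict

def markov_prefetch_fault_rate(accesses, k=1):
    """Illustrative sketch only: a first-order Markov predictor, not the
    compression-based algorithms of the paper. After seeing page p, we
    prefetch the k pages that have most often followed p so far; a page
    fault is charged whenever the next access is not among them."""
    successors = defaultdict(Counter)  # page -> counts of pages seen next
    faults = 0
    prev = None
    for page in accesses:
        if prev is None:
            faults += 1  # nothing is known before the first access
        else:
            predicted = [p for p, _ in successors[prev].most_common(k)]
            if page not in predicted:
                faults += 1
            successors[prev][page] += 1  # learn the observed transition
        prev = page
    return faults / len(accesses)
```

On a periodic access sequence the predictor faults only while learning the cycle, so the fault rate tends to zero as the sequence grows, matching the intuition that predictable (i.e., compressible) sequences admit low fault rates.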