Amortized efficiency of list update and paging rules
Communications of the ACM
A theory of productivity in the creative process
IEEE Computer Graphics and Applications
Information Processing Letters
Learnability and the Vapnik-Chervonenkis dimension
Journal of the ACM (JACM)
Text compression
On the necessity of Occam algorithms
STOC '90 Proceedings of the twenty-second annual ACM symposium on Theory of computing
Competitive paging with locality of reference
STOC '91 Proceedings of the twenty-third annual ACM symposium on Theory of computing
Journal of Algorithms
Strongly competitive algorithms for paging with locality of reference
SODA '92 Proceedings of the third annual ACM-SIAM symposium on Discrete algorithms
Software support for speculative loads
ASPLOS V Proceedings of the fifth international conference on Architectural support for programming languages and operating systems
Reducing memory latency via non-blocking and prefetching caches
ASPLOS V Proceedings of the fifth international conference on Architectural support for programming languages and operating systems
Design and evaluation of a compiler algorithm for prefetching
ASPLOS V Proceedings of the fifth international conference on Architectural support for programming languages and operating systems
A status report on research in transparent informed prefetching
ACM SIGOPS Operating Systems Review
Analysis of arithmetic coding for data compression
Information Processing and Management: an International Journal - Special issue on data compression for images and texts
Practical prefetching via data compression
SIGMOD '93 Proceedings of the 1993 ACM SIGMOD international conference on Management of data
Arithmetic coding for data compression
Communications of the ACM
Optimal prediction for prefetching in the worst case
SODA '94 Proceedings of the fifth annual ACM-SIAM symposium on Discrete algorithms
Information Theory and Reliable Communication
Information Theory and Reliable Communication
Fido: A Cache That Learns to Fetch
VLDB '91 Proceedings of the 17th International Conference on Very Large Data Bases
On the computational complexity of approximating distributions by probabilistic automata
On the computational complexity of approximating distributions by probabilistic automata
Empirical investigation of the Markov reference model
Proceedings of the tenth annual ACM-SIAM symposium on Discrete algorithms
Performance modelling of speculative prefetching for compound requests in low bandwidth networks
WOWMOM '00 Proceedings of the 3rd ACM international workshop on Wireless mobile multimedia
Can entropy characterize performance of online algorithms?
SODA '01 Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms
FastSlim: prefetch-safe trace reduction for I/O cache simulation
ACM Transactions on Modeling and Computer Simulation (TOMACS)
LeZi-update: an information-theoretic framework for personal mobility tracking in PCS networks
Wireless Networks - Selected Papers from Mobicom'99
Markov model prediction of I/O requests for scientific applications
ICS '02 Proceedings of the 16th international conference on Supercomputing
Evaluating continuous nearest neighbor queries for streaming time series via pre-fetching
Proceedings of the eleventh international conference on Information and knowledge management
Computer Networks: The International Journal of Computer and Telecommunications Networking
Design and Implementation of a Predictive File Prefetching Algorithm
Proceedings of the General Track: 2002 USENIX Annual Technical Conference
Integrated prefetching and caching in single and parallel disk systems
Proceedings of the fifteenth annual ACM symposium on Parallel algorithms and architectures
Web-application centric object prefetching
Journal of Systems and Software
Location prediction algorithms for mobile wireless systems
Wireless internet handbook
Mobility-based anomaly detection in cellular mobile networks
Proceedings of the 3rd ACM workshop on Wireless security
Improving the performance of client web object retrieval
Journal of Systems and Software
STEP: Self-Tuning Energy-safe Predictors
Proceedings of the 6th international conference on Mobile data management
Integrated prefetching and caching in single and parallel disk systems
Information and Computation
Object prefetching using semantic links
ACM SIGMIS Database
Evaluating Next-Cell Predictors with Extensive Wi-Fi Mobility Data
IEEE Transactions on Mobile Computing
A data structure for a sequence of string accesses in external memory
ACM Transactions on Algorithms (TALG)
Entropy-based bounds for online algorithms
ACM Transactions on Algorithms (TALG)
Path and cache conscious prefetching (PCCP)
The VLDB Journal — The International Journal on Very Large Data Bases
Exploring the bounds of web latency reduction from caching and prefetching
USITS'97 Proceedings of the USENIX Symposium on Internet Technologies and Systems on USENIX Symposium on Internet Technologies and Systems
Optimal multistream sequential prefetching in a shared cache
ACM Transactions on Storage (TOS)
ACM Transactions on Storage (TOS)
Algorithms and data structures for external memory
Foundations and Trends® in Theoretical Computer Science
Prefetching with adaptive cache culling for striped disk arrays
ATC'08 USENIX 2008 Annual Technical Conference on Annual Technical Conference
Predicting future locations using prediction-by-partial-match
Proceedings of the first ACM international workshop on Mobile entity localization and tracking in GPS-less environments
Rethinking FTP: Aggressive block reordering for large file transfers
ACM Transactions on Storage (TOS)
On the bit-complexity of Lempel-Ziv compression
SODA '09 Proceedings of the twentieth Annual ACM-SIAM Symposium on Discrete Algorithms
Prefetching based on web usage mining
Proceedings of the ACM/IFIP/USENIX 2003 International Conference on Middleware
Prediction in wireless networks by Markov Chains
IEEE Wireless Communications
Integrated prefetching and caching in single and parallel disk systems
Information and Computation
Enhancing prediction accuracy in PCM-based file prefetch by constrained pattern replacement algorithm
ICCS'03 Proceedings of the 2003 international conference on Computational science
Universal reinforcement learning
IEEE Transactions on Information Theory
Reducing seek overhead with application-directed prefetching
USENIX'09 Proceedings of the 2009 conference on USENIX Annual technical conference
Performance evaluation of LZ-based location prediction algorithms in cellular networks
IEEE Communications Letters
Self-similarity: Behind workload reshaping and prediction
Future Generation Computer Systems
An efficient wireless resource allocation based on a data compressor predictor
ICCS'05 Proceedings of the 5th international conference on Computational Science - Volume Part II
Next place prediction using mobility Markov chains
Proceedings of the First Workshop on Measurement, Privacy, and Mobility
Data structures on event graphs
ESA'12 Proceedings of the 20th Annual European conference on Algorithms
Real-time integrated prefetching and caching
Journal of Scheduling
A cloud-powered driver-less printing system for smartphones
Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing
Caching and prefetching are important mechanisms for speeding up access time to data on secondary storage. Recent work in competitive online algorithms has uncovered several promising new algorithms for caching. In this paper, we apply a form of the competitive philosophy for the first time to the problem of prefetching, developing an optimal universal prefetcher in terms of fault rate, with particular applications to large-scale databases and hypertext systems. Our prediction algorithms for prefetching are novel in that they are based on data compression techniques that are both theoretically optimal and good in practice. Intuitively, in order to compress data effectively, one must be able to predict future data well, and thus a good data compressor should also predict well for the purposes of prefetching. We show for powerful models such as Markov sources and mth-order Markov sources that the page fault rate incurred by our prefetching algorithms is optimal in the limit for almost all sequences of page requests.
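The compression-as-prediction intuition in the abstract can be sketched with an LZ78-style trie predictor: the request stream is parsed into phrases, and the children of the current trie node, ranked by frequency, are the pages to prefetch next. This is an illustrative toy under our own assumptions, not the paper's exact algorithm; the class and method names are ours.

```python
class LZPredictor:
    """Toy LZ78-style page predictor: parse the page-request stream into
    phrases with a trie; at each trie node, predict the most frequently
    observed children as the next pages to prefetch."""

    def __init__(self):
        self.root = {}         # trie node: page -> [count, child-node dict]
        self.node = self.root  # current position in the trie

    def predict(self, k=1):
        """Return up to k candidate next pages, ranked by observed frequency."""
        ranked = sorted(self.node.items(), key=lambda kv: -kv[1][0])
        return [page for page, _ in ranked[:k]]

    def access(self, page):
        """Feed the next page request and advance the LZ78 parse."""
        if page in self.node:
            self.node[page][0] += 1        # reinforce this transition
            self.node = self.node[page][1]
        else:
            self.node[page] = [1, {}]      # new phrase ends at a fresh leaf
            self.node = self.root          # restart the parse at the root
```

On a cyclic stream such as a, b, c, a, b, c, ... the trie accumulates the repeating contexts, so from the node reached after an "a" the predictor proposes "b" — mirroring the claim that what compresses well predicts well.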