The design and implementation of efficient aggregate data structures is a long-standing issue in functional programming. It is not clear how to select a good representation for an aggregate when access patterns to the aggregate are highly variable, or even unpredictable. Previous approaches rely on compile-time analyses or programmer annotations; these methods can be unreliable because they try to predict a program's behavior before it is executed.

We propose a probabilistic approach, based on Markov processes, for the automatic selection of data representations. The selection is modeled as a random process moving in a graph with weighted edges. The approach employs coin tossing at run time to choose a suitable data representation; the transition probability function used by the coin tossing is constructed in a simple, uniform way from a measured cost function. We show that, under this setting, random selection of data representations can be quite effective. We use the probabilistic approach to implement bag aggregates and compare its performance against that of deterministic selection strategies.
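To make the idea concrete, here is a minimal sketch of run-time, coin-tossing representation selection for a bag aggregate. It is not the paper's implementation: the two candidate representations (an unsorted list with cheap inserts and a sorted list with cheap membership tests), the cost table, and the inverse-cost transition rule are all illustrative assumptions standing in for the measured cost function described above.

```python
import bisect
import random

class AdaptiveBag:
    # Assumed per-operation costs for each representation (hypothetical
    # stand-in for a measured cost function).
    COST = {
        "unsorted": {"insert": 1.0, "member": 8.0},
        "sorted":   {"insert": 8.0, "member": 1.0},
    }

    def __init__(self, seed=0):
        self.rep = "unsorted"
        self.items = []
        self.rng = random.Random(seed)

    def _maybe_switch(self, op):
        # Transition probabilities derived from the cost function: a
        # representation's weight is inversely proportional to its cost
        # for the requested operation, so cheaper representations are
        # chosen more often. The choice itself is a coin toss.
        weights = [(r, 1.0 / self.COST[r][op]) for r in self.COST]
        total = sum(w for _, w in weights)
        toss = self.rng.random() * total
        for r, w in weights:
            toss -= w
            if toss <= 0:
                break
        if r != self.rep:
            if r == "sorted":       # convert on a representation change
                self.items.sort()
            self.rep = r

    def insert(self, v):
        self._maybe_switch("insert")
        if self.rep == "sorted":
            bisect.insort(self.items, v)
        else:
            self.items.append(v)

    def member(self, v):
        self._maybe_switch("member")
        if self.rep == "sorted":
            i = bisect.bisect_left(self.items, v)
            return i < len(self.items) and self.items[i] == v
        return v in self.items

bag = AdaptiveBag(seed=42)
for x in [3, 1, 4, 1, 5]:
    bag.insert(x)
print(bag.member(4), bag.member(9))
```

Because every operation may trigger a switch, the bag drifts toward whichever representation the recent access pattern favors, without any compile-time prediction; a real implementation would amortize the conversion cost rather than sorting eagerly on each switch.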