Many papers on parallel random permutation algorithms assume the input size n to be a power of two and imply that these algorithms generalize easily to arbitrary n. We show that this simplifying assumption is not necessarily correct, since it may introduce a bias. Many of these algorithms are, however, consistent, i.e., iterating them converges to an unbiased permutation. We prove this convergence and show that it is exponentially fast. Furthermore, we analyze iteration applied to a butterfly permutation network, which works in-place and is well suited for implementation on many-core systems such as GPUs. We also present a method that improves the convergence speed even further and yields a practical implementation of the permutation network on current GPUs.
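The idea of a butterfly permutation network and of iterating it can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a pass of log-many stages in which each element is paired with the partner obtained by flipping one index bit and the pair is swapped with probability 1/2; partners that fall outside the array (when n is not a power of two) are simply left in place, which is exactly the naive generalization that introduces a bias. The function names `butterfly_pass` and `iterated_butterfly` are hypothetical.

```python
import random

def butterfly_pass(a, rng=random):
    """One pass of a butterfly-style permutation network (sketch).

    At stage d, element i is paired with its partner i ^ (1 << d),
    and the pair is swapped with probability 1/2. Partners outside
    the array (possible when len(a) is not a power of two) are left
    in place -- the naive generalization discussed in the abstract,
    which on its own is biased.
    """
    n = len(a)
    d = 0
    while (1 << d) < n:
        stride = 1 << d
        for i in range(n):
            j = i ^ stride
            # i < j ensures each pair is handled once; j < n skips
            # out-of-range partners for non-power-of-two n.
            if i < j < n and rng.random() < 0.5:
                a[i], a[j] = a[j], a[i]
        d += 1
    return a

def iterated_butterfly(a, passes, rng=random):
    """Iterate the network; per the abstract, the output distribution
    converges (exponentially fast) toward the uniform one."""
    for _ in range(passes):
        butterfly_pass(a, rng)
    return a
```

For instance, a single pass on n = 3 can only produce 4 of the 6 possible permutations, so one pass is biased, while repeated passes spread probability mass over all permutations; each pass touches every element in place, which is what makes the scheme attractive for GPUs.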