Scalable parallel coset enumeration using bulk definition

  • Authors:
  • Gene Cooperman; Victor Grinberg

  • Affiliations:
  • Northeastern Univ., Boston, MA; Northeastern Univ., Boston, MA

  • Venue:
  • Proceedings of the 2001 International Symposium on Symbolic and Algebraic Computation
  • Year:
  • 2001

Abstract

Several researchers have worked on parallel coset enumeration strategies using shared memory. This is important not only for speed, but also because the large memory requirements of coset enumeration often make memory the dominant cost, and that cost can be reduced by using many CPUs to shorten the time during which the memory must be “rented”. We take as our testbed an enumeration of Lyons's group (approximately 8.87 million cosets), one of the largest coset enumerations carried out in the literature. Previous enumerations of this group do not appear to scale well as the number of processors increases, with reported speedups of roughly a factor of 2 on 4 processors and a factor of 4 on 16 processors. By using what we call bulk definition of cosets, we achieve nearly linear speedup of the parallel portion of our program. This result depends on two new heuristics for bulk coset definition, clouds and prescan, and on a theorem showing that, when parallelized bulk coset definition is used, the enumeration, including the order in which cosets are defined, is independent of the number of processors. A total computation of 514 min. (about 8.6 hours) = 473 min. (clouds phase) + 41 min. (prescan phase) is reduced using 32 processors to 23 min. (clouds, parallel) + 50 min. (clouds, sequential) + 41 min. (prescan). Parallel timings are presented for the clouds phase; the prescan phase will be parallelized at a later date. The parallelization of our coset enumeration software was achieved using TOP-C. Some of these ideas may also be useful in parallelizing related algorithms, such as Gröbner basis computation and Knuth-Bendix completion.
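
As a quick sanity check on the timings quoted above, the following is a back-of-the-envelope reading of the abstract's numbers, not a figure reported by the authors; the Amdahl-style ceiling assumes the prescan phase and the sequential clouds portion remain serial at any processor count:

\[
T_{\text{seq}} = 473 + 41 = 514 \text{ min} \approx 8.6 \text{ h}, \qquad
T_{32} = 23 + 50 + 41 = 114 \text{ min},
\]
\[
\text{overall speedup} \approx \frac{514}{114} \approx 4.5, \qquad
\text{Amdahl-style ceiling} \approx \frac{514}{50 + 41} \approx 5.6 .
\]

On this reading, the remaining serial work (50 min. of clouds plus 41 min. of prescan) dominates the 32-processor running time, which is why the overall speedup stays well below the near-linear speedup reported for the parallel portion alone.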