Fast allocation and deallocation of memory based on object lifetimes. Software—Practice & Experience.
Region-based memory management. Information and Computation.
Memory management with explicit regions. PLDI '98: Proceedings of the ACM SIGPLAN 1998 Conference on Programming Language Design and Implementation.
Hoard: a scalable memory allocator for multithreaded applications. ASPLOS IX: Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems.
Region-based memory management in Cyclone. PLDI '02: Proceedings of the ACM SIGPLAN 2002 Conference on Programming Language Design and Implementation.
Scalable locality-conscious multithreaded memory allocation. Proceedings of the 5th International Symposium on Memory Management.
A type and effect system for deterministic parallel Java. Proceedings of the 24th ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications.
Race-free and memory-safe multithreading: design and implementation in Cyclone. Proceedings of the 5th ACM SIGPLAN Workshop on Types in Language Design and Implementation.
Distributed transactional memory for metric-space networks. DISC '05: Proceedings of the 19th International Conference on Distributed Computing.
The Myrmics memory allocator: hierarchical, message-passing allocation for global address spaces. Proceedings of the 2012 International Symposium on Memory Management.
Legion: expressing locality and independence with logical regions. SC '12: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis.
We present DRASync, a region-based allocator that implements a global address space abstraction for MPI programs with pointer-based data structures. The main features of DRASync are: (a) it amortizes communication among nodes to allow efficient parallel allocation in the global address space; (b) it exploits bulk deallocation and the good locality of pointer-based data structures; and (c) it supports ownership of regions by nodes, with semantics akin to reader-writer locks, which provides a high-level, intuitive synchronization tool for MPI programs without sacrificing message-passing performance. We evaluate DRASync against a state-of-the-art distributed allocator and find that it delivers comparable performance while offering programmers a higher-level abstraction.