An object-aware memory architecture
Science of Computer Programming - Special issue on five perspectives on modern memory management: Systems, hardware and theory
Despite its dominance, object-oriented computation has received scant attention from the architecture community. We propose a novel memory architecture that supports objects and garbage collection (GC). Our architecture is co-designed with a Java Virtual Machine (JVM) to improve the functionality and efficiency of heap memory management. The architecture is based on an address space for objects that are accessed by object IDs, which a translator maps to physical addresses. To support this, the system includes object-addressed caches, a hardware GC barrier that enables in-cache GC of objects, and an exposed cache structure cooperatively managed by the JVM. These extend a conventional architecture without compromising compatibility or performance for legacy binaries. Our innovations enable several improvements: a novel technique for parallel and concurrent garbage collection that requires no global synchronization; an in-cache garbage collector that never accesses main memory; concurrent compaction of objects; and elimination of most GC store-barrier overhead. We compare the behavior of our system against that of a conventional generational garbage collector, both with and without an explicit allocate-in-cache operation. Explicit allocation eliminates many write misses; our scheme additionally trades L2 misses for in-cache operations and provides the mapping indirection required for concurrent compaction.
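The key enabler for concurrent compaction mentioned above is the level of indirection: references carry object IDs rather than physical addresses, so moving an object only requires updating the translator's mapping. The following is a minimal software sketch of that idea, not the paper's hardware design; the class and method names (`ObjectTranslator`, `relocate`) are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of object-ID indirection: a translator maps each
// object ID to its current physical address. Because mutators hold IDs,
// compaction only rewrites the table entry; no references need patching.
class ObjectTranslator {
    private final Map<Integer, Long> idToAddress = new HashMap<>();

    // Record the physical address at which an object was allocated.
    void map(int objectId, long physicalAddress) {
        idToAddress.put(objectId, physicalAddress);
    }

    // Resolve an object ID to its current physical address.
    long translate(int objectId) {
        Long addr = idToAddress.get(objectId);
        if (addr == null) {
            throw new IllegalStateException("unmapped object ID " + objectId);
        }
        return addr;
    }

    // Compaction step: after copying the object's bytes to newAddress,
    // remap its ID. In the paper's hardware setting this update is what
    // lets compaction proceed concurrently with mutator execution.
    void relocate(int objectId, long newAddress) {
        idToAddress.put(objectId, newAddress);
    }

    public static void main(String[] args) {
        ObjectTranslator t = new ObjectTranslator();
        t.map(42, 0x1000L);
        System.out.println(Long.toHexString(t.translate(42))); // prints "1000"
        t.relocate(42, 0x2000L); // object moved by the compactor
        System.out.println(Long.toHexString(t.translate(42))); // prints "2000"
    }
}
```

In the proposed architecture this table lookup is performed by hardware on cache misses, so the common case pays no software translation cost; the sketch only illustrates why a single mapping update suffices to move an object.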