High-performance operating system controlled online memory compression
ACM Transactions on Embedded Computing Systems (TECS)
This article describes a new software-based online memory compression algorithm for embedded systems and presents a method for adaptively managing the compressed and uncompressed memory regions during application execution. The primary goal of this work is to save memory in diskless embedded systems, enabling greater functionality, smaller size, and lower overall cost without modifying applications or hardware. Compared with algorithms commonly used in online memory compression, the new algorithm has a comparable compression ratio but is twice as fast. The adaptive memory management scheme responds effectively to the predicted memory needs of applications and prevents online memory compression deadlock, permitting reliable and efficient compression for a wide range of applications. We have evaluated the technique on an embedded portable device and found that the memory available to applications can be increased by 150%, allowing applications with larger working data sets to execute, or allowing existing applications to run with less physical memory.
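To make the deadlock-avoidance idea concrete, the sketch below simulates an adaptive manager that refuses to compress a page when doing so would exhaust a reserved slack in the compressed region. This is an illustrative toy only: the paper's fast compression algorithm is not reproduced (a trivial run-length encoder stands in for it), and all names, sizes, and thresholds (`rle_compress`, `mem_mgr`, `PAGE_SIZE`, the reserve value) are assumptions for the example, not the authors' design.

```c
/* Toy sketch of compression-deadlock avoidance in an online memory
 * compressor. Assumption: a simple RLE coder stands in for the paper's
 * algorithm; a page that would expand is stored raw at PAGE_SIZE. */
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 64u   /* toy page size; real systems use e.g. 4 KiB */

/* Compress one page; returns the compressed size in bytes,
 * or PAGE_SIZE if the page is incompressible (caller stores it raw). */
size_t rle_compress(const uint8_t *src, uint8_t *dst) {
    size_t out = 0;
    for (size_t i = 0; i < PAGE_SIZE; ) {
        uint8_t run = 1;
        while (i + run < PAGE_SIZE && run < 255 && src[i + run] == src[i])
            run++;
        if (out + 2 > PAGE_SIZE)
            return PAGE_SIZE;          /* expanding: give up, store raw */
        dst[out++] = run;              /* (run, value) pair */
        dst[out++] = src[i];
        i += run;
    }
    return out;
}

/* Adaptive region manager. A reserve of free bytes is always kept in
 * the compressed region so that a burst of incompressible pages can
 * never leave the compressor without room to proceed — the illustrative
 * analogue of preventing online memory compression deadlock. */
typedef struct {
    size_t compressed_bytes;  /* bytes in use in the compressed region */
    size_t region_capacity;   /* current size of the compressed region */
    size_t reserve;           /* slack kept free to avoid deadlock     */
} mem_mgr;

/* Try to move one page into the compressed region.
 * Returns 1 on success, 0 if admitting it would eat into the reserve. */
int try_compress_page(mem_mgr *m, const uint8_t *page) {
    uint8_t buf[PAGE_SIZE];
    size_t csize = rle_compress(page, buf);
    if (csize > PAGE_SIZE)
        csize = PAGE_SIZE;             /* stored raw */
    if (m->compressed_bytes + csize + m->reserve > m->region_capacity)
        return 0;                      /* refuse rather than risk deadlock */
    m->compressed_bytes += csize;
    return 1;
}
```

A highly compressible page (e.g. all zeros) costs only a few bytes and is admitted; an incompressible page costs a full `PAGE_SIZE` and is rejected once admitting it would cut into the reserve, so the system degrades gracefully instead of wedging.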