High Density Compression of Log Files
DCC '04 Proceedings of the Conference on Data Compression
The growing computational and storage needs of several scientific applications mandate the deployment of extreme-scale parallel machines, such as IBM's Blue Gene/L, which can accommodate as many as 128K processors. One of the biggest challenges these systems face in production environments is managing the system logs they generate. A large amount of log data is created over extended periods of time and across thousands of processors. These logs can be voluminous because of their large temporal and spatial dimensions, and because they contain records that are repeatedly entered into the log archive. Storing and transferring such large amounts of log data is a challenging problem, and commonly used generic compression utilities are not optimal for it given a number of performance requirements. In this paper we propose a compression algorithm that preprocesses these logs before applying any standard compression utility. The combination shows a 28.3% improvement in compression ratio and a 43.4% improvement in compression time on average over different generic compression utilities. The test data is log data produced by the 64-rack, 65,536-processor Blue Gene/L installation at Lawrence Livermore National Laboratory.
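The core idea, preprocessing away the redundancy of repeatedly entered records before handing the data to a generic compressor, can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the `preprocess` function, the `@x<count>` repeat marker, and the use of `zlib` as the generic back-end compressor are all illustrative assumptions.

```python
import zlib

def preprocess(lines):
    """Collapse consecutive duplicate log records into one record plus a
    hypothetical '@x<count>' repeat marker. A stand-in for the paper's
    log-specific preprocessing, which is more elaborate."""
    out = []
    prev, count = None, 0
    for line in lines:
        if line == prev:
            count += 1
        else:
            if count > 1:
                out.append(f"@x{count}")  # how many times prev repeated
            prev, count = line, 1
            out.append(line)
    if count > 1:
        out.append(f"@x{count}")
    return out

# Synthetic log stream with heavy record repetition, as on a large machine.
logs = ["ERROR node r01 link down"] * 500 + ["INFO heartbeat ok"] * 300
raw = "\n".join(logs).encode()
pre = "\n".join(preprocess(logs)).encode()

plain_size = len(zlib.compress(raw))       # generic compressor alone
combined_size = len(zlib.compress(pre))    # preprocess, then compress
```

On such repetitive input, `combined_size` comes out smaller than `plain_size`: the preprocessing removes redundancy across records that the generic compressor would otherwise spend its window and tokens re-encoding.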