This paper presents a new and efficient data compression algorithm, the adaptive character wordlength (ACW) algorithm, which can be used as a complementary algorithm to statistical compression techniques. In such techniques, the characters in the source file are converted to binary codes, where the most common characters in the file receive the shortest codes and the least common the longest; the codes are generated from the estimated probability of each character within the file. The binary-coded file is then stored using an 8-bit character wordlength. In the new algorithm, an optimum character wordlength, b, is calculated, where b > 8, so that the compression ratio is increased by a factor of b/8. To validate the algorithm, it is used as a complement to Huffman coding to compress a source file containing 10 characters with different probabilities, randomly distributed within the file. The results obtained and the factors that affect the optimum value of b are discussed, and conclusions are presented.
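To make the wordlength idea concrete, the following is a minimal Python sketch, not the authors' published ACW procedure: it Huffman-encodes a short string and then searches for the largest block size b whose distinct b-bit blocks would still fit a 256-entry, one-byte dictionary, which is where the b/8 gain comes from. The function names, the search bound max_b, and the 256-entry check are illustrative assumptions; tail-block padding and dictionary storage are omitted.

```python
from collections import Counter
import heapq

def huffman_codes(text):
    """Build a Huffman code table (char -> bit string) for text."""
    freq = Counter(text)
    # Heap entries: (frequency, tiebreak, {char: code}); tiebreak keeps
    # tuple comparison away from the dicts.
    heap = [(f, i, {c: ""}) for i, (c, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    if len(heap) == 1:  # degenerate single-symbol input
        return {c: "0" for c in heap[0][2]}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {c: "0" + code for c, code in t1.items()}
        merged.update({c: "1" + code for c, code in t2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def optimum_wordlength(bitstream, max_b=16):
    """Largest b > 8 such that the distinct b-bit blocks of the Huffman
    bitstream could each be replaced by a single byte (<= 256 blocks)."""
    best = 8  # 8-bit packing is the no-gain baseline
    for b in range(9, max_b + 1):
        blocks = {bitstream[i:i + b] for i in range(0, len(bitstream), b)}
        if len(blocks) <= 256:  # one-byte codeword per distinct block
            best = b
    return best

text = "this is a short source file used only as a demonstration"
codes = huffman_codes(text)
bits = "".join(codes[c] for c in text)
b = optimum_wordlength(bits)
print(f"optimum b = {b}, extra compression factor = {b / 8:.2f}")
```

On a toy input like this almost any b passes the 256-block check, so the search simply returns max_b; for realistic files the number of distinct b-bit blocks grows quickly with b, which is what bounds the optimum value of b discussed in the paper.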