Data compression

  • Authors: David Salomon
  • Affiliations: Computer Science Department, California State University, Northridge, CA
  • Venue: Handbook of massive data sets
  • Year: 2002

Abstract

The exponential growth of computer applications in the last three decades of the 20th century has resulted in an explosive growth in the amounts of data moved between computers, collected, and stored by computer users. This growth, in turn, has created the field of data compression. Practically unknown in the 1960s, this discipline has now come of age. It is based on information theory, and has proved its value by providing fast, sophisticated methods capable of high compression ratios.

This chapter tries to achieve two purposes. Its main aim is to present the principles of compressing different types of data, such as text, images, and sound. Its secondary goal is to outline the principles of the most important compression algorithms. The main sections discuss statistical compression methods, dictionary-based methods, and methods for the compression of still images, video, and audio data. In addition, there is a short section devoted to wavelet methods, since these seem to hold much promise for the future.
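As an illustration of the statistical compression methods the abstract mentions, the following is a minimal Huffman-coding sketch in Python. It is not code from the chapter; the function and variable names are the author's own. It assigns shorter bit strings to more frequent symbols, the core idea behind statistical methods.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table (symbol -> bit string) for `text`."""
    freq = Counter(text)
    # Heap entries: (frequency, tie-breaker id, tree). A tree is either
    # a symbol (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least-frequent subtrees.
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next_id, (t1, t2)))
        next_id += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"  # edge case: single-symbol input
    walk(heap[0][2], "")
    return codes

text = "abracadabra"
codes = huffman_codes(text)
encoded = "".join(codes[c] for c in text)
# Frequent symbols ('a') receive shorter codes than rare ones ('c', 'd'),
# so the encoded bit string is much shorter than 8 bits per character.
```

Because no code word is a prefix of another, the bit string can be decoded unambiguously; dictionary-based methods such as the LZ family, also covered in the chapter, achieve compression by a different route, replacing repeated substrings with references to earlier occurrences.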