Introduction to Data Compression, Fourth Edition (The Morgan Kaufmann Series in Multimedia Information and Systems)
Format: PDF / Kindle (mobi) / ePub
Each edition of Introduction to Data Compression has been widely considered the best introduction and reference text on the art and science of data compression, and the fourth edition continues in this tradition. Data compression techniques and technology are ever-evolving, with new applications in image, speech, text, audio, and video. The fourth edition includes all the cutting-edge updates the reader will need at work and in the classroom.
Khalid Sayood provides an extensive introduction to the theory underlying today’s compression techniques, with detailed instruction for their application and several worked examples to explain the concepts. Encompassing the entire field of data compression, Introduction to Data Compression covers lossless and lossy compression, Huffman coding, arithmetic coding, dictionary techniques, context-based compression, and scalar and vector quantization. Sayood provides a working knowledge of data compression, giving readers the tools to develop a complete and concise compression package on completing the book.
- New content added to include a more detailed description of the JPEG 2000 standard
- New content includes speech coding for internet applications
- Explains established and emerging standards in depth, including JPEG 2000, JPEG-LS, MPEG-2, H.264, JBIG2, ADPCM, LPC, CELP, MELP, and iLBC
- Source code provided via a companion website gives readers the opportunity to build their own algorithms and to choose and implement techniques in their own applications
an arbitrary iid source. These functions and bounds are especially useful when we want to know whether it is possible to design compression schemes that provide a specified rate and distortion for a particular source. They are also useful in determining how much performance improvement we could obtain by designing a better compression scheme. In these ways, the rate distortion function plays the same role for lossy compression that entropy plays for lossless compression.

8.6 Models

As
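As a concrete instance (not taken from this excerpt), the standard closed-form rate distortion function for an iid Gaussian source with variance σ² under squared-error distortion is R(D) = ½ log₂(σ²/D) for 0 < D ≤ σ², and 0 otherwise. A minimal sketch:

```python
import math

def gaussian_rate_distortion(variance: float, distortion: float) -> float:
    """R(D) for an iid Gaussian source under squared-error distortion:
    R(D) = 0.5 * log2(variance / D) for D <= variance, else 0.
    This is the standard textbook result, used here purely as an example."""
    if distortion >= variance:
        return 0.0
    return 0.5 * math.log2(variance / distortion)

# Halving the distortion target costs an extra half bit per sample:
r1 = gaussian_rate_distortion(1.0, 0.25)   # 1.0 bit/sample
r2 = gaussian_rate_distortion(1.0, 0.125)  # 1.5 bits/sample
```

No practical coder achieves R(D) exactly; the function's value is as a benchmark, just as entropy bounds lossless coders.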
therefore, the total number of bits required to represent this sequence is 30. Now let’s take the same sequence and look at it in blocks of two. Obviously, there are only two block symbols, 1 2 and 3 3. The probabilities are P(1 2) = P(3 3) = 1/2, and the entropy is 1 bit/symbol. As there are 10 such block symbols in the sequence, we need a total of 10 bits to represent the entire sequence, a reduction by a factor of three. The theory says we can always extract the structure of the data by taking larger and larger block sizes;
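The calculation above can be checked directly. The exact sequence is not reproduced in this excerpt, so the sketch below constructs a 20-symbol sequence with the same statistics (the blocks "1 2" and "3 3" each occurring with probability 1/2):

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """First-order entropy in bits/symbol, estimated from relative frequencies."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A 20-symbol sequence matching the text's statistics (assumed, not quoted):
seq = [1, 2, 3, 3] * 5                    # 20 symbols, entropy 1.5 bits/symbol
blocks = list(zip(seq[::2], seq[1::2]))   # 10 two-symbol blocks, entropy 1 bit/symbol

bits_single = entropy(seq) * len(seq)        # 30.0 bits coding symbol by symbol
bits_blocks = entropy(blocks) * len(blocks)  # 10.0 bits coding in blocks of two
```

The block view exposes the structure (2 always follows 1, and 3s come in pairs) that the first-order entropy cannot see.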
long periods of stationary signals, they also generally contain a significant amount of transient signals. The AAC algorithm makes clever use of the time-frequency duality to handle this situation. The standard contains two kinds of predictors: an intrablock predictor, referred to as Temporal Noise Shaping (TNS), and an interblock predictor. The interblock predictor is used during stationary periods, when it is reasonable to assume that the coefficients at a certain frequency do
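The idea behind interblock prediction can be sketched with a toy fixed first-order predictor. This is illustrative only; the actual AAC interblock predictor is backward-adaptive, and the coefficient value 0.9 here is an arbitrary assumption:

```python
def interblock_predict(blocks, a=0.9):
    """Toy interblock (across-time) predictor: for each frequency bin,
    predict the current block's spectral coefficient as a * (previous
    block's coefficient) and keep only the residual for coding.
    Not the AAC scheme, which adapts its predictor coefficients."""
    residuals = []
    prev = [0.0] * len(blocks[0])
    for block in blocks:
        residuals.append([x - a * p for x, p in zip(block, prev)])
        prev = block
    return residuals

# During a stationary passage, coefficients barely change between blocks,
# so the residuals are much smaller than the coefficients themselves:
blocks = [[10.0, 4.0, 1.0], [10.1, 3.9, 1.0], [9.9, 4.1, 1.1]]
res = interblock_predict(blocks)
```

Small residuals are cheaper to quantize and code than the raw coefficients, which is why the interblock predictor pays off precisely during stationary periods.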