Tools for Signal Compression: Applications to Speech and Audio Coding by Nicolas Moreau


Chapter 4

Entropy Coding

4.1. Introduction

Consider a continuous-time signal x(t) bandlimited to [−B, +B]. Sampling it at a frequency fe greater than or equal to the Nyquist rate, 2B, yields a discrete-time signal x(n). This signal may be interpreted as one realization of a discrete-time random process X(n), which is assumed to have the usual stationarity and ergodicity properties.
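As a minimal illustration of this setup, the following Python sketch samples a band-limited test signal at fe ≥ 2B and treats the resulting sequence x(n) as one realization of the process X(n). The bandwidth B, the sampling frequency fe, and the test signal itself are arbitrary choices made for the example, not values taken from the text.

```python
import numpy as np

# Assumed parameters (illustrative only): bandwidth B and sampling frequency fe >= 2B
B = 4000.0           # highest frequency present in x(t), in Hz
fe = 2 * B           # Nyquist rate; any fe >= 2B preserves the information in x(t)
n = np.arange(1000)  # discrete-time index n

# A toy band-limited signal: a sum of sinusoids, all below B
x = (0.7 * np.sin(2 * np.pi * 1200.0 * n / fe)
     + 0.3 * np.sin(2 * np.pi * 3500.0 * n / fe))

# x[n] is one realization of the discrete-time random process X(n)
print(x[:5])
```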

This process has continuous values. Assume that it has been quantized with a sufficiently fine resolution, so that it becomes a discrete-valued random process, that is, X(n) takes its values from a finite set. In the vocabulary of information theory, X(n) is the information source, the finite set is the input alphabet, and its elements x_i are the input symbols or letters of the alphabet. We now aim to compress this information. The aim of this chapter is to show that, under certain hypotheses, this operation can be carried out without introducing any distortion. This is known as noiseless coding, lossless coding, or entropy coding. Unfortunately, the compression rates attainable in this way are generally too low, so a certain level of distortion must be tolerated. We can show that a function, known as the rate-distortion function, gives a lower limit for the distortion when the bit rate is fixed or, inversely, a lower limit for the bit rate when the distortion level is fixed.
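To make the idea concrete, here is a small Python sketch that estimates the entropy H(X) = −Σ p_i log2(p_i) of a discrete-valued source from observed symbol frequencies; a lossless (entropy) code can approach, but not beat, this many bits per symbol on average. The alphabet and probabilities below are invented for illustration and are not taken from the text.

```python
import numpy as np

def empirical_entropy(symbols):
    """Estimate H(X) = -sum_i p_i * log2(p_i) from observed symbol counts."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()                 # empirical probabilities p_i
    return float(-np.sum(p * np.log2(p)))     # bits per symbol

# Illustrative source: 4-letter alphabet with unequal (assumed) probabilities
rng = np.random.default_rng(0)
alphabet = np.array([0, 1, 2, 3])
probs = np.array([0.5, 0.25, 0.15, 0.10])
x = rng.choice(alphabet, size=10000, p=probs)

H = empirical_entropy(x)
fixed_rate = np.log2(len(alphabet))           # bits/symbol of a naive fixed-length code
print(f"entropy ~ {H:.3f} bits/symbol, fixed-length code uses {fixed_rate:.0f} bits/symbol")
```

The gap between the fixed-length rate log2|alphabet| and the entropy H is the compression available without introducing any distortion.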
