Principles of Digital Communication

Book Description

The renowned communications theorist Robert Gallager brings his lucid writing style to the study of the fundamental system aspects of digital communication for a one-semester course for graduate students. With the clarity and insight that have characterized his teaching and earlier textbooks, he develops a simple framework and then combines this with careful proofs to help the reader understand modern systems and simplified models in an intuitive yet precise way. A strong narrative and links between theory and practice reinforce this concise, practical presentation. The book begins with data compression for arbitrary sources. Gallager then describes how to modulate the resulting binary data for transmission over wires, cables, optical fibers, and wireless channels. Analysis and intuitive interpretations are developed for channel noise models, followed by coverage of the principles of detection, coding, and decoding. The various concepts covered are brought together in a description of wireless communication, using CDMA as a case study.

Table of Contents

  1. Cover
  2. Title Page
  3. Copyright Page
  4. Contents
  5. Preface
  6. Acknowledgements
  7. 1 Introduction to Digital Communication
    1. 1.1 Standardized Interfaces and Layering
    2. 1.2 Communication Sources
      1. 1.2.1 Source Coding
    3. 1.3 Communication Channels
      1. 1.3.1 Channel Encoding (Modulation)
      2. 1.3.2 Error Correction
    4. 1.4 Digital Interface
      1. 1.4.1 Network Aspects of the Digital Interface
    5. 1.5 Supplementary Reading
  8. 2 Coding for Discrete Sources
    1. 2.1 Introduction
    2. 2.2 Fixed-Length Codes for Discrete Sources
    3. 2.3 Variable-Length Codes for Discrete Sources
      1. 2.3.1 Unique Decodability
      2. 2.3.2 Prefix-Free Codes for Discrete Sources
      3. 2.3.3 The Kraft Inequality for Prefix-Free Codes
    4. 2.4 Probability Models for Discrete Sources
      1. 2.4.1 Discrete Memoryless Sources
    5. 2.5 Minimum L for Prefix-Free Codes
      1. 2.5.1 Lagrange Multiplier Solution for the Minimum L
      2. 2.5.2 Entropy Bounds on L
      3. 2.5.3 Huffman’s Algorithm for Optimal Source Codes
    6. 2.6 Entropy and Fixed-To-Variable-Length Codes
      1. 2.6.1 Fixed-To-Variable-Length Codes
    7. 2.7 The AEP and the Source Coding Theorems
      1. 2.7.1 The Weak Law of Large Numbers
      2. 2.7.2 The Asymptotic Equipartition Property
      3. 2.7.3 Source Coding Theorems
      4. 2.7.4 The Entropy Bound for General Classes of Codes
    8. 2.8 Markov Sources
      1. 2.8.1 Coding for Markov Sources
      2. 2.8.2 Conditional Entropy
    9. 2.9 Lempel–Ziv Universal Data Compression
      1. 2.9.1 The LZ77 Algorithm
      2. 2.9.2 Why LZ77 Works
      3. 2.9.3 Discussion
    10. 2.10 Summary of Discrete Source Coding
    11. 2.11 Exercises
  9. 3 Quantization
    1. 3.1 Introduction to Quantization
    2. 3.2 Scalar Quantization
      1. 3.2.1 Choice of Intervals for Given Representation Points
      2. 3.2.2 Choice of Representation Points for Given Intervals
      3. 3.2.3 The Lloyd–Max Algorithm
    3. 3.3 Vector Quantization
    4. 3.4 Entropy-Coded Quantization
    5. 3.5 High-Rate Entropy-Coded Quantization
    6. 3.6 Differential Entropy
    7. 3.7 Performance of Uniform High-Rate Scalar Quantizers
    8. 3.8 High-Rate Two-Dimensional Quantizers
    9. 3.9 Summary of Quantization
    10. 3.10 Appendixes
      1. 3.10.1 Nonuniform Scalar Quantizers
      2. 3.10.2 Nonuniform 2D Quantizers
    11. 3.11 Exercises
  10. 4 Source and Channel Waveforms
    1. 4.1 Introduction
      1. 4.1.1 Analog Sources
      2. 4.1.2 Communication Channels
    2. 4.2 Fourier Series
      1. 4.2.1 Finite-Energy Waveforms
    3. 4.3 L₂ Functions and Lebesgue Integration Over [–T/2, T/2]
      1. 4.3.1 Lebesgue Measure for a Union of Intervals
      2. 4.3.2 Measure for More General Sets
      3. 4.3.3 Measurable Functions and Integration Over [–T/2, T/2]
      4. 4.3.4 Measurability of Functions Defined by Other Functions
      5. 4.3.5 L₁ and L₂ Functions Over [–T/2, T/2]
    4. 4.4 Fourier Series for L₂ Waveforms
      1. 4.4.1 The T-spaced Truncated Sinusoid Expansion
    5. 4.5 Fourier Transforms and L₂ Waveforms
      1. 4.5.1 Measure and Integration Over R
      2. 4.5.2 Fourier Transforms of L₂ Functions
    6. 4.6 The DTFT and the Sampling Theorem
      1. 4.6.1 The Discrete-Time Fourier Transform
      2. 4.6.2 The Sampling Theorem
      3. 4.6.3 Source Coding Using Sampled Waveforms
      4. 4.6.4 The Sampling Theorem for [Δ – W, Δ + W]
    7. 4.7 Aliasing and the Sinc-Weighted Sinusoid Expansion
      1. 4.7.1 The T-spaced Sinc-Weighted Sinusoid Expansion
      2. 4.7.2 Degrees of Freedom
      3. 4.7.3 Aliasing – a Time-Domain Approach
      4. 4.7.4 Aliasing – a Frequency-Domain Approach
    8. 4.8 Summary
    9. 4.9 Appendix: Supplementary Material and Proofs
      1. 4.9.1 Countable Sets
      2. 4.9.2 Finite Unions of Intervals Over [–T/2, T/2]
      3. 4.9.3 Countable Unions and Outer Measure Over [–T/2, T/2]
      4. 4.9.4 Arbitrary Measurable Sets Over [–T/2, T/2]
    10. 4.10 Exercises
  11. 5 Vector Spaces and Signal Space
    1. 5.1 Axioms and Basic Properties of Vector Spaces
      1. 5.1.1 Finite-Dimensional Vector Spaces
    2. 5.2 Inner Product Spaces
      1. 5.2.1 The Inner Product Spaces Rⁿ and Cⁿ
      2. 5.2.2 One-Dimensional Projections
      3. 5.2.3 The Inner Product Space of L₂ Functions
      4. 5.2.4 Subspaces of Inner Product Spaces
    3. 5.3 Orthonormal Bases and the Projection Theorem
      1. 5.3.1 Finite-Dimensional Projections
      2. 5.3.2 Corollaries of the Projection Theorem
      3. 5.3.3 Gram–Schmidt Orthonormalization
      4. 5.3.4 Orthonormal Expansions in L₂
    4. 5.4 Summary
    5. 5.5 Appendix: Supplementary Material and Proofs
      1. 5.5.1 The Plancherel Theorem
      2. 5.5.2 The Sampling and Aliasing Theorems
      3. 5.5.3 Prolate Spheroidal Waveforms
    6. 5.6 Exercises
  12. 6 Channels, Modulation, and Demodulation
    1. 6.1 Introduction
    2. 6.2 Pulse Amplitude Modulation (PAM)
      1. 6.2.1 Signal Constellations
      2. 6.2.2 Channel Imperfections: A Preliminary View
      3. 6.2.3 Choice of the Modulation Pulse
      4. 6.2.4 PAM Demodulation
    3. 6.3 The Nyquist Criterion
      1. 6.3.1 Band-edge Symmetry
      2. 6.3.2 Choosing {p(t – kT); k ∈ Z} as an Orthonormal Set
      3. 6.3.3 Relation Between PAM and Analog Source Coding
    4. 6.4 Modulation: Baseband to Passband and Back
      1. 6.4.1 Double-Sideband Amplitude Modulation
    5. 6.5 Quadrature Amplitude Modulation (QAM)
      1. 6.5.1 QAM Signal Set
      2. 6.5.2 QAM Baseband Modulation and Demodulation
      3. 6.5.3 QAM: Baseband to Passband and Back
      4. 6.5.4 Implementation of QAM
    6. 6.6 Signal Space and Degrees of Freedom
      1. 6.6.1 Distance and Orthogonality
    7. 6.7 Carrier and Phase Recovery in QAM Systems
      1. 6.7.1 Tracking Phase in the Presence of Noise
      2. 6.7.2 Large Phase Errors
    8. 6.8 Summary of Modulation and Demodulation
    9. 6.9 Exercises
  13. 7 Random Processes and Noise
    1. 7.1 Introduction
    2. 7.2 Random Processes
      1. 7.2.1 Examples of Random Processes
      2. 7.2.2 The Mean and Covariance of a Random Process
      3. 7.2.3 Additive Noise Channels
    3. 7.3 Gaussian Random Variables, Vectors, and Processes
      1. 7.3.1 The Covariance Matrix of a Jointly Gaussian Random Vector
      2. 7.3.2 The Probability Density of a Jointly Gaussian Random Vector
      3. 7.3.3 Special Case of a 2D Zero-Mean Gaussian Random Vector
      4. 7.3.4 Z = AW, Where A Is Orthogonal
      5. 7.3.5 Probability Density for Gaussian Vectors in Terms of Principal Axes
      6. 7.3.6 Fourier Transforms for Joint Densities
    4. 7.4 Linear Functionals and Filters for Random Processes
      1. 7.4.1 Gaussian Processes Defined Over Orthonormal Expansions
      2. 7.4.2 Linear Filtering of Gaussian Processes
      3. 7.4.3 Covariance for Linear Functionals and Filters
    5. 7.5 Stationarity and Related Concepts
      1. 7.5.1 Wide-Sense Stationary (WSS) Random Processes
      2. 7.5.2 Effectively Stationary and Effectively WSS Random Processes
      3. 7.5.3 Linear Functionals for Effectively WSS Random Processes
      4. 7.5.4 Linear Filters for Effectively WSS Random Processes
    6. 7.6 Stationarity in the Frequency Domain
    7. 7.7 White Gaussian Noise
      1. 7.7.1 The Sinc Expansion as an Approximation to WGN
      2. 7.7.2 Poisson Process Noise
    8. 7.8 Adding Noise to Modulated Communication
      1. 7.8.1 Complex Gaussian Random Variables and Vectors
    9. 7.9 Signal-To-Noise Ratio
    10. 7.10 Summary of Random Processes
    11. 7.11 Appendix: Supplementary Topics
      1. 7.11.1 Properties of Covariance Matrices
      2. 7.11.2 The Fourier Series Expansion of a Truncated Random Process
      3. 7.11.3 Uncorrelated Coefficients in a Fourier Series
      4. 7.11.4 The Karhunen–Loève Expansion
    12. 7.12 Exercises
  14. 8 Detection, Coding, and Decoding
    1. 8.1 Introduction
    2. 8.2 Binary Detection
    3. 8.3 Binary Signals in White Gaussian Noise
      1. 8.3.1 Detection for PAM Antipodal Signals
      2. 8.3.2 Detection for Binary Nonantipodal Signals
      3. 8.3.3 Detection for Binary Real Vectors in WGN
      4. 8.3.4 Detection for Binary Complex Vectors in WGN
      5. 8.3.5 Detection of Binary Antipodal Waveforms in WGN
    4. 8.4 M-ary Detection and Sequence Detection
      1. 8.4.1 M-ary Detection
      2. 8.4.2 Successive Transmissions of QAM Signals in WGN
      3. 8.4.3 Detection with Arbitrary Modulation Schemes
    5. 8.5 Orthogonal Signal Sets and Simple Channel Coding
      1. 8.5.1 Simplex Signal Sets
      2. 8.5.2 Biorthogonal Signal Sets
      3. 8.5.3 Error Probability for Orthogonal Signal Sets
    6. 8.6 Block Coding
      1. 8.6.1 Binary Orthogonal Codes and Hadamard Matrices
      2. 8.6.2 Reed–Muller Codes
    7. 8.7 Noisy-Channel Coding Theorem
      1. 8.7.1 Discrete Memoryless Channels
      2. 8.7.2 Capacity
      3. 8.7.3 Converse to the Noisy-Channel Coding Theorem
      4. 8.7.4 Noisy-Channel Coding Theorem, Forward Part
      5. 8.7.5 The Noisy-Channel Coding Theorem for WGN
    8. 8.8 Convolutional Codes
      1. 8.8.1 Decoding of Convolutional Codes
      2. 8.8.2 The Viterbi Algorithm
    9. 8.9 Summary of Detection, Coding, and Decoding
    10. 8.10 Appendix: Neyman–Pearson Threshold Tests
    11. 8.11 Exercises
  15. 9 Wireless Digital Communication
    1. 9.1 Introduction
    2. 9.2 Physical Modeling for Wireless Channels
      1. 9.2.1 Free-space, Fixed Transmitting and Receiving Antennas
      2. 9.2.2 Free-space, Moving Antenna
      3. 9.2.3 Moving Antenna, Reflecting Wall
      4. 9.2.4 Reflection from a Ground Plane
      5. 9.2.5 Shadowing
      6. 9.2.6 Moving Antenna, Multiple Reflectors
    3. 9.3 Input/Output Models of Wireless Channels
      1. 9.3.1 The System Function and Impulse Response for LTV Systems
      2. 9.3.2 Doppler Spread and Coherence Time
      3. 9.3.3 Delay Spread and Coherence Frequency
    4. 9.4 Baseband System Functions and Impulse Responses
      1. 9.4.1 A Discrete-Time Baseband Model
    5. 9.5 Statistical Channel Models
      1. 9.5.1 Passband and Baseband Noise
    6. 9.6 Data Detection
      1. 9.6.1 Binary Detection in Flat Rayleigh Fading
      2. 9.6.2 Noncoherent Detection with Known Channel Magnitude
      3. 9.6.3 Noncoherent Detection in Flat Rician Fading
    7. 9.7 Channel Measurement
      1. 9.7.1 The Use of Probing Signals to Estimate the Channel
      2. 9.7.2 Rake Receivers
    8. 9.8 Diversity
    9. 9.9 CDMA: The IS95 Standard
      1. 9.9.1 Voice Compression
      2. 9.9.2 Channel Coding and Decoding
      3. 9.9.3 Viterbi Decoding for Fading Channels
      4. 9.9.4 Modulation and Demodulation
      5. 9.9.5 Multiaccess Interference in IS95
    10. 9.10 Summary of Wireless Communication
    11. 9.11 Appendix: Error Probability for Noncoherent Detection
    12. 9.12 Exercises
  16. References
  17. Index