
Matrix Analysis and Applications

Book Description

This balanced and comprehensive study presents the theory, methods, and applications of matrix analysis in a new theoretical framework, allowing readers to understand second-order and higher-order matrix analysis in a completely new light. Alongside the core subjects of matrix analysis, such as singular value analysis, solving matrix equations, and eigenanalysis, the author introduces new applications and perspectives that are unique to this book. Gradient analysis and optimization, a highly topical subject, plays a central role here. Also included are subspace analysis, projection analysis, and tensor analysis, topics that are often neglected in other books. Having provided a solid foundation in the subject, the author goes on to place particular emphasis on the many applications matrix analysis has in science and engineering, making this book suitable for scientists, engineers, and graduate students alike.

Table of Contents

  1. Cover
  2. Half-title page
  3. Title Page
  4. Copyright page
  5. Dedication
  6. Contents
  7. Preface
  8. Notation
  9. Abbreviations
  10. Algorithms
  11. Part I Matrix Algebra
    1. 1 Introduction to Matrix Algebra
      1. 1.1 Basic Concepts of Vectors and Matrices
        1. 1.1.1 Vectors and Matrices
        2. 1.1.2 Basic Vector Calculus
        3. 1.1.3 Basic Matrix Calculus
        4. 1.1.4 Linear Independence of Vectors
        5. 1.1.5 Matrix Functions
      2. 1.2 Elementary Row Operations and Applications
        1. 1.2.1 Elementary Row Operations
        2. 1.2.2 Gauss Elimination Methods
      3. 1.3 Sets, Vector Subspaces and Linear Mapping
        1. 1.3.1 Sets
        2. 1.3.2 Fields and Vector Spaces
        3. 1.3.3 Linear Mapping
      4. 1.4 Inner Products and Vector Norms
        1. 1.4.1 Inner Products of Vectors
        2. 1.4.2 Norms of Vectors
        3. 1.4.3 Similarity Comparison Between Vectors
        4. 1.4.4 Banach Space, Euclidean Space, Hilbert Space
        5. 1.4.5 Inner Products and Norms of Matrices
      5. 1.5 Random Vectors
        1. 1.5.1 Statistical Interpretation of Random Vectors
        2. 1.5.2 Gaussian Random Vectors
      6. 1.6 Performance Indexes of Matrices
        1. 1.6.1 Quadratic Forms
        2. 1.6.2 Determinants
        3. 1.6.3 Matrix Eigenvalues
        4. 1.6.4 Matrix Trace
        5. 1.6.5 Matrix Rank
      7. 1.7 Inverse Matrices and Pseudo-Inverse Matrices
        1. 1.7.1 Definition and Properties of Inverse Matrices
        2. 1.7.2 Matrix Inversion Lemma
        3. 1.7.3 Inversion of Hermitian Matrices
        4. 1.7.4 Left and Right Pseudo-Inverse Matrices
      8. 1.8 Moore–Penrose Inverse Matrices
        1. 1.8.1 Definition and Properties
        2. 1.8.2 Computation of Moore–Penrose Inverse Matrix
      9. 1.9 Direct Sum and Hadamard Product
        1. 1.9.1 Direct Sum of Matrices
        2. 1.9.2 Hadamard Product
      10. 1.10 Kronecker Products and Khatri–Rao Product
        1. 1.10.1 Kronecker Products
        2. 1.10.2 Generalized Kronecker Products
        3. 1.10.3 Khatri–Rao Product
      11. 1.11 Vectorization and Matricization
        1. 1.11.1 Vectorization and Commutation Matrix
        2. 1.11.2 Matricization of a Vector
        3. 1.11.3 Properties of Vectorization Operator
      12. 1.12 Sparse Representations
        1. 1.12.1 Sparse Vectors and Sparse Representations
        2. 1.12.2 Sparse Representation of Face Recognition
      13. Exercises
    2. 2 Special Matrices
      1. 2.1 Hermitian Matrices
      2. 2.2 Idempotent Matrix
      3. 2.3 Permutation Matrix
        1. 2.3.1 Permutation Matrix and Exchange Matrix
        2. 2.3.2 Generalized Permutation Matrix
      4. 2.4 Orthogonal Matrix and Unitary Matrix
      5. 2.5 Band Matrix and Triangular Matrix
        1. 2.5.1 Band Matrix
        2. 2.5.2 Triangular Matrix
      6. 2.6 Summing Vector and Centering Matrix
        1. 2.6.1 Summing Vector
        2. 2.6.2 Centering Matrix
      7. 2.7 Vandermonde Matrix and Fourier Matrix
        1. 2.7.1 Vandermonde Matrix
        2. 2.7.2 Fourier Matrix
        3. 2.7.3 Index Vectors
        4. 2.7.4 FFT Algorithm
      8. 2.8 Hadamard Matrix
      9. 2.9 Toeplitz Matrix
        1. 2.9.1 Symmetric Toeplitz Matrix
        2. 2.9.2 Discrete Cosine Transform of Toeplitz Matrix
      10. Exercises
    3. 3 Matrix Differential
      1. 3.1 Jacobian Matrix and Gradient Matrix
        1. 3.1.1 Jacobian Matrix
        2. 3.1.2 Gradient Matrix
        3. 3.1.3 Calculation of Partial Derivative and Gradient
      2. 3.2 Real Matrix Differential
        1. 3.2.1 Calculation of Real Matrix Differential
        2. 3.2.2 Jacobian Matrix Identification
        3. 3.2.3 Jacobian Matrix of Real Matrix Functions
      3. 3.3 Real Hessian Matrix and Identification
        1. 3.3.1 Real Hessian Matrix
        2. 3.3.2 Real Hessian Matrix Identification
      4. 3.4 Complex Gradient Matrices
        1. 3.4.1 Holomorphic Function and Complex Partial Derivative
        2. 3.4.2 Complex Matrix Differential
        3. 3.4.3 Complex Gradient Matrix Identification
      5. 3.5 Complex Hessian Matrices and Identification
        1. 3.5.1 Complex Hessian Matrices
        2. 3.5.2 Complex Hessian Matrix Identification
      6. Exercises
  12. Part II Matrix Analysis
    1. 4 Gradient Analysis and Optimization
      1. 4.1 Real Gradient Analysis
        1. 4.1.1 Stationary Points and Extreme Points
        2. 4.1.2 Real Gradient Analysis of f(x)
        3. 4.1.3 Real Gradient Analysis of f(X)
      2. 4.2 Gradient Analysis of Complex Variable Function
        1. 4.2.1 Extreme Point of Complex Variable Function
        2. 4.2.2 Complex Gradient Analysis
      3. 4.3 Convex Sets and Convex Function Identification
        1. 4.3.1 Standard Constrained Optimization Problems
        2. 4.3.2 Convex Sets and Convex Functions
        3. 4.3.3 Convex Function Identification
      4. 4.4 Gradient Methods for Smooth Convex Optimization
        1. 4.4.1 Gradient Method
        2. 4.4.2 Conjugate Gradient Method
        3. 4.4.3 Convergence Rates
      5. 4.5 Nesterov Optimal Gradient Method
        1. 4.5.1 Lipschitz Continuous Function
        2. 4.5.2 Nesterov Optimal Gradient Algorithms
      6. 4.6 Nonsmooth Convex Optimization
        1. 4.6.1 Subgradient and Subdifferential
        2. 4.6.2 Proximal Operator
        3. 4.6.3 Proximal Gradient Method
      7. 4.7 Constrained Convex Optimization
        1. 4.7.1 Lagrange Multiplier Method
        2. 4.7.2 Penalty Function Method
        3. 4.7.3 Augmented Lagrange Multiplier Method
        4. 4.7.4 Lagrangian Dual Method
        5. 4.7.5 Karush–Kuhn–Tucker Conditions
        6. 4.7.6 Alternating Direction Method of Multipliers
      8. 4.8 Newton Methods
        1. 4.8.1 Newton Method for Unconstrained Optimization
        2. 4.8.2 Newton Method for Constrained Optimization
      9. 4.9 Primal–Dual Interior-Point Method
        1. 4.9.1 Primal–Dual Problems
        2. 4.9.2 First-Order Primal–Dual Interior-Point Method
        3. 4.9.3 Second-Order Primal–Dual Interior-Point Method
      10. Exercises
    2. 5 Singular Value Analysis
      1. 5.1 Numerical Stability and Condition Number
      2. 5.2 Singular Value Decomposition (SVD)
        1. 5.2.1 Singular Value Decomposition
        2. 5.2.2 Properties of Singular Values
        3. 5.2.3 Rank-Deficient Least Squares Solutions
      3. 5.3 Product Singular Value Decomposition (PSVD)
        1. 5.3.1 PSVD Problem
        2. 5.3.2 Accurate Calculation of PSVD
      4. 5.4 Applications of Singular Value Decomposition
        1. 5.4.1 Static Systems
        2. 5.4.2 Image Compression
      5. 5.5 Generalized Singular Value Decomposition (GSVD)
        1. 5.5.1 Definition and Properties
        2. 5.5.2 Algorithms for GSVD
        3. 5.5.3 Two Application Examples of GSVD
      6. 5.6 Low-Rank–Sparse Matrix Decomposition
        1. 5.6.1 Matrix Decomposition Problems
        2. 5.6.2 Singular Value Thresholding
        3. 5.6.3 Robust Principal Component Analysis
      7. 5.7 Matrix Completion
        1. 5.7.1 Matrix Completion Problems
        2. 5.7.2 Matrix Completion Model and Incoherence
        3. 5.7.3 Singular Value Thresholding Algorithm
        4. 5.7.4 Fast and Accurate Matrix Completion
      8. Exercises
    3. 6 Solving Matrix Equations
      1. 6.1 Least Squares Method
        1. 6.1.1 Ordinary Least Squares Methods
        2. 6.1.2 Properties of Least Squares Solutions
        3. 6.1.3 Data Least Squares
      2. 6.2 Tikhonov Regularization and Gauss–Seidel Method
        1. 6.2.1 Tikhonov Regularization
        2. 6.2.2 Regularized Gauss–Seidel Method
      3. 6.3 Total Least Squares (TLS) Methods
        1. 6.3.1 TLS Problems
        2. 6.3.2 TLS Solution
        3. 6.3.3 Performances of TLS Solution
        4. 6.3.4 Generalized Total Least Squares
        5. 6.3.5 Total Least Squares Fitting
        6. 6.3.6 Total Maximum Likelihood Method
      4. 6.4 Constrained Total Least Squares
        1. 6.4.1 Constrained Total Least Squares Method
        2. 6.4.2 Harmonic Superresolution
        3. 6.4.3 Image Restoration
      5. 6.5 Subspace Method for Solving Blind Matrix Equations
      6. 6.6 Nonnegative Matrix Factorization: Optimization Theory
        1. 6.6.1 Nonnegative Matrices
        2. 6.6.2 Nonnegativity and Sparsity Constraints
        3. 6.6.3 Nonnegative Matrix Factorization Model
        4. 6.6.4 Divergences and Deformed Logarithm
      7. 6.7 Nonnegative Matrix Factorization: Optimization Algorithms
        1. 6.7.1 Multiplication Algorithms
        2. 6.7.2 Nesterov Optimal Gradient Algorithm
        3. 6.7.3 Alternating Nonnegative Least Squares
        4. 6.7.4 Quasi-Newton Method
        5. 6.7.5 Sparse Nonnegative Matrix Factorization
      8. 6.8 Sparse Matrix Equation Solving: Optimization Theory
        1. 6.8.1 ℓ₁-Norm Minimization
        2. 6.8.2 Lasso and Robust Linear Regression
        3. 6.8.3 Mutual Coherence and RIP Conditions
        4. 6.8.4 Relation to Tikhonov Regularization
        5. 6.8.5 Gradient Analysis of ℓ₁-Norm Minimization
      9. 6.9 Sparse Matrix Equation Solving: Optimization Algorithms
        1. 6.9.1 Basis Pursuit Algorithms
        2. 6.9.2 First-Order Augmented Lagrangian Algorithm
        3. 6.9.3 Barzilai–Borwein Gradient Projection Algorithm
        4. 6.9.4 ADMM Algorithms for Lasso Problems
        5. 6.9.5 LARS Algorithms for Lasso Problems
        6. 6.9.6 Covariance Graphical Lasso Method
        7. 6.9.7 Homotopy Algorithm
        8. 6.9.8 Bregman Iteration Algorithms
      10. Exercises
    4. 7 Eigenanalysis
      1. 7.1 Eigenvalue Problem and Characteristic Equation
        1. 7.1.1 Eigenvalue Problem
        2. 7.1.2 Characteristic Polynomial
      2. 7.2 Eigenvalues and Eigenvectors
        1. 7.2.1 Eigenvalues
        2. 7.2.2 Eigenvectors
      3. 7.3 Similarity Reduction
        1. 7.3.1 Similarity Transformation of Matrices
        2. 7.3.2 Similarity Reduction of Matrices
        3. 7.3.3 Similarity Reduction of Matrix Polynomials
      4. 7.4 Polynomial Matrices and Balanced Reduction
        1. 7.4.1 Smith Normal Forms
        2. 7.4.2 Invariant Factor Method
        3. 7.4.3 Conversion of Jordan Form and Smith Form
        4. 7.4.4 Finding Smith Blocks from Jordan Blocks
        5. 7.4.5 Finding Jordan Blocks from Smith Blocks
      5. 7.5 Cayley–Hamilton Theorem with Applications
        1. 7.5.1 Cayley–Hamilton Theorem
        2. 7.5.2 Computation of Inverse Matrices
        3. 7.5.3 Computation of Matrix Powers
        4. 7.5.4 Calculation of Matrix Exponential Functions
      6. 7.6 Application Examples of Eigenvalue Decomposition
        1. 7.6.1 Pisarenko Harmonic Decomposition
        2. 7.6.2 Discrete Karhunen–Loève Transformation
        3. 7.6.3 Principal Component Analysis
      7. 7.7 Generalized Eigenvalue Decomposition (GEVD)
        1. 7.7.1 Generalized Eigenvalue Decomposition
        2. 7.7.2 Total Least Squares Method for GEVD
        3. 7.7.3 Application of GEVD: ESPRIT
        4. 7.7.4 Similarity Transformation in GEVD
      8. 7.8 Rayleigh Quotient
        1. 7.8.1 Definition and Properties of Rayleigh Quotient
        2. 7.8.2 Rayleigh Quotient Iteration
        3. 7.8.3 Algorithms for Rayleigh Quotient
      9. 7.9 Generalized Rayleigh Quotient
        1. 7.9.1 Definition and Properties
        2. 7.9.2 Effectiveness of Class Discrimination
        3. 7.9.3 Robust Beamforming
      10. 7.10 Quadratic Eigenvalue Problems
        1. 7.10.1 Description of Quadratic Eigenvalue Problems
        2. 7.10.2 Solving Quadratic Eigenvalue Problems
        3. 7.10.3 Application Examples
      11. 7.11 Joint Diagonalization
        1. 7.11.1 Joint Diagonalization Problems
        2. 7.11.2 Orthogonal Approximate Joint Diagonalization
        3. 7.11.3 Nonorthogonal Approximate Joint Diagonalization
      12. Exercises
    5. 8 Subspace Analysis and Tracking
      1. 8.1 General Theory of Subspaces
        1. 8.1.1 Bases of Subspaces
        2. 8.1.2 Disjoint Subspaces and Orthogonal Complement
      2. 8.2 Column Space, Row Space and Null Space
        1. 8.2.1 Definitions and Properties
        2. 8.2.2 Subspace Basis Construction
        3. 8.2.3 SVD-Based Orthonormal Basis Construction
        4. 8.2.4 Basis Construction of Subspaces Intersection
      3. 8.3 Subspace Methods
        1. 8.3.1 Signal Subspace and Noise Subspace
        2. 8.3.2 Multiple Signal Classification (MUSIC)
        3. 8.3.3 Subspace Whitening
      4. 8.4 Grassmann Manifold and Stiefel Manifold
        1. 8.4.1 Equivalent Subspaces
        2. 8.4.2 Grassmann Manifold
        3. 8.4.3 Stiefel Manifold
      5. 8.5 Projection Approximation Subspace Tracking (PAST)
        1. 8.5.1 Basic PAST Theory
        2. 8.5.2 PAST Algorithms
      6. 8.6 Fast Subspace Decomposition
        1. 8.6.1 Rayleigh–Ritz Approximation
        2. 8.6.2 Fast Subspace Decomposition Algorithm
      7. Exercises
    6. 9 Projection Analysis
      1. 9.1 Projection and Orthogonal Projection
        1. 9.1.1 Projection Theorem
        2. 9.1.2 Mean Square Estimation
      2. 9.2 Projectors and Projection Matrices
        1. 9.2.1 Projector and Orthogonal Projector
        2. 9.2.2 Projection Matrices
        3. 9.2.3 Derivatives of Projection Matrix
      3. 9.3 Updating of Projection Matrices
        1. 9.3.1 Updating Formulas for Projection Matrices
        2. 9.3.2 Prediction Filters
        3. 9.3.3 Updating of Lattice Adaptive Filter
      4. 9.4 Oblique Projector of Full Column Rank Matrix
        1. 9.4.1 Definition and Properties of Oblique Projectors
        2. 9.4.2 Geometric Interpretation of Oblique Projectors
        3. 9.4.3 Recursion of Oblique Projectors
      5. 9.5 Oblique Projector of Full Row Rank Matrices
        1. 9.5.1 Definition and Properties
        2. 9.5.2 Calculation of Oblique Projection
        3. 9.5.3 Applications of Oblique Projectors
      6. Exercises
  13. Part III Higher-Order Matrix Analysis
    1. 10 Tensor Analysis
      1. 10.1 Tensors and their Representation
        1. 10.1.1 Tensors
        2. 10.1.2 Tensor Representation
      2. 10.2 Vectorization and Matricization of Tensors
        1. 10.2.1 Vectorization and Horizontal Unfolding
        2. 10.2.2 Longitudinal Unfolding of Tensors
      3. 10.3 Basic Algebraic Operations of Tensors
        1. 10.3.1 Inner Product, Norm and Outer Product
        2. 10.3.2 Mode-n Product of Tensors
        3. 10.3.3 Rank of Tensor
      4. 10.4 Tucker Decomposition of Tensors
        1. 10.4.1 Tucker Decomposition (Higher-Order SVD)
        2. 10.4.2 Third-Order SVD
        3. 10.4.3 Alternating Least Squares Algorithms
      5. 10.5 Parallel Factor Decomposition of Tensors
        1. 10.5.1 Bilinear Model
        2. 10.5.2 Parallel Factor Analysis
        3. 10.5.3 Uniqueness Condition
        4. 10.5.4 Alternating Least Squares Algorithm
      6. 10.6 Applications of Low-Rank Tensor Decomposition
        1. 10.6.1 Multimodal Data Fusion
        2. 10.6.2 Fusion of Multimodal Brain Images
        3. 10.6.3 Process Monitoring
        4. 10.6.4 Note on Other Applications
      7. 10.7 Tensor Eigenvalue Decomposition
        1. 10.7.1 Tensor–Vector Products
        2. 10.7.2 Determinants and Eigenvalues of Tensors
        3. 10.7.3 Generalized Tensor Eigenvalues Problems
        4. 10.7.4 Orthogonal Decomposition of Symmetric Tensors
      8. 10.8 Preprocessing and Postprocessing
        1. 10.8.1 Centering and Scaling of Multi-Way Data
        2. 10.8.2 Compression of Data Array
      9. 10.9 Nonnegative Tensor Decomposition Algorithms
        1. 10.9.1 Multiplication Algorithm
        2. 10.9.2 ALS Algorithms
      10. 10.10 Tensor Completion
        1. 10.10.1 Simultaneous Tensor Decomposition and Completion
        2. 10.10.2 Smooth PARAFAC Tensor Completion
      11. 10.11 Software
      12. Exercises
  14. References
  15. Index