Mathematical Statistics and Stochastic Processes

Book Description

Books on mathematical statistics are generally restricted to the case of independent, identically distributed random variables. This book, however, treats both that case and the case of dependent variables, i.e. statistics for discrete- and continuous-time processes. The latter case is particularly important for today's practitioners.

Mathematical Statistics and Stochastic Processes is based on decision theory and asymptotic statistics. It provides up-to-date coverage of probability theory, estimation, confidence intervals, non-parametric statistics and robustness, second-order processes in discrete and continuous time, diffusion processes, statistics for discrete- and continuous-time processes, statistical prediction, and complements in probability.

This book is aimed at students taking courses on probability with an emphasis on measure theory, and at all practitioners who apply and use statistics and probability on a daily basis.

Table of Contents

  1. Cover
  2. Title Page
  3. Copyright
  4. Preface
  5. Part 1: Mathematical Statistics
    1. Chapter 1: Introduction to Mathematical Statistics
      1. 1.1. Generalities
      2. 1.2. Examples of statistics problems
        1. 1.2.1. Quality control
        2. 1.2.2. Measurement errors
        3. 1.2.3. Filtering
        4. 1.2.4. Confidence intervals
        5. 1.2.5. Homogeneity testing
    2. Chapter 2: Principles of Decision Theory
      1. 2.1. Generalities
      2. 2.2. The problem of choosing a decision function
      3. 2.3. Principles of Bayesian statistics
        1. 2.3.1. Generalities
        2. 2.3.2. Determination of Bayesian decision functions
        3. 2.3.3. Admissibility of Bayes’ rules
      4. 2.4. Complete classes
      5. 2.5. Criticism of decision theory – the asymptotic point of view
      6. 2.6. Exercises
    3. Chapter 3: Conditional Expectation
      1. 3.1. Definition
      2. 3.2. Properties and extension
      3. 3.3. Conditional probabilities and conditional distributions
        1. 3.3.1. Regular version of the conditional probability
        2. 3.3.2. Conditional distributions
        3. 3.3.3. Theorem for integration with respect to the conditional distribution
        4. 3.3.4. Determination of the conditional distributions in the usual cases
      4. 3.4. Exercises
    4. Chapter 4: Statistics and Sufficiency
      1. 4.1. Samples and empirical distributions
        1. 4.1.1. Properties of the empirical distribution and the associated statistics
      2. 4.2. Sufficiency
        1. 4.2.1. The factorization theorem
      3. 4.3. Examples of sufficient statistics – an exponential model
      4. 4.4. Use of a sufficient statistic
      5. 4.5. Exercises
    5. Chapter 5: Point Estimation
      1. 5.1. Generalities
        1. 5.1.1. Definition – examples
        2. 5.1.2. Choice of a preference relation
      2. 5.2. Sufficiency and completeness
        1. 5.2.1. Sufficiency
        2. 5.2.2. Complete statistics
      3. 5.3. The maximum-likelihood method
        1. 5.3.1. Definition
        2. 5.3.2. Maximum likelihood and sufficiency
        3. 5.3.3. Calculating maximum-likelihood estimators
          1. 5.3.3.1. The Newton–Raphson method
      4. 5.4. Optimal unbiased estimators
        1. 5.4.1. Unbiased estimation
          1. 5.4.1.1. Existence of an unbiased estimator
        2. 5.4.2. Unbiased minimum-dispersion estimator
          1. 5.4.2.1. Application to an exponential model
          2. 5.4.2.2. Application to the Gaussian model
          3. 5.4.2.3. Use of the Lehmann–Scheffé theorem
        3. 5.4.3. Criticism of unbiased estimators
      5. 5.5. Efficiency of an estimator
        1. 5.5.1. The Fréchet–Darmois–Cramér–Rao inequality
          1. 5.5.1.1. Calculating I(θ)
          2. 5.5.1.2. Properties of the Fisher information
          3. 5.5.1.3. The case of a biased estimator
        2. 5.5.2. Efficiency
        3. 5.5.3. Extension to R^k
          1. 5.5.3.1. Properties of the information matrix
          2. 5.5.3.2. Efficiency
        4. 5.5.4. The non-regular case
          1. 5.5.4.1. “Superefficient” estimators
          2. 5.5.4.2. Cramér–Rao-type inequalities
      6. 5.6. The linear regression model
        1. 5.6.1. Generalities
        2. 5.6.2. Estimation of the parameter – the Gauss–Markov theorem
      7. 5.7. Exercises
    6. Chapter 6: Hypothesis Testing and Confidence Regions
      1. 6.1. Generalities
        1. 6.1.1. The problem
        2. 6.1.2. Use of decision theory
          1. 6.1.2.1. Preference relation
        3. 6.1.3. Generalization
          1. 6.1.3.1. Preference relation
        4. 6.1.4. Sufficiency
      2. 6.2. The Neyman-Pearson (NP) lemma
      3. 6.3. Multiple hypothesis tests (general methods)
        1. 6.3.1. Testing a simple hypothesis against a composite one
          1. 6.3.1.1. The γ test
          2. 6.3.1.2. The λ test
        2. 6.3.2. General case – unbiased tests
          1. 6.3.2.1. Relation between unbiased tests and unbiased decision functions
      4. 6.4. Case where the ratio of the likelihoods is monotonic
        1. 6.4.1. Generalities
        2. 6.4.2. Unilateral tests
        3. 6.4.3. Bilateral tests
      5. 6.5. Tests relating to the normal distribution
      6. 6.6. Application to estimation: confidence regions
        1. 6.6.1. First preference relation on confidence regions
        2. 6.6.2. Relation to tests
      7. 6.7. Exercises
    7. Chapter 7: Asymptotic Statistics
      1. 7.1. Generalities
      2. 7.2. Consistency of the maximum likelihood estimator
      3. 7.3. The limiting distribution of the maximum likelihood estimator
      4. 7.4. The likelihood ratio test
      5. 7.5. Exercises
    8. Chapter 8: Non-Parametric Methods and Robustness
      1. 8.1. Generalities
      2. 8.2. Non-parametric estimation
        1. 8.2.1. Empirical estimators
        2. 8.2.2. Distribution and density estimation
          1. 8.2.2.1. Convergence of the estimator
        3. 8.2.3. Regression estimation
      3. 8.3. Non-parametric tests
        1. 8.3.1. The χ² test
        2. 8.3.2. The Kolmogorov–Smirnov test
        3. 8.3.3. The Cramér–von Mises test
        4. 8.3.4. Rank test
      4. 8.4. Robustness
        1. 8.4.1. An example of a robust test
        2. 8.4.2. An example of a robust estimator
        3. 8.4.3. A general definition of a robust estimator
      5. 8.5. Exercises
  6. Part 2: Statistics for Stochastic Processes
    1. Chapter 9: Introduction to Statistics for Stochastic Processes
      1. 9.1. Modeling a family of observations
      2. 9.2. Processes
        1. 9.2.1. The distribution of a process
        2. 9.2.2. Gaussian processes
        3. 9.2.3. Stationary processes
        4. 9.2.4. Markov processes
      3. 9.3. Statistics for stochastic processes
      4. 9.4. Exercises
    2. Chapter 10: Weakly Stationary Discrete-Time Processes
      1. 10.1. Autocovariance and spectral density
      2. 10.2. Linear prediction and Wold decomposition
      3. 10.3. Linear processes and the ARMA model
        1. 10.3.1. Spectral density of a linear process
      4. 10.4. Estimating the mean of a weakly stationary process
      5. 10.5. Estimating the autocovariance
      6. 10.6. Estimating the spectral density
        1. 10.6.1. The periodogram
        2. 10.6.2. Convergent estimators of the spectral density
      7. 10.7. Exercises
    3. Chapter 11: Poisson Processes – A Probabilistic and Statistical Study
      1. 11.1. Introduction
      2. 11.2. The axioms of Poisson processes
      3. 11.3. Interarrival time
      4. 11.4. Properties of the Poisson process
      5. 11.5. Notions on generalized Poisson processes
      6. 11.6. Statistics of Poisson processes
      7. 11.7. Exercises
    4. Chapter 12: Square-Integrable Continuous-Time Processes
      1. 12.1. Definitions
      2. 12.2. Mean-square continuity
      3. 12.3. Mean-square integration
      4. 12.4. Mean-square differentiation
      5. 12.5. The Karhunen–Loève theorem
      6. 12.6. Wiener processes
        1. 12.6.1. Karhunen–Loève decomposition
        2. 12.6.2. Statistics of Wiener processes
      7. 12.7. Notions on weakly stationary continuous-time processes
        1. 12.7.1. Estimating the mean
        2. 12.7.2. Estimating the autocovariance
        3. 12.7.3. The case of a process observed at discrete instants
      8. 12.8. Exercises
    5. Chapter 13: Stochastic Integration and Diffusion Processes
      1. 13.1. Itô integral
      2. 13.2. Diffusion processes
      3. 13.3. Processes defined by stochastic differential equations and stochastic integrals
      4. 13.4. Notions on statistics for diffusion processes
      5. 13.5. Exercises
    6. Chapter 14: ARMA Processes
      1. 14.1. Autoregressive processes
      2. 14.2. Moving average processes
      3. 14.3. General ARMA processes
      4. 14.4. Non-stationary models
        1. 14.4.1. The Box-Cox transformation
        2. 14.4.2. Eliminating the trend by differentiation
        3. 14.4.3. Eliminating the seasonality
        4. 14.4.4. Introducing exogenous variables
      5. 14.5. Statistics of ARMA processes
        1. 14.5.1. Identification
        2. 14.5.2. Estimation
        3. 14.5.3. Verification
      6. 14.6. Multidimensional processes
      7. 14.7. Exercises
    7. Chapter 15: Prediction
      1. 15.1. Generalities
      2. 15.2. Empirical methods of prediction
        1. 15.2.1. The empirical mean
        2. 15.2.2. Exponential smoothing
        3. 15.2.3. Naive predictors
        4. 15.2.4. Trend adjustment
      3. 15.3. Prediction in the ARIMA model
      4. 15.4. Prediction in continuous time
      5. 15.5. Exercises
  7. Part 3: Supplement
    1. Chapter 16: Elements of Probability Theory
      1. 16.1. Measure spaces: probability spaces
      2. 16.2. Measurable functions: real random variables
      3. 16.3. Integrating real random variables
      4. 16.4. Random vectors
      5. 16.5. Independence
      6. 16.6. Gaussian vectors
      7. 16.7. Stochastic convergence
      8. 16.8. Limit theorems
  8. Appendix: Statistical Tables
    1. A1.1. Random numbers
    2. A1.2. Distribution function of the standard normal distribution
    3. A1.3. Density of the standard normal distribution
    4. A1.4. Percentiles (t_p) of Student’s distribution
    5. A1.5. Ninety-fifth percentiles of Fisher–Snedecor distributions
    6. A1.6. Ninety-ninth percentiles of Fisher–Snedecor distributions
    7. A1.7. Percentiles (χ²_p) of the χ² distribution with n degrees of freedom
    8. A1.8. Individual probabilities of the Poisson distribution
    9. A1.9. Cumulative probabilities of the Poisson distribution
    10. A1.10. Binomial coefficients C_n^k for n ≤ 30 and 0 ≤ k ≤ 7
    11. A1.11. Binomial coefficients C_n^k for n ≤ 30 and 8 ≤ k ≤ 15
  9. Bibliography
  10. Index