Markov Chains and Dependability Theory

Book Description

Dependability metrics are omnipresent in every engineering field, ranging from simple measures to more complex ones combining performance and dependability aspects of systems. This book presents the mathematical basis for analyzing these metrics in the most widely used framework, Markov models, describing both basic results and specialized techniques. The authors first present both discrete- and continuous-time Markov chains before focusing on dependability measures, whose analysis requires studying Markov chains on subsets of states representing different levels of user satisfaction with the modelled system. Topics covered include Markovian state lumping, the analysis of sojourn times in subsets of states of Markov chains, the analysis of most dependability metrics, the fundamentals of performability analysis, and bounding and simulation techniques designed to evaluate dependability measures. The book is of interest to graduate students and researchers in all areas of engineering where the concepts of lifetime, repair duration, availability, reliability and risk are important.
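
As a minimal illustration of the kind of metrics the book studies (a sketch of standard textbook calculations, not code or rates taken from the book), the snippet below builds a two-state continuous-time Markov chain for a repairable component, with hypothetical failure rate lam and repair rate mu, and computes its steady-state availability and its point availability A(t):

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-3, 1e-1   # hypothetical failure and repair rates (per hour)

# Infinitesimal generator of a two-state CTMC: state 0 = "up", state 1 = "down".
Q = np.array([[-lam, lam],
              [  mu, -mu]])

# Steady-state availability: solve pi Q = 0 with the entries of pi summing to 1.
A_mat = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A_mat, b, rcond=None)
steady_availability = pi[0]          # closed form: mu / (lam + mu)

# Point availability A(t): probability of being "up" at time t,
# starting in the "up" state, via the matrix exponential exp(Qt).
t = 24.0
p0 = np.array([1.0, 0.0])
A_t = (p0 @ expm(Q * t))[0]

print(f"steady-state availability = {steady_availability:.6f}")
print(f"point availability A({t}) = {A_t:.6f}")
```

More elaborate metrics treated in the book (interval availability, performability, mean time to failure of multicomponent systems) extend this basic setup to occupation times, rewards and absorbing states on larger state spaces.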

Table of Contents

  1. Cover
  2. Half-title page
  3. Title
  4. Copyright
  5. Contents
  6. 1 Introduction
    1. 1.1 Preliminary words
    2. 1.2 Dependability and performability models
      1. 1.2.1 Basic dependability metrics
      2. 1.2.2 More complex metrics
      3. 1.2.3 Performability
      4. 1.2.4 Some general definitions in dependability
    3. 1.3 Considering subsets of the state space
      1. 1.3.1 Aggregation
      2. 1.3.2 Lumping and performance evaluation
      3. 1.3.3 Lumping and dependability models
      4. 1.3.4 Using lumping to evaluate numerical procedures
    4. 1.4 The contents of this book
  7. 2 Discrete-time Markov chains
    1. 2.1 Definitions and properties
    2. 2.2 Strong Markov property
    3. 2.3 Recurrent and transient states
    4. 2.4 Visits to a fixed state
    5. 2.5 Invariant measures and irreducible Markov chains
    6. 2.6 Aperiodic Markov chains
    7. 2.7 Convergence to steady-state
    8. 2.8 Ergodic theorem
    9. 2.9 Absorbing Markov chains
      1. 2.9.1 Application to irreducible Markov chains
      2. 2.9.2 Computational aspects
  8. 3 Continuous-time Markov chains
    1. 3.1 Definitions and properties
    2. 3.2 Transition function matrix
    3. 3.3 Backward and forward equations
    4. 3.4 Uniformization
    5. 3.5 Limiting behavior
    6. 3.6 Recurrent and transient states
      1. 3.6.1 General case
      2. 3.6.2 Irreducible case
    7. 3.7 Ergodic theorem
    8. 3.8 Absorbing Markov chains
  9. 4 State aggregation
    1. 4.1 State aggregation in irreducible DTMC
      1. 4.1.1 Introduction and notation
      2. 4.1.2 Preliminaries
      3. 4.1.3 Strong and weak lumpability
      4. 4.1.4 Characterization of weak lumpability
    2. 4.2 State aggregation in absorbing DTMC
      1. 4.2.1 Quasi-stationary distribution
      2. 4.2.2 Weak lumpability
      3. 4.2.3 Link with the irreducible case
    3. 4.3 State aggregation in CTMC
  10. 5 Sojourn times in subsets of states
    1. 5.1 Successive sojourn times in irreducible DTMC
    2. 5.2 Successive sojourn times in irreducible CTMC
    3. 5.3 Pseudo-aggregation
      1. 5.3.1 The pseudo-aggregated process
      2. 5.3.2 Pseudo-aggregation and sojourn times: discrete-time case
      3. 5.3.3 Pseudo-aggregation and sojourn times: continuous-time case
    4. 5.4 The case of absorbing Markov chains
      1. 5.4.1 Discrete time
      2. 5.4.2 Continuous time
      3. 5.4.3 An illustrative example
  11. 6 Occupation times of subsets of states – interval availability
    1. 6.1 The discrete-time case
    2. 6.2 Order statistics and Poisson process
    3. 6.3 The continuous-time case
  12. 7 Linear combination of occupation times – performability
    1. 7.1 Backward and forward equations
    2. 7.2 Solution
    3. 7.3 Examples
      1. 7.3.1 A two-state example
      2. 7.3.2 A three-state example
    4. 7.4 Algorithmic aspects
      1. 7.4.1 Numerical examples
  13. 8 Stationarity detection
    1. 8.1 Point availability
      1. 8.1.1 The classical uniformization method
      2. 8.1.2 Stationarity detection
      3. 8.1.3 The new algorithm
    2. 8.2 Expected interval availability analysis
      1. 8.2.1 Stationarity detection for the expected interval availability
    3. 8.3 Numerical example
    4. 8.4 Extension to performability analysis
    5. 8.5 Conclusions
  14. 9 Simulation techniques
    1. 9.1 The standard Monte Carlo method
      1. 9.1.1 Standard estimation of the MTTF
      2. 9.1.2 Standard estimation of the reliability at t, R(t)
    2. 9.2 Estimating the MTTF of a multicomponent repairable system
      1. 9.2.1 The rarity parameter
      2. 9.2.2 Importance sampling schemes
      3. 9.2.3 Balanced Importance Sampling methods
      4. 9.2.4 On the selection of the appropriate technique to use
      5. 9.2.5 Numerical illustrations
      6. 9.2.6 Conclusions
    3. 9.3 Estimating the reliability of the system at time t
      1. 9.3.1 Pathset-based conditioning
      2. 9.3.2 Numerical results
      3. 9.3.3 Conclusions
  15. 10 Bounding techniques
    1. 10.1 First part: Bounding the mean asymptotic reward
      1. 10.1.1 Related work
      2. 10.1.2 The problem
      3. 10.1.3 Auxiliary models
      4. 10.1.4 Aggregation of states
      5. 10.1.5 Bounding chain
      6. 10.1.6 A more general method
      7. 10.1.7 Illustration
      8. 10.1.8 Conclusions
    2. 10.2 Second part: Bounding the mean time to absorption
      1. 10.2.1 Exact analysis of the ECRA metric
      2. 10.2.2 Fast and slow transitions and states
      3. 10.2.3 Uniformization
      4. 10.2.4 Decomposing the cumulated reward
      5. 10.2.5 Stochastic complement
      6. 10.2.6 Bounding procedure
      7. 10.2.7 Examples
      8. 10.2.8 Conclusions
  16. References
  17. Index