Chapter 1: First steps

1.1 Preliminaries

This book focuses on a class of random evolutions, in discrete time (by successive steps) on a discrete state space (finite or countable, with isolated elements), which satisfy a fundamental assumption called the Markov property. This property can be described informally as follows: the evolution “forgets” its past and is “regenerated” at each step, the only information about its past that it retains for its future evolution being its present state. The probabilistic description of such an evolution requires

  • a law (probability measure) for drawing its initial state and
  • a family of laws, indexed by the state space, for iteratively drawing its state at the “next future instant” given its “present state.”

Such a random evolution will be called a Markov chain.
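
To make these two ingredients concrete, here is a minimal simulation sketch in Python. It is not taken from the book: the function simulate_chain and the names init and P are illustrative, and it assumes a finite state space relabeled 0, ..., d−1, a vector init giving the law of the initial state, and a row-stochastic matrix P whose row x is the law of the next state given the present state x.

```python
import numpy as np

def simulate_chain(init, P, n_steps, rng=None):
    """Simulate one path of a Markov chain on {0, ..., d-1}.

    init    : length-d array, law of the initial state X_0
    P       : d x d row-stochastic matrix; P[x] is the law of X_{k+1} given X_k = x
    n_steps : number of transitions to simulate
    """
    rng = np.random.default_rng() if rng is None else rng
    d = len(init)
    path = np.empty(n_steps + 1, dtype=int)
    # Draw the initial state from its law.
    path[0] = rng.choice(d, p=init)
    # Iteratively draw the next state from the row of P indexed by the present state:
    # the past is forgotten, only the present state matters.
    for k in range(n_steps):
        path[k + 1] = rng.choice(d, p=P[path[k]])
    return path

# Example: a two-state chain started from the uniform law.
init = np.array([0.5, 0.5])
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(simulate_chain(init, P, 10))
```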

Precise definitions can be found in the Appendix, Section A.3, but we now give the probabilistic framework. A probability space $(\Omega, \mathcal{A}, \mathbb{P})$ will be considered throughout. When the state space $\mathcal{V}$ is discrete, its measurable structure is usually given by the collection of all its subsets, and all functions with values in $\mathcal{V}$ are assumed to be measurable.

A random variable (r.v.) with values in a measurable state space $\mathcal{V}$ is a measurable function $X : \Omega \to \mathcal{V}$.
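
In the discrete case, measurability can be checked on singletons. The following display spells this out, using the notation $(\Omega, \mathcal{A}, \mathbb{P})$ and $\mathcal{V}$ introduced above; it is a short sketch of the standard argument, not a quotation from the book.

```latex
% With the sigma-field of all subsets on the discrete space V, measurability of
% X : Omega -> V reduces to preimages of singletons, since any A in V is the
% countable union of the singletons of its elements.
X \text{ is measurable}
\iff X^{-1}(A) \in \mathcal{A} \ \text{ for every } A \subset \mathcal{V}
\iff \{X = x\} := X^{-1}(\{x\}) \in \mathcal{A} \ \text{ for every } x \in \mathcal{V}.
```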
