Gibbs sampling

Let's suppose that we want to obtain the full joint probability for a Bayesian network, P(x1, x2, x3, ..., xN); however, the number of variables is large and there's no way to solve the problem in closed form. Moreover, imagine that we would like to obtain a marginal distribution, such as P(x2); to do so, we would have to integrate the full joint probability over all the other variables, which is even harder. Gibbs sampling allows us to approximate all the marginal distributions with an iterative sampling process. If we have N variables, the algorithm proceeds with the following steps:

  1. Initialize the variable NIterations
  2. Initialize a matrix S with shape (N, NIterations) to store the samples
  3. Randomly initialize x1(0), x2(0), ..., xN(0) (the superscript index refers to the iteration)

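From that starting point, the standard Gibbs procedure iterates: at each step t, every variable xi(t) is sampled from its full conditional distribution, given the most recent values of all the other variables, and the sample is stored in S; after an initial burn-in period, the rows of S approximate the marginal distributions. The following is a minimal sketch of this procedure in Python, assuming a standard bivariate Gaussian with correlation rho as the target distribution (a hypothetical example, chosen because both full conditionals are known in closed form); the names n_iterations, burn_in, and rho are illustrative:

```python
# Minimal Gibbs sampler sketch for a standard bivariate Gaussian
# with correlation rho. Both full conditionals are known in closed form:
#   x1 | x2 ~ N(rho * x2, 1 - rho^2)
#   x2 | x1 ~ N(rho * x1, 1 - rho^2)
import numpy as np

rng = np.random.default_rng(0)

n_iterations = 10_000   # the NIterations variable from the list above
burn_in = 1_000         # early samples to discard before convergence
rho = 0.8               # correlation of the (assumed) target distribution

# Step 2: sample matrix S with shape (N, NIterations), here N = 2
S = np.zeros((2, n_iterations))

# Step 3: random initialization of x1(0) and x2(0)
x1, x2 = rng.normal(size=2)

for t in range(n_iterations):
    # Sample each variable from its full conditional, always
    # conditioning on the most recent value of the other variable
    x1 = rng.normal(loc=rho * x2, scale=np.sqrt(1.0 - rho**2))
    x2 = rng.normal(loc=rho * x1, scale=np.sqrt(1.0 - rho**2))
    S[:, t] = (x1, x2)

# The empirical distribution of the post-burn-in samples approximates
# the marginals; here both P(x1) and P(x2) should be close to N(0, 1)
print("mean(x2) ~", S[1, burn_in:].mean())   # expected ~ 0
print("std(x2)  ~", S[1, burn_in:].std())    # expected ~ 1
```

Discarding the burn-in samples is a common heuristic: the early part of the chain still depends on the random initialization and isn't yet distributed according to the target.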