Let's suppose that we want to obtain the full joint probability for a Bayesian network, P(x1, x2, x3, ..., xN); however, the number of variables is large and there's no easy way to solve the problem in closed form. Moreover, imagine that we would like to obtain a marginal distribution, such as P(x2); to do so, we would need to integrate the full joint probability over all the other variables, which is an even harder task. Gibbs sampling allows us to approximate all the marginal distributions through an iterative process. If we have N variables, the algorithm proceeds with the following steps:
- Initialize the variable NIterations
- Initialize a vector S with shape (N, NIterations)
- Randomly initialize x1(0), x2(0), ..., xN(0) (the superscript index refers to the iteration)
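
As a minimal, self-contained illustration of this scheme (not the exact setup above), consider a zero-mean bivariate Gaussian with correlation rho, whose full conditionals are themselves Gaussian; the function name and parameters below are chosen for the example:

```python
import numpy as np

def gibbs_bivariate_gaussian(rho=0.6, n_iterations=5000, seed=0):
    """Gibbs sampling for a zero-mean bivariate Gaussian with correlation rho.

    Each full conditional is Gaussian:
        x1 | x2 ~ N(rho * x2, 1 - rho**2)
        x2 | x1 ~ N(rho * x1, 1 - rho**2)
    """
    rng = np.random.default_rng(seed)
    # S has shape (N, NIterations): one row per variable, one column per iteration
    S = np.empty((2, n_iterations))
    x1, x2 = rng.normal(size=2)  # random initialization x1(0), x2(0)
    for t in range(n_iterations):
        # Sample each variable in turn from its full conditional,
        # always using the most recent values of the other variables
        x1 = rng.normal(rho * x2, np.sqrt(1.0 - rho ** 2))
        x2 = rng.normal(rho * x1, np.sqrt(1.0 - rho ** 2))
        S[0, t], S[1, t] = x1, x2
    return S

samples = gibbs_bivariate_gaussian()
burn_in = 500  # discard the initial, non-stationary part of the chain
print(np.corrcoef(samples[:, burn_in:])[0, 1])  # should approach rho = 0.6
```

After discarding a burn-in period, each row of S approximates samples from the corresponding marginal distribution, so quantities like means or histograms of P(x2) can be estimated directly from it.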