SIMULATION-BASED BAYESIAN METHODS
In this chapter we investigate the idea of Bayesian estimation [1–13] using approximate sampling methods to obtain the desired solutions. We first motivate simulation-based Bayesian processors and then review the basics required to understand this powerful methodology. Next we develop the idea of simulation-based solutions using the Monte Carlo (MC) approach [14–21] and introduce importance sampling as a mechanism to implement this methodology from a generic perspective [22–28]. Finally, we consider the class of iterative processors founded on Markov chain concepts, leading to efficient techniques such as the foundational Metropolis-Hastings approach and the Gibbs sampler [29–37].
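To fix ideas before the formal development, the following is a minimal sketch (not from this text; all names and the choice of target quantity are illustrative) contrasting plain Monte Carlo with importance sampling. We estimate the Gaussian tail probability P(X > 3) for X ~ N(0, 1): plain MC draws from the target density directly, while importance sampling draws from a shifted proposal q = N(3, 1) and reweights each sample by the likelihood ratio p(x)/q(x).

```python
import math
import random

random.seed(1)
N = 100_000

# Plain Monte Carlo: sample from the target density p = N(0, 1) and
# count how often the rare event {x > 3} occurs.
plain = sum(1.0 for _ in range(N) if random.gauss(0.0, 1.0) > 3.0) / N

def log_normal_pdf(x, mu):
    """Log density of N(mu, 1) at x."""
    return -0.5 * (x - mu) ** 2 - 0.5 * math.log(2.0 * math.pi)

# Importance sampling: sample from the proposal q = N(3, 1), which places
# most of its mass on the event of interest, and correct by the weight
# w(x) = p(x) / q(x) so the estimator remains unbiased.
acc = 0.0
for _ in range(N):
    x = random.gauss(3.0, 1.0)  # draw from proposal q
    if x > 3.0:
        acc += math.exp(log_normal_pdf(x, 0.0) - log_normal_pdf(x, 3.0))
imp = acc / N

# Closed-form value from the standard normal CDF, for comparison.
exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))
print(f"exact ≈ {exact:.6f}  plain MC ≈ {plain:.6f}  IS ≈ {imp:.6f}")
```

With the same sample budget, the importance-sampling estimate typically lands much closer to the exact value than the plain MC count, since nearly every proposal draw contributes information about the tail event.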
Starting from Bayes’ rule and making assertions about the underlying probability distributions enables us to develop reasonable approaches to the design of approximate Bayesian processors. Given “explicit” distributions, it is possible to develop analytic expressions for the desired posterior distribution. Once the posterior is estimated, the Bayesian approach allows us to make inferences based on this distribution and its associated statistics (e.g., mode, mean, median). For instance, in the case of a linear Gauss-Markov (GM) model, calculation of the posterior distribution leads to the optimal minimum variance solution. But again, this result rests entirely on the assertion that the dynamic processes were strictly constrained ...
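The linear-Gaussian case can be made concrete with a scalar sketch (a hypothetical illustration with made-up names and numbers, not the text's notation): take a prior x ~ N(m0, p0) and a measurement y = x + v with v ~ N(0, r). Bayes' rule then yields a Gaussian posterior in closed form, and its mean is the minimum-variance (Kalman-style) estimate.

```python
def gaussian_posterior(m0, p0, y, r):
    """Conjugate Bayes update for a scalar linear-Gaussian model.

    Prior:       x ~ N(m0, p0)
    Likelihood:  y | x ~ N(x, r)
    Returns the posterior mean and variance of x given y.
    """
    k = p0 / (p0 + r)          # gain: weighting of prior vs. measurement
    mean = m0 + k * (y - m0)   # posterior mean = minimum-variance estimate
    var = (1.0 - k) * p0       # posterior variance, always reduced below p0
    return mean, var

# Example: diffuse prior (p0 = 4) and an accurate measurement (r = 1)
# pull the estimate strongly toward the data.
mean, var = gaussian_posterior(m0=0.0, p0=4.0, y=2.0, r=1.0)
print(mean, var)  # mean 1.6, variance 0.8 (up to float rounding)
```

Because every distribution here is Gaussian, the posterior is available analytically; the simulation-based methods of this chapter target the many models where no such closed form exists.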