In Chapter 7 we discussed output analysis, and we saw that an obvious way to improve the accuracy of an estimate X̄(*n*) based on a sample of *n* replications *X*_{i}, *i* = 1, …, *n*, is to increase the sample size *n*, since Var(X̄(*n*)) = Var(*X*_{i})/*n*. However, we have also seen that the width of a confidence interval, assuming that replications are genuinely i.i.d., decreases according to a square-root law involving √*n*, which is rather bad news. Increasing the number of replications is less and less effective, and this brute-force strategy may result in a remarkable computational burden. A less obvious strategy is to reduce Var(*X*_{i}). At first sight, this seems like cheating, as we have to change the estimator in some way, possibly introducing bias. The variance reduction strategies that we explore in this chapter aim at improving the efficiency of Monte Carlo methods, sometimes quite dramatically, without introducing any bias. This means that, given a required accuracy, we may reduce the computational burden needed to attain it; or, going the other way around, we may improve accuracy for a given computational budget. In Chapter 9 we also consider another ...
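The square-root law can be seen empirically. The following is a minimal sketch (not from the text): it estimates E[*U*²] for *U* ~ Uniform(0, 1) by crude Monte Carlo and reports the normal-based 95% confidence interval half-width for increasing sample sizes; the example problem and the function name `ci_halfwidth` are illustrative assumptions.

```python
import math
import random

def ci_halfwidth(n, seed=42):
    """Estimate E[U^2], U ~ Uniform(0,1), from n i.i.d. replications
    and return the approximate (normal-based) 95% CI half-width."""
    rng = random.Random(seed)
    xs = [rng.random() ** 2 for _ in range(n)]
    mean = sum(xs) / n
    # Sample variance with the n-1 (unbiased) denominator
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    # Half-width shrinks like 1/sqrt(n): quadrupling n halves it
    return 1.96 * math.sqrt(var / n)

for n in (1_000, 4_000, 16_000):
    print(n, ci_halfwidth(n))
```

Each fourfold increase in *n* roughly halves the half-width, which is exactly why brute-force replication becomes increasingly expensive, and why reducing Var(*X*_{i}) itself is attractive.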
