Regulators require banks to test their value at risk (VaR) systems regularly to ensure they are in working order. We review a few common tests: the precision test, which measures the precision of the VaR number given a chosen method and data set; the frequency back test, which is a requirement to ensure model “goodness” and is used by regulators to determine a bank’s multiplier for minimum capital; and the bunching test (or independence test), which checks that VaR exceedances are independent and identically distributed (i.i.d.); otherwise the VaR quantile understates what it is meant to measure. There are many more sophisticated statistical tests available for VaR; the interested reader can refer to Campbell (2005) for a good review.
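To make the frequency back test concrete, the sketch below counts days on which the loss exceeded the reported VaR and compares that count with the number expected at the chosen confidence level. The function name, the sign convention (VaR reported as a positive loss figure), and the binomial z-score approximation are illustrative assumptions, not prescriptions from the text.

```python
import math

def frequency_backtest(pnl, var, alpha=0.99):
    """Sketch of a frequency back test.

    pnl   : daily P&L figures (losses negative)
    var   : same-day VaR numbers, reported as positive loss amounts
    alpha : VaR confidence level (e.g. 0.99 for 99% VaR)

    Returns (observed exceedances, expected exceedances, z-score).
    """
    n = len(pnl)
    # A day is an exceedance when the loss is worse than -VaR.
    exceedances = sum(1 for p, v in zip(pnl, var) if p < -v)
    expected = (1 - alpha) * n
    # Binomial approximation: standard error of the exceedance count.
    se = math.sqrt(n * (1 - alpha) * alpha)
    z = (exceedances - expected) / se
    return exceedances, expected, z

# Toy usage: 100 days, VaR of 1.0 each day, 3 exceedances observed
pnl = [-1.5] * 3 + [0.1] * 97
var = [1.0] * 100
obs, exp, z = frequency_backtest(pnl, var, alpha=0.99)
```

With 100 observations at the 99% level we expect about one exceedance; a large positive z-score would flag the model for review, in the spirit of the regulatory multiplier regime described above.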
Precision is an important element in any scientific measurement. For example, when one reports the weight of an item in a lab experiment as 2.5 kg ± 0.2 kg, one really means the object’s weight lies between 2.3 and 2.7 kg. So it may come as a surprise that such error bands are seldom included in VaR reports. To appreciate why, let us first see how such an error band can be computed.
Since this book favors hsVaR as a basic method, we shall illustrate statistical bootstrapping, which resamples the empirical observations and thus makes no prior assumptions about the shape of the distribution. It involves the following steps: