Christopher L. Culp, Merton H. Miller and Andrea M. P. Neves
Value at risk (“VAR”) is now viewed by many as indispensable ammunition in any serious corporate risk manager's arsenal. VAR is a method of measuring the financial risk of an asset, portfolio, or exposure over some specified period of time. Its attraction stems from its ease of interpretation as a summary measure of risk and consistent treatment of risk across different financial instruments and business activities. VAR is often used as an approximation of the “maximum reasonable loss” a company can expect to realize from all its financial exposures.
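To make the idea concrete, the paragraph above can be illustrated with a minimal sketch of two common ways a VAR number is computed. The function names, the sample return series, and the choice of a 95% confidence level are illustrative assumptions, not anything prescribed by the text; real implementations differ in return modeling, horizon scaling, and confidence level.

```python
from statistics import NormalDist, mean, stdev

def parametric_var(returns, portfolio_value, confidence=0.95, horizon_days=1):
    """Variance-covariance VAR: assumes returns are normally distributed.
    Returns the loss (a positive number) that should be exceeded only
    (1 - confidence) of the time over the horizon."""
    mu = mean(returns)
    sigma = stdev(returns)
    z = NormalDist().inv_cdf(1 - confidence)      # e.g. about -1.645 at 95%
    worst_return = mu + z * sigma * horizon_days ** 0.5
    return -worst_return * portfolio_value

def historical_var(returns, portfolio_value, confidence=0.95):
    """Historical-simulation VAR: the empirical loss quantile of past returns,
    with no distributional assumption."""
    ordered = sorted(returns)
    idx = int((1 - confidence) * len(ordered))    # index of the tail quantile
    return -ordered[idx] * portfolio_value

# Hypothetical daily returns for a $1 million portfolio.
rets = [-0.02, -0.01, 0.0, 0.01, 0.02] * 20
print(parametric_var(rets, 1_000_000))            # one-day 95% VAR, parametric
print(historical_var(rets, 1_000_000))            # one-day 95% VAR, historical
```

Both numbers answer the same question posed in the text, a "maximum reasonable loss" at a stated confidence level and horizon, which is what makes VAR comparable across different instruments and business lines.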
VAR has received widespread accolades from industry and regulators alike.1 Numerous organizations have found that the practical uses and benefits of VAR make it a valuable decision support tool in a comprehensive risk management process. Despite its many uses, however, VAR – like any statistical aggregate – is subject to the risk of misinterpretation and misapplication. Indeed, most problems with VAR seem to arise from what a firm does with a VAR measure rather than from the actual computation of the number.
Why a company manages risk affects how a company should manage – and, hence, should measure – its risk.2 In that connection, we examine the four “great derivatives disasters” of 1993–1995 – Procter & Gamble, Barings, Orange County, and Metallgesellschaft – and evaluate how ex ante VAR measurements likely would have affected those situations. ...