Chapter 2: Prior Knowledge, Parameter Uncertainty, and Estimation

Bayesian probability treats data as known conditioning information and describes the unknown parameters of statistical models with probability distributions. Classical statistics follows the opposite approach, treating data as random samples from a population distribution and parameters as fixed constants known only up to sampling error. Whereas Bayesian probability regards parameter uncertainty as irreducible and carries it forward explicitly, classical statistics generally discards sampling error in estimates once any hypothesis tests of interest have been conducted. An important dimension of model risk is therefore lost in the classical approach.
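
The contrast can be made concrete with a small numerical sketch. The example below is illustrative and not drawn from the book; the data, the known volatility sigma, and the normal prior N(m0, s0^2) are all assumptions chosen so that the posterior has a closed (conjugate) form. The classical route reduces the parameter to a point estimate with a standard error, while the Bayesian route returns a full distribution for the same parameter.

```python
# Minimal sketch (assumed example): estimating the mean of normally
# distributed returns when the volatility sigma is treated as known.
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.02                               # assumed known volatility
data = rng.normal(loc=0.005, scale=sigma, size=50)
n, xbar = len(data), data.mean()

# Classical view: mu is a fixed constant; the estimate carries sampling
# error summarized by a standard error, typically set aside after testing.
se = sigma / np.sqrt(n)
print(f"Classical: mu_hat = {xbar:.4f}, standard error = {se:.4f}")

# Bayesian view: mu is a random variable. With a normal prior N(m0, s0^2)
# and known sigma, the posterior is also normal (conjugate update), so
# parameter uncertainty survives as a distribution rather than a footnote.
m0, s0 = 0.0, 0.01                         # assumed prior mean and std. dev.
post_var = 1.0 / (1.0 / s0**2 + n / sigma**2)
post_mean = post_var * (m0 / s0**2 + n * xbar / sigma**2)
print(f"Bayesian: posterior mu ~ N({post_mean:.4f}, {np.sqrt(post_var):.4f}^2)")
```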

Bayesian parameter estimates are also distinguished by the incorporation of prior information about parameter values through a prior probability distribution. Prior information matters in classical statistics as well, but it enters through hypothesis tests rather than prior distributions. To compare how the two approaches handle prior knowledge, we must examine the process of hypothesis testing closely. We will show that classical hypothesis tests are inadequate vehicles for introducing useful prior knowledge, and that their neglect of prior knowledge leads to inconsistent decisions and uncontrolled error rates. Thus, not only are classical estimates misleading as to their precision and reliability, but they also carry less information than their Bayesian counterparts.
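
To see the two routes for prior information side by side, consider the following hedged sketch (again illustrative, not from the book; the data, the significance level, and the prior N(0, 0.5^2) are assumptions). The same belief that mu = 0 becomes a null hypothesis in the classical route, yielding only a binary accept/reject, while in the Bayesian route it becomes a prior distribution that shrinks the estimate and is itself updated by the data.

```python
# Hedged sketch (assumed example): one prior belief, mu = 0, expressed
# two ways -- as a null hypothesis and as a prior distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma, n = 1.0, 25                         # assumed known volatility, sample size
data = rng.normal(loc=0.3, scale=sigma, size=n)
xbar = data.mean()

# Classical route: two-sided z-test of H0: mu = 0 at the 5% level.
# The prior belief is consumed by the test and reduced to a yes/no answer.
z = xbar / (sigma / np.sqrt(n))
p_value = 2 * (1 - stats.norm.cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_value:.3f} ->",
      "reject H0" if p_value < 0.05 else "fail to reject H0")

# Bayesian route: encode the same belief as a prior N(0, 0.5^2) and update.
# The belief is weighed against the data and carried forward as a posterior.
m0, s0 = 0.0, 0.5
post_var = 1.0 / (1.0 / s0**2 + n / sigma**2)
post_mean = post_var * (m0 / s0**2 + n * xbar / sigma**2)
print(f"posterior: mu ~ N({post_mean:.3f}, {np.sqrt(post_var):.3f}^2)")
```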

The problem of hypothesis testing ...
