CONFIDENCE INTERVALS

If p-values are misleading, what are we to use in their place? Jones [1955, p. 407] was among the first to suggest that

an investigator would be misled less frequently and would be more likely to obtain the information he seeks were he to formulate his experimental problems in terms of the estimation of population parameters, with the establishment of confidence intervals about the estimated values, rather than in terms of a null hypothesis against all possible alternatives.

See also Gardner and Altman [1996] and Poole [2001].

Confidence intervals can be derived from the rejection regions of our hypothesis tests, whether the latter are based on parametric or nonparametric methods. Suppose A(θ′) is a 1 − α level acceptance region for testing the hypothesis θ = θ′; that is, we accept the hypothesis if our test statistic T belongs to the acceptance region A(θ′) and reject it otherwise. Let S(X) consist of all the parameter values θ* for which T[X] belongs to the acceptance region A(θ*). Then S(X) is a 1 − α level confidence interval for θ based on the set of observations X = {x1, x2, … , xn}.

When θ = θ0, the probability that S(X) includes θ0 equals Pr{T[X] ∈ A(θ0) when θ = θ0}, which is at least 1 − α.
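The inversion described above can be sketched in code. As a minimal illustration (not taken from the book), suppose we observe k successes in n binomial trials and want a confidence interval for the success probability θ. For each candidate θ* on a grid, we check whether an equal-tailed exact test would accept the hypothesis θ = θ*; the set S(X) of accepted values forms the interval. The function names and the grid-scan approach here are illustrative choices.

```python
import math


def binom_pmf(k, n, p):
    """Binomial probability of exactly k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)


def accepts(k, n, p, alpha=0.05):
    """Equal-tailed exact test of theta = p: accept when the observed
    count k falls inside the 1 - alpha acceptance region A(p)."""
    lower_tail = sum(binom_pmf(i, n, p) for i in range(0, k + 1))
    upper_tail = sum(binom_pmf(i, n, p) for i in range(k, n + 1))
    return min(lower_tail, upper_tail) > alpha / 2


def confidence_interval(k, n, alpha=0.05, grid=2001):
    """S(X): all theta* whose acceptance region contains T[X] = k,
    found by scanning a grid of candidate values in [0, 1]."""
    accepted = [j / (grid - 1) for j in range(grid)
                if accepts(k, n, j / (grid - 1), alpha)]
    return min(accepted), max(accepted)
```

For example, `confidence_interval(7, 20)` returns an interval of roughly (0.15, 0.59) for 7 successes in 20 trials, matching the familiar exact (Clopper–Pearson) interval obtained by the same test-inversion argument.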

As our confidence level 1 − α increases, from 90% to 95% for example, the width of the resulting confidence interval also increases. Thus, a 95% confidence interval is wider than a 90% confidence interval.
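The widening is easy to verify numerically. As a small sketch (my example, not the book's), consider the textbook normal-theory interval x̄ ± z·σ/√n with known standard deviation; raising the confidence level raises the critical value z and hence the half-width.

```python
from statistics import NormalDist


def normal_ci(mean, sd, n, confidence):
    """Two-sided z-interval: mean +/- z * sd / sqrt(n), where z is the
    (1 - alpha/2) quantile of the standard normal distribution."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    half_width = z * sd / n ** 0.5
    return mean - half_width, mean + half_width


# Same data (mean 10, sd 2, n = 25), two confidence levels:
lo90, hi90 = normal_ci(10.0, 2.0, 25, 0.90)   # z ~ 1.645
lo95, hi95 = normal_ci(10.0, 2.0, 25, 0.95)   # z ~ 1.960
# The 95% interval is strictly wider than the 90% interval.
```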

By the same process, the rejection regions of our hypothesis tests can be derived from confidence intervals: reject the hypothesis θ = θ′ at level α whenever θ′ lies outside the 1 − α confidence interval S(X).
