17 Brief on Type I Versus Type II Errors


Note: You may want to reread lesson 1 at this point.

Imagine testing many, many different coins, whose balance is unknown and may differ from coin to coin. A type I error is mistakenly rejecting the null hypothesis when it is actually true. The probability threshold we use for type I error is alpha. When testing a coin for fairness, a type I error would be judging a fair coin to be unfair. Type I error is relatively straightforward to control in practice because we set the alpha level ourselves, and we are only concerned with the fate of the fair coins.
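A small simulation can make this concrete. The sketch below (in Python, with hypothetical choices of 100 flips per coin and alpha = 0.05) tests many truly fair coins with an exact two-sided binomial test and counts how often the null hypothesis is rejected; the rejection rate should come out near alpha (slightly below it, because the binomial test is discrete).

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)

n_flips = 100      # flips per coin (assumed for illustration)
alpha = 0.05       # type I error threshold
n_coins = 10_000   # number of fair coins tested

rejections = 0
for _ in range(n_coins):
    heads = rng.binomial(n_flips, 0.5)              # a truly fair coin
    p_value = binomtest(heads, n_flips, 0.5).pvalue  # two-sided test of fairness
    if p_value < alpha:                              # reject "the coin is fair"
        rejections += 1                              # a type I error

print(f"Type I error rate: {rejections / n_coins:.3f}  (alpha = {alpha})")
```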

A type II error is mistakenly failing to reject the null hypothesis when it is actually false. With a coin, that would be judging an unfair coin to be fair. The probability of a type II error is called beta. We can easily determine beta if we want to test for a certain specific degree of unfairness, such as a 30% chance of heads. However, it is difficult to estimate beta in general, because then we are concerned with all the variously unbalanced unfair coins. You can imagine that for unfair coins with a 51% chance of coming up heads, we'll make many type II errors. On the other hand, for unfair coins with a 1% chance of coming up heads, we'll make very few type II errors. What level of unfairness are we dealing with for each coin? We often don't know for sure. We have to make specific ...
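To see how beta depends on the actual degree of unfairness, here is a minimal sketch (again assuming 100 flips per coin and alpha = 0.05, both hypothetical choices) that simulates unfair coins at several true heads probabilities and estimates beta as the fraction of tests that fail to reject. A coin with a 51% chance of heads is almost never caught, while a coin with a 1% chance of heads almost always is.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)

n_flips = 100      # flips per coin (assumed for illustration)
alpha = 0.05
n_coins = 10_000

# Several "true" levels of unfairness, including the 51% and 1% examples
for p_heads in (0.51, 0.40, 0.30, 0.01):
    misses = 0
    for _ in range(n_coins):
        heads = rng.binomial(n_flips, p_heads)           # a genuinely unfair coin
        p_value = binomtest(heads, n_flips, 0.5).pvalue  # test against fairness
        if p_value >= alpha:                             # fail to reject "fair"
            misses += 1                                  # a type II error
    print(f"true P(heads) = {p_heads:.2f}  ->  estimated beta = {misses / n_coins:.3f}")
```

The estimated beta for each true probability is only valid for that specific degree of unfairness, which is exactly why beta is hard to pin down when the coins' true balances are unknown.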
