Part I

Introduction

The theory of statistical hypothesis testing was founded roughly one hundred years ago by the Britons Ronald Aylmer Fisher and Egon Sharpe Pearson and the Pole Jerzy Neyman. Nowadays it may seem that there is a single, unified theory for testing statistical hypotheses, but the opposite is true: on the one hand Fisher developed the theory of significance testing, and on the other hand Neyman and Pearson developed the theory of hypothesis testing.

Whereas in Fisher's theory the formulation of a null hypothesis is sufficient, the Neyman–Pearson theory demands an alternative hypothesis as well. This opens the door to calculating error probabilities of two kinds, namely of a false rejection (type I error) and of a false acceptance (type II error) of the null hypothesis. It also leads to the well-known Neyman–Pearson lemma, which helps us find the best critical region for testing a simple null hypothesis against a simple alternative. The largest difference between the two schools, however, lies in the Fisherian measure of evidence (the p-value) versus the Neyman–Pearson error rate (α).
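To make the two error probabilities concrete, here is a minimal R sketch of the test the Neyman–Pearson lemma singles out for a simple normal testing problem. The hypotheses, sample size, and α below are illustrative assumptions, not values from the text.

## Illustrative sketch (values are assumptions): X_1, ..., X_n i.i.d.
## N(mu, 1); test H0: mu = 0 against the simple alternative H1: mu = 1
## with type I error rate alpha = 0.05. By the Neyman-Pearson lemma the
## best test rejects H0 for large values of the sample mean.
n     <- 25
alpha <- 0.05

## Critical value: reject H0 if mean(x) > c.crit,
## where mean(x) ~ N(mu, 1/n)
c.crit <- qnorm(1 - alpha, mean = 0, sd = 1 / sqrt(n))

## Type II error probability: P(mean(x) <= c.crit | mu = 1)
beta <- pnorm(c.crit, mean = 1, sd = 1 / sqrt(n))

c.crit  # approx. 0.329
beta    # approx. 0.0004, the probability of falsely accepting H0

Fixing α determines the critical region; the type II error probability β then follows from the distribution of the test statistic under the alternative.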

With the Neyman–Pearson theory the error rate α is fixed and must be specified before performing the test. Within the Fisherian context the p-value is calculated from the value of the test statistic as a quantile of the distribution of the test statistic and ...
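As a small illustration of the Fisherian p-value, the following R sketch computes the two-sided p-value of a one-sample t-test directly from the tail of the t-distribution and checks it against R's built-in t.test. The data vector and the null value mu = 0 are made up for illustration only.

## Illustrative sketch (data are made up): two-sided one-sample t-test
## of H0: mu = 0. The p-value is the tail probability of the
## t-distribution with n - 1 degrees of freedom beyond the observed
## value of the test statistic.
x <- c(1.2, -0.4, 0.8, 1.5, 0.3, 0.9, -0.1, 1.1)
n <- length(x)

t.stat  <- mean(x) / (sd(x) / sqrt(n))        # observed test statistic
p.value <- 2 * pt(-abs(t.stat), df = n - 1)   # two-sided tail probability

p.value
t.test(x, mu = 0)$p.value   # agrees with the built-in test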
