6

ESTIMATION

In Section 3.8, we modeled the number of home runs with a Poisson distribution. There, we used the sample mean as an estimate of the parameter λ, the expected value of the Poisson random variable. Elsewhere, we have used the sample mean as an estimate for the true population mean and the sample proportion as an estimate of the true population proportion. Intuitively, these all seem to be very reasonable choices. In this chapter, we will consider more rigorously the nature of these choices and consider some general procedures for estimating parameters. In addition, we will consider properties that we would want an estimator to have.
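The estimate described above can be sketched in a few lines. This is a minimal illustration assuming made-up counts (they are not the home-run data from Section 3.8): for a Poisson model, the sample mean of the observations serves as the estimate of λ.

```python
# Hypothetical counts for illustration only (not the book's home-run data).
counts = [2, 0, 1, 3, 1, 2, 0, 1]

# For a Poisson model, the sample mean is the usual estimate of lambda,
# the expected value of the Poisson random variable.
lam_hat = sum(counts) / len(counts)
print(lam_hat)  # 1.25
```

The same one-line computation gives the estimate for any set of observed counts; only the data change.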

6.1 MAXIMUM LIKELIHOOD ESTIMATION

We begin with a very general procedure, maximum likelihood estimation (MLE). This procedure has a number of nice properties, of which we mention one now: the answers it gives are sensible, typically agreeing with intuition, and it never gives impossible answers.

Suppose a friend tells you she has a total of 25 chocolate chip or oatmeal raisin cookies in a bag. She also tells you that the number of chocolate chip cookies is either 2 or 20. If you draw a cookie at random from the bag and see that it is chocolate chip, which do you think is more likely to be the truth: that there are 2 chocolate chip cookies, or 20? Based solely on your one data point (the chocolate chip cookie), it seems that 20 chocolate chip cookies is more likely than 2. This is the idea behind maximum likelihood estimation: choose the value of the parameter that makes the observed data most likely.
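The cookie reasoning can be made concrete by computing the probability of the observed draw under each hypothesized count. This is a minimal sketch using the numbers from the example above (25 cookies total; either 2 or 20 are chocolate chip):

```python
def likelihood(n_choc, n_total=25):
    """Probability of drawing a chocolate chip cookie at random
    when n_choc of the n_total cookies are chocolate chip."""
    return n_choc / n_total

# Compare the likelihood of the single observed draw (a chocolate chip
# cookie) under the two hypothesized counts.
for n in (2, 20):
    print(f"P(chocolate chip | {n} in bag) = {likelihood(n):.2f}")
```

Since 20/25 = 0.80 exceeds 2/25 = 0.08, the hypothesis of 20 chocolate chip cookies makes the observed draw more likely, and it is the maximum likelihood choice between the two.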
