Appendix A4

Testing and Goodness of Fit

In many situations, we examine data to obtain an estimate of unknown parameters. Examples of such parameters are the mean of a distribution, the variance of a distribution, or the weights b that we apply when combining variables x into a prediction of another variable y.

In this book, we mainly employ the maximum likelihood and least-squares estimation principles. The maximum likelihood principle is described in Appendix A3. In least squares, we choose the estimate such that the squared differences between the observed values and our predictions are minimized. As an illustration, consider the case in which we want to estimate the mean m of a sample of N observations $x_i$. In the least-squares approach, our prediction for a single observation is just the mean m we are looking for, and so we minimize:

$$\sum_{i=1}^{N} (x_i - m)^2$$

We can solve this problem by taking the first derivative with respect to m and setting it to zero:

$$\frac{d}{dm}\sum_{i=1}^{N} (x_i - m)^2 = -2\sum_{i=1}^{N} (x_i - m) = 0$$

Solving for m yields the estimator $\hat{m}$:

$$\hat{m} = \frac{1}{N}\sum_{i=1}^{N} x_i$$

that is, the arithmetic average of the observed $x_i$.
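As a quick numerical check, consider the following minimal VBA sketch (our own illustration, not code from the book; the sample values and the names CheckLeastSquaresMean and SSQ are hypothetical). It computes the arithmetic average of a small sample and verifies that the sum of squared deviations is larger at any value slightly away from it:

' Minimal sketch: verifies numerically that the arithmetic average
' minimizes the sum of squared deviations Sum_i (x_i - m)^2.
Sub CheckLeastSquaresMean()
    Dim x As Variant, i As Long, n As Long
    Dim mHat As Double
    x = Array(2#, 4#, 9#)                ' hypothetical sample
    n = UBound(x) - LBound(x) + 1
    ' Least-squares estimate = arithmetic average
    For i = LBound(x) To UBound(x)
        mHat = mHat + x(i)
    Next i
    mHat = mHat / n
    Debug.Print "m-hat = "; mHat
    Debug.Print "SSQ at m-hat       = "; SSQ(x, mHat)
    Debug.Print "SSQ at m-hat + 0.1 = "; SSQ(x, mHat + 0.1)  ' larger
    Debug.Print "SSQ at m-hat - 0.1 = "; SSQ(x, mHat - 0.1)  ' larger
End Sub

' Sum of squared deviations from a candidate mean m
Function SSQ(x As Variant, m As Double) As Double
    Dim i As Long
    For i = LBound(x) To UBound(x)
        SSQ = SSQ + (x(i) - m) ^ 2
    Next i
End Function

For the sample (2, 4, 9), the average is 5 and the sum of squares at 5 is 26; moving the candidate mean by 0.1 in either direction raises it to 26.03, consistent with the first-order condition above.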

Standard errors

Once we have arrived at some estimate b, we would like to know about the ...
