F.2 The simplex method

The simplex method (which is not related to the simplex method from operations research) was introduced for use with maximum likelihood estimation by Nelder and Mead in 1965 [80]. An excellent reference (and the source of the particular version presented here) is Sequential Simplex Optimization by Walters, Parker, Morgan, and Deming [115].

Let $x$ be a $k \times 1$ vector and $f(x)$ be the function in question. The iterative step begins with $k+1$ vectors, $x_1, \ldots, x_{k+1}$, and the corresponding functional values, $f_1, \ldots, f_{k+1}$. At any iteration, the points will be ordered so that $f_2 < \cdots < f_{k+1}$. When starting, also arrange for $f_1 < f_2$. Three of the points have names: $x_1$ is called worstpoint, $x_2$ is called secondworstpoint, and $x_{k+1}$ is called bestpoint. It should be noted that after the first iteration these names may not perfectly describe the points. Now identify five new points. The first one, $y_1$, is the center of $x_2, \ldots, x_{k+1}$, that is, $y_1 = \frac{1}{k}\sum_{j=2}^{k+1} x_j$, and is called midpoint. The other four points are found as follows:

$$
\begin{aligned}
y_2 &= 2y_1 - x_1 \quad (\text{refpoint}),\\
y_3 &= 2y_2 - y_1,\\
y_4 &= (y_1 + y_2)/2,\\
y_5 &= (y_1 + x_1)/2.
\end{aligned}
$$
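As a concrete illustration of these formulas, the following sketch computes the midpoint and the four candidate points for a simplex stored as a NumPy array. The function name and the array layout (one vertex per row, ordered worst to best) are illustrative conventions, not from the text, and only $y_2$ carries a name (refpoint) confirmed by the decision process below.

```python
import numpy as np

def candidate_points(simplex):
    """Return (y1, ..., y5) for a simplex stored as a (k+1) x k array whose
    rows are x_1 (worstpoint), x_2, ..., x_{k+1} (bestpoint), in that order."""
    x1 = simplex[0]                    # worstpoint
    y1 = simplex[1:].mean(axis=0)      # midpoint: center of x_2, ..., x_{k+1}
    y2 = 2.0 * y1 - x1                 # refpoint: worstpoint reflected through midpoint
    y3 = 2.0 * y2 - y1                 # refpoint pushed twice as far from midpoint
    y4 = 0.5 * (y1 + y2)               # halfway between midpoint and refpoint
    y5 = 0.5 * (y1 + x1)               # halfway between midpoint and worstpoint
    return y1, y2, y3, y4, y5
```

For example, with $k = 2$ and vertices $(0,0)$, $(1,0)$, $(0,1)$ ordered worst to best, this gives midpoint $(0.5, 0.5)$ and refpoint $(1, 1)$.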

Then let $g_2, \ldots, g_5$ be the corresponding functional values, that is, $g_j = f(y_j)$ (the value at $y_1$ is never used). The key is to replace worstpoint ($x_1$) with one of these points. The five-step decision process proceeds as follows:

1. If $f_2 < g_2 < f_{k+1}$, then replace worstpoint with refpoint, as sketched below. ...
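The excerpt gives only the first of the five decision steps, so the sketch below fills in the remaining branches with the classic Nelder-Mead rules, adapted to maximization since the points are ordered with bestpoint having the largest functional value. Only the first acceptance rule is taken from the text; the expansion, contraction, and shrink logic is the standard textbook version and is assumed, not necessarily identical to the book's five-step process.

```python
import numpy as np

def simplex_iteration(f, simplex, fvals):
    """One iteration of the simplex method for maximizing f.

    simplex is a (k+1) x k array of vertices and fvals the matching
    functional values; both are reordered so that row 0 is worstpoint and
    the last row is bestpoint.  Branches other than the first acceptance
    rule follow classic Nelder-Mead logic and are an assumption, not the
    book's exact five-step process.
    """
    simplex = np.asarray(simplex, dtype=float)
    fvals = np.asarray(fvals, dtype=float)
    order = np.argsort(fvals)                  # ascending: worst ... best
    simplex, fvals = simplex[order], fvals[order]
    x1, f1 = simplex[0], fvals[0]              # worstpoint
    f2, fbest = fvals[1], fvals[-1]            # secondworstpoint and bestpoint values

    y1 = simplex[1:].mean(axis=0)              # midpoint
    y2 = 2.0 * y1 - x1                         # refpoint
    g2 = f(y2)

    if f2 < g2 < fbest:                        # step 1 from the text: accept refpoint
        simplex[0], fvals[0] = y2, g2
    elif g2 >= fbest:                          # refpoint beats bestpoint: try expanding
        y3 = 2.0 * y2 - y1
        g3 = f(y3)
        if g3 > g2:
            simplex[0], fvals[0] = y3, g3
        else:
            simplex[0], fvals[0] = y2, g2
    else:                                      # refpoint no better than secondworstpoint
        if g2 > f1:                            # contract toward refpoint
            cand, gcand = 0.5 * (y1 + y2), None
        else:                                  # contract toward worstpoint
            cand, gcand = 0.5 * (y1 + x1), None
        gcand = f(cand)
        if gcand > max(g2, f1):
            simplex[0], fvals[0] = cand, gcand
        else:                                  # contraction failed: shrink toward bestpoint
            simplex[:-1] = 0.5 * (simplex[:-1] + simplex[-1])
            fvals[:-1] = np.array([f(x) for x in simplex[:-1]])
    return simplex, fvals
```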
