You're dealing with a particularly troublesome optimization problem where you can't seem to converge on a global optimum. For example, Solver converges to several different solutions given different initial values. You suspect Solver is converging to local optima rather than the true global optimum.
Getting trapped by local maxima or minima is an unfortunate part of optimization. Some problems are riddled with local extrema over their objective function landscape, and depending on your initial guess, you may end up at one of these local optima rather than the global optimum. One way around this is to try several different initial guesses, just as I discussed in Chapter 9 when finding multiple roots of a nonlinear equation. Also, if possible, you should plot the objective function to get a feel for how it behaves. This is not always practical for multidimensional problems, though.
As an alternative, you can use a Monte Carlo-inspired approach: run Solver repeatedly from many randomly chosen initial guesses and keep the best solution found. The following discussion shows you how.
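To illustrate the idea outside of Excel, here is a minimal sketch of this multistart strategy in Python. The objective function, the simple gradient-descent routine (a stand-in for Solver's local optimizer), and all parameter values are hypothetical choices for the example, not part of the original recipe: a local optimizer is launched from many random starting points, and only the best result is kept.

```python
# Multistart (Monte Carlo-inspired) optimization sketch.
# Assumptions: a 1-D objective with several local minima, and a basic
# fixed-step gradient descent standing in for Solver's local optimizer.
import math
import random

def objective(x):
    # Hypothetical test function: many local minima,
    # global minimum near x = -0.51.
    return x * x + 10 * math.sin(3 * x)

def local_minimize(f, x0, step=0.01, iters=5000):
    # Crude gradient descent using a central-difference gradient.
    # From a given start, it settles into the nearest local minimum,
    # mimicking how Solver converges locally from one initial guess.
    x = x0
    h = 1e-6
    for _ in range(iters):
        grad = (f(x + h) - f(x - h)) / (2 * h)
        x -= step * grad
    return x, f(x)

def multistart(f, lo, hi, trials=100, seed=1):
    # Draw random initial guesses over [lo, hi], run the local
    # optimizer from each, and keep the best (lowest) result.
    random.seed(seed)
    best_x, best_f = None, float("inf")
    for _ in range(trials):
        x0 = random.uniform(lo, hi)
        x, fx = local_minimize(f, x0)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

if __name__ == "__main__":
    x, fx = multistart(objective, -10.0, 10.0)
    print(f"best x = {x:.4f}, f(x) = {fx:.4f}")
```

A single run from an unlucky start would land in one of the shallower local minima; with 100 random starts, at least one is very likely to fall inside the global minimum's basin of attraction, which is the whole point of the approach.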