
Dealing with epsilon-optimal solutions

The previous section considered the case of multiple (true) optimal solutions. What happens when we have several $\varepsilon$-optimal solutions? To be more specific, consider

$$
\begin{array}{lrll}
\max & \varepsilon x + y & \vec{c}= & (\varepsilon,1)\\
\text{s.t.} & -x \leq 0 & A_{1\cdot}= & (-1,0)\\
& x \leq 1 & A_{2\cdot}= & (1,0)\\
& -y \leq 0 & A_{3\cdot}= & (0,-1)\\
& y \leq 1 & A_{4\cdot}= & (0,1).
\end{array}
$$

Graphically this can be depicted as:

[Figure: the feasible region (the unit square) with the two candidate optimal vertices $x^1=(0,1)$ and $x^2=(1,1)$]
If $\varepsilon$ is zero, then we are in the situation described before. Note, however, that a small perturbation of the objective vector may lead to either $x^1$ or $x^2$ being reported as optimal, and tolerances can play a big role here. If $\varepsilon$ is negative, for example, then $x^1$ would be the mathematically optimal result, but due to the optimality tolerance, simplex might conclude that $x^2$ is optimal. More precisely, if $|\varepsilon|$ is less than the default optimality tolerance of $10^{-6}$, then simplex is free to declare either solution optimal (within tolerances).
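To make this concrete, here is a small pure-Python check of the situation above. It is a sketch, not solver code: the vertex coordinates come from the example model, and $10^{-6}$ is the default optimality tolerance quoted in the text.

```python
# Objective c(x, y) = eps*x + y over the unit square of the example.
def obj(eps, vertex):
    x, y = vertex
    return eps * x + y

OPT_TOL = 1e-6   # default simplex optimality tolerance (from the text)
eps = -1e-7      # |eps| below the tolerance; the sign makes x1 the true optimum

x1, x2 = (0.0, 1.0), (1.0, 1.0)   # the two candidate vertices

# The two objective values differ by only |eps|, which is below the
# tolerance, so simplex may legitimately report either vertex as optimal.
gap = abs(obj(eps, x1) - obj(eps, x2))
print(gap <= OPT_TOL)   # True
```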

The above statement is true whenever the distance between $x^1$ and $x^2$ is not too large. To see this, consider what happens when we change the right-hand side of the constraint $A_{2\cdot}$ from 1 to $10^6$. The feasible region would then be a very long rectangular box, with vertices $(0,0)$, $(0,1)$, $(10^6,1)$ and $(10^6,0)$. Perhaps somewhat surprisingly, if $|\varepsilon|$ is below the dual tolerance, simplex may consider $(10^6,1)$ optimal, even though its objective value of $1-10^6|\varepsilon|$ may be far below the true optimum of 1, which can be very relevant in terms of the final objective value.
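The effect of the long edge can be seen with simple arithmetic. The following sketch (pure Python, with values taken from the modified example) shows that an $\varepsilon$ small enough to hide below the tolerance still shifts the objective by roughly 10% once it is multiplied by the edge length:

```python
# After widening the bound to x <= 1e6, the edge between the candidate
# vertices x1 = (0, 1) and x2 = (1e6, 1) is very long.
def obj(eps, vertex):
    x, y = vertex
    return eps * x + y

eps = -1e-7                # magnitude below the 1e-6 tolerance
x1, x2 = (0.0, 1.0), (1e6, 1.0)

print(obj(eps, x1))        # 1.0, the true optimum
print(obj(eps, x2))        # approximately 0.9: accepting x2 loses ~10%
                           # of the objective value
```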

Note that both situations share one ingredient: the objective function is (almost) parallel to one of the sides of the feasible region. In the first case, this side is relatively short, so jumping from $x^2$ to $x^1$ translates into a small change in objective value. In the second case, the side almost parallel to the objective function is very long, and the jump from $x^2$ to $x^1$ can have a significant impact on the final objective value.

If you take away either of these two ingredients, namely the objective vector being almost parallel to a constraint, or the edge induced by this nearly parallel constraint being very long, then this problem cannot arise. For the reasons discussed at the beginning of this section, it is common for the objective function to be nearly parallel to one or more constraints. Thus, the best way to avoid this situation is to avoid the second condition. The simplest way to do this is to ensure that the ranges of your variables are not too large. Please refer to the Scaling section for guidance on this.
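One way to see why modest variable ranges help: rescaling the long variable by substituting $x = 10^6 x'$ turns the bound $x \leq 10^6$ into $x' \leq 1$, and at the same time the objective term $\varepsilon x$ becomes $(10^6\varepsilon)\,x'$, whose coefficient is no longer hidden below the tolerance. A quick sketch of this arithmetic (pure Python; the scale factor and tolerance are the values used in the examples above):

```python
# Rescale the long variable: x = SCALE * xs, so the bound x <= 1e6
# becomes xs <= 1 and the objective term eps*x becomes (eps*SCALE)*xs.
SCALE = 1e6
eps = -1e-7
TOL = 1e-6                 # default tolerance from the text

scaled_coeff = eps * SCALE             # approximately -0.1 after rescaling

print(abs(eps) <= TOL)                 # True: original coefficient hides
                                       # below the tolerance
print(abs(scaled_coeff) <= TOL)        # False: rescaled coefficient is
                                       # clearly visible to the solver
```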
