Documentation

Dealing with big-M constraints

Big-M constraints are a common source of instability in optimization problems. They are so named because they typically involve a large coefficient $M$ that is chosen to be larger than any reasonable value that a continuous variable or expression may take. Here's a simple example:

\begin{eqnarray*}
x&\leq&10^6y\\
x&\geq&0\\
y&\in& \{0,1\}.
\end{eqnarray*}


Big-M constraints are typically used to propagate the implications of a binary, on-off decision to a continuous variable. For example, a big-M might be used to enforce the condition that an edge can only admit flow if you pay the fixed charge associated with opening the edge, or that a facility can only produce products if you build it. In our example, note that the assignment $y = 0.0000099999$ satisfies the default integrality tolerance (IntFeasTol=$10^{-5}$), which allows $x$ to take a value as large as $9.9999$. In other words, $x$ can take a positive value without incurring the expensive fixed charge on $y$, which subverts the intent of allowing a non-zero value for $x$ only when the binary variable $y$ takes the value 1. You can reduce the effect of this behavior by tightening the IntFeasTol parameter, but you can't avoid it entirely.
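The arithmetic behind this "trickle flow" effect can be checked directly. The following is a minimal sketch in plain Python (no solver required), using the IntFeasTol and $M$ values from the text:

```python
# How a tiny integrality violation in y lets x take a noticeably positive value.

INT_FEAS_TOL = 1e-5   # default integrality tolerance (IntFeasTol)
M = 1e6               # big-M coefficient in the constraint x <= M*y

y = 0.0000099999      # counts as "integral": within IntFeasTol of 0
assert abs(y - round(y)) <= INT_FEAS_TOL

x_max = M * y         # largest x the constraint x <= M*y still permits
print(x_max)          # roughly 9.9999 -- x is "on" although y is numerically "off"
```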

However, if the modeler knows that the $x$ variable will never be larger than $10^3$, the earlier constraint can be reformulated as:

\begin{eqnarray*}
x&\leq&10^3y\\
x &\geq& 0\\
y&\in& \{0,1\}.
\end{eqnarray*}


With this tighter formulation, $y = 0.0000099999$ only allows $x \leq 0.01$.
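Repeating the earlier check with the tightened coefficient shows that the leak shrinks by the same factor as $M$:

```python
# Same integrality violation as before, but with the tightened big-M coefficient.

INT_FEAS_TOL = 1e-5   # default integrality tolerance (IntFeasTol)
M_tight = 1e3         # valid upper bound on x supplied by the modeler

y = 0.0000099999      # still within IntFeasTol of 0
x_leak = M_tight * y  # roughly 0.01 -- a negligible violation of the intent
print(x_leak)
```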

When it is not possible to either rescale variable $x$ or tighten its bounds, an SOS constraint or an indicator constraint (of the form $y = 0 \Rightarrow x = 0$) may produce more accurate solutions, but often at the expense of additional processing time.
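As a sketch of these alternatives, the same on-off logic can be stated without any big-M coefficient. The auxiliary variable $s$ below is introduced here purely for illustration: an SOS1 constraint requires that at most one of its member variables is nonzero, so placing $x$ together with $s = 1 - y$ in an SOS1 set forces $x = 0$ whenever $y = 0$:

\begin{eqnarray*}
y = 0 &\Rightarrow& x = 0 \qquad \text{(indicator constraint)}\\
s &=& 1 - y\\
\mathrm{SOS1} &:& \{x,\, s\} \qquad \text{(at most one of $x$, $s$ is nonzero)}
\end{eqnarray*}

Both forms are handled algorithmically by the solver rather than through a large coefficient, which is why they avoid the tolerance issue above, at some cost in solve time.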