Thin feasible regions

We now consider another extreme situation that can lead to unexpected results. Consider the problem defined as

\begin{displaymath}\begin{array}{lrrl}
\max & y & \vec{c}=&(0,1)\\
s.t. & -x + \varepsilon y \leq 1 & A_{1\cdot}=&(-1,\varepsilon)\\
& x + \varepsilon y \leq 1 & A_{2\cdot}=&(1,\varepsilon)\\
& -y \leq 0 & A_{3\cdot}=&(0,-1)\\
\end{array}\end{displaymath}

and its graphical representation
\scalebox{1.0}{\includegraphics[width=4in]{refman_misc/codedraw7.png}}

It is clear from the graphical representation that the optimal solution for the problem will be at the intersection of constraints $A_{1\cdot}$ and $A_{2\cdot}$; if we do the algebra, we get $x^*=(0,\frac{1}{\varepsilon})$. Also note that as $\varepsilon$ decreases, the feasible region stretches upwards while its base stays unchanged. We will consider the case where $\varepsilon$ is a very small positive number (between $10^{-9}$ and $10^{-6}$).
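To make this concrete, the vertex can be computed with a few lines of Python. This is an illustrative sketch of the algebra above, not solver code; the helper name `intersect` is ours:

```python
# Intersection of two binding constraints a1 . x = b1 and a2 . x = b2,
# solved by Cramer's rule for the 2x2 system.
def intersect(a1, a2, b1, b2):
    det = a1[0] * a2[1] - a1[1] * a2[0]
    x = (b1 * a2[1] - a1[1] * b2) / det
    y = (a1[0] * b2 - b1 * a2[0]) / det
    return x, y

eps = 1e-6
# A1 = (-1, eps) and A2 = (1, eps), both with right-hand side 1.
x, y = intersect((-1.0, eps), (1.0, eps), 1.0, 1.0)
# x is 0 and y is 1/eps = 1e6, matching x* = (0, 1/eps).
```

Shrinking `eps` by a factor of ten moves the optimal vertex ten times further away, which is exactly the stretching visible in the figure.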

If we perturb the right-hand side vector $b$ from $(1,1)$ to $(1+\delta,1)$, the new solution will be $\tilde{x}^*=(-\frac{\delta}{2},\frac{2+\delta}{2\varepsilon})$. To assess the impact of this perturbation, we compute the $L_1$ distance between this modified solution and the previous solution, which is given by

\begin{displaymath}\Vert x^*-\tilde{x}^*\Vert _1 = \frac{\vert\delta\vert}{2}+\frac{\vert\delta\vert}{2\varepsilon}\end{displaymath}

This quantity can be either small or very large, depending on the relative magnitude of $\delta$ and $\varepsilon$. If $\delta$ is much smaller than $\varepsilon$, this quantity will be small; however, if $\delta$ is larger than, or even of the same order of magnitude as, $\varepsilon$, the opposite will be true. Very small perturbations in the input data can lead to big changes in the optimal solution.
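The effect of the right-hand-side perturbation can be checked numerically. The following sketch plugs the closed-form solutions above into Python, taking $\delta$ of the same order as $\varepsilon$:

```python
eps = 1e-6    # thinness of the feasible region
delta = 1e-6  # perturbation of b_1, same order of magnitude as eps

x_star = (0.0, 1.0 / eps)                              # original vertex
x_tilde = (-delta / 2.0, (2.0 + delta) / (2.0 * eps))  # perturbed vertex

# L1 distance between the two solutions.
l1_dist = abs(x_star[0] - x_tilde[0]) + abs(x_star[1] - x_tilde[1])
# l1_dist is about 0.5: roughly 500000 times larger than the
# perturbation delta that caused it.
```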

A similar issue arises if we perturb $A_{1\cdot}$ to $(-1,\delta)$: the new optimal solution becomes $\tilde{x}^*=(1-\frac{2\varepsilon}{\varepsilon+\delta},\frac{2}{\varepsilon+\delta})$. If $\delta=\varepsilon/2$, the new value of $y$ changes from $\frac{1}{\varepsilon}$ to $\frac{4}{3\varepsilon}$, a 33% relative difference. Again, small changes in the input can produce big changes in the optimal solution.
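The same arithmetic, written out as a short Python sketch of the formulas above (not solver code):

```python
eps = 1e-6
delta = eps / 2.0  # perturbed coefficient on y in A_1, now (-1, delta)

y_old = 1.0 / eps            # y at the original vertex
y_new = 2.0 / (eps + delta)  # y after tilting A_1, i.e. 4/(3*eps)

# Relative change in y caused by an O(eps) change in one coefficient:
# about 1/3, i.e. a 33% jump.
rel_change = (y_new - y_old) / y_old
```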

What is driving this bad behavior? The problem is that the optimal point is defined by two constraints that are nearly parallel. The smaller $\varepsilon$ is, the closer to parallel they are. When the constraints are this close to parallel, small changes in the slopes can lead to big movements in the point where they intersect. Mathematically speaking:

\begin{displaymath}\lim_{\varepsilon\rightarrow0^+}\Vert x^*\Vert=\infty\end{displaymath}

Note, however, that if the original problem had included an additional variable bound of the form $y \leq 10^4$, then neither of these bad behaviors would have been possible. For any $\varepsilon$ value smaller than $10^{-4}$, the optimal point would be defined by the new constraint and one of the constraints $A_{2\cdot}$ or $A_{1\cdot}$, which would again lead to well-behaved (i.e. stable) solutions. In summary, this sort of issue can only arise when the feasible region is either unbounded or very large. See the Scaling section for further guidance on bounding the feasible region.
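The stabilizing effect of the bound can also be sketched numerically. Under the assumption that $y \leq 10^4$ is active (which holds whenever $\varepsilon < 10^{-4}$, since then $1/\varepsilon > 10^4$), the vertex lies on the bound and, say, constraint $A_{1\cdot}$; the helper below is ours, for illustration only:

```python
Y_CAP = 1e4  # hypothetical variable bound y <= 1e4

def vertex_with_bound(eps):
    # With y = Y_CAP fixed by the bound, A_1 gives the x coordinate:
    # -x + eps*y = 1  =>  x = eps*y - 1.
    y = Y_CAP
    x = eps * y - 1.0
    return x, y

x0, y0 = vertex_with_bound(1e-6)         # original coefficient
x1, y1 = vertex_with_bound(1e-6 + 1e-9)  # slightly perturbed coefficient

# The optimal point now moves by only O(delta * Y_CAP) ~ 1e-5,
# instead of blowing up as eps shrinks.
shift = abs(x0 - x1) + abs(y0 - y1)
```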