Optimizing LP problems
A primer on the basics of linear programming
Linear Programming Basics
Linear programming (LP) is a powerful framework for describing and solving optimization problems. It allows you to specify a set of decision variables, a linear objective, and a set of linear constraints on those variables. To give a simple and widely used example, consider the problem of minimizing the cost of a selection of foods that meets all the recommended daily nutrient guidelines. The LP model would have a set of decision variables that capture the amount of each food to buy, a linear objective that minimizes the total cost of purchasing the chosen foods, and a linear constraint for each nutrient, requiring that the chosen foods together contain a sufficient quantity of that nutrient.
Using linear algebra notation, a linear program can be described as follows:
Objective:    minimize cᵀx   (linear objective)
Constraints:  A x = b        (linear constraints)
              l ≤ x ≤ u      (bound constraints)
When described in this form, the vector x represents the decision variables, the vector c captures the linear objective function, the matrix equation Ax = b specifies the linear constraints on x, and the vectors l and u give the lower and upper bounds on x.
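To make this form concrete, the diet example above can be written as vectors c, l, u and a constraint matrix and handed to any LP solver. The sketch below uses SciPy's `linprog` purely as an illustration (the food and nutrient numbers are made up); since `linprog` expects inequalities of the form A_ub x ≤ b_ub, the "at least" nutrient requirements are negated:

```python
from scipy.optimize import linprog

# Hypothetical data: two foods (bread, milk) and two nutrients (calories, protein).
c = [2.0, 3.5]                   # cost per unit of each food (the objective vector c)

# Nutrient requirements are "at least" constraints; linprog expects A_ub @ x <= b_ub,
# so each row of "A x >= b" is negated to "-A x <= -b".
A_ub = [[-90.0, -120.0],         # calories per unit: 90 (bread), 120 (milk)
        [-3.0,  -8.0]]           # protein per unit:   3 (bread),   8 (milk)
b_ub = [-300.0, -10.0]           # need at least 300 calories and 10 g of protein

bounds = [(0, 10), (0, 10)]      # the bound constraints l <= x <= u

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)            # optimal food quantities and the minimum cost
```

With these particular numbers the optimum happens to buy only the first food, but the same template extends directly to any number of foods and nutrients.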
The set of applications of linear programming is far too long to list in full. It includes everything from production scheduling to web advertising optimization to clothing manufacturing. LP touches nearly every commercial industry in some way.
The first algorithm for solving linear programming problems was the simplex method, proposed by George Dantzig in 1947. Remarkably, this 65-year-old algorithm remains one of the most efficient and most reliable methods for solving such problems today.
The primary alternative to the simplex method is the barrier or interior-point method. This approach has a long history, but its recent popularity is due to Karmarkar's 1984 polynomial-time complexity proof. Interior-point methods have benefited significantly from recent advances in computer architecture, including the introduction of multi-core processors and SIMD instruction sets, and are generally regarded as being faster than simplex for solving LP problems from scratch. However, the sheer variety of different LP models, and the many different ways in which LP is used, mean that neither algorithm dominates the other in practice. Both are important in computational linear programming.
Computational Linear Programming
Given the age of these algorithms (65 years for the simplex method, and 28 years for interior-point methods), you might expect that the implementation issues associated with them would be well understood, and that different implementations would give similar results. Surprisingly, this is far from true. Computational benchmarks across a range of models show wide variations in performance and robustness between implementations. For example, the open-source simplex solvers CLP and GLPK are on average factors of 2.5 and 58 slower than the Gurobi simplex solver, respectively. What explains such wide disparities between implementations of such old and well-established methods? The differences primarily come down to three factors.
Sparse Linear Algebra
The first factor is sparse linear algebra. The constraint matrices that arise in linear programming are typically extremely sparse, meaning they contain very few non-zero entries. It is not unusual to find constraint matrices containing only 3 or 4 non-zero values per column of A. The steps of both the simplex and interior-point algorithms involve a number of computations with extremely sparse matrices and extremely sparse vectors. Sparse matrices must be factored, systems of sparse linear equations must be solved using the resulting factor matrices, the factor matrices must be modified, etc. It takes years of experience in sparse numerical linear algebra and linear programming to understand the computational issues associated with building efficient sparse matrix algorithms for LP.
(Figure: a dense matrix compared with a sparse matrix.)
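To give a feel for the scale involved, the sketch below (using SciPy's sparse machinery as a stand-in, not what an LP solver does internally) builds a matrix with roughly 3 non-zeros per column, factors it, and solves a linear system with the factors:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

# Hypothetical 1000x1000 matrix with ~3 random non-zeros per column, plus a
# strong diagonal so the factorization is well behaved (illustration only).
rng = np.random.default_rng(0)
n = 1000
rows = rng.integers(0, n, size=3 * n)
cols = np.repeat(np.arange(n), 3)
vals = rng.standard_normal(3 * n)
A = sparse.csc_matrix((vals, (rows, cols)), shape=(n, n)) \
    + 10.0 * sparse.eye(n, format="csc")

density = A.nnz / (n * n)        # fraction of stored entries; well under 1%

lu = splu(A)                     # sparse LU factorization
x = lu.solve(np.ones(n))         # solve A x = b reusing the factors
residual = np.linalg.norm(A @ x - np.ones(n))
```

A dense 1000×1000 factorization would touch a million entries; the sparse one stores and operates on only a few thousand, which is where the large performance gap between careful and careless implementations comes from.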
Numerical Error Management

The second factor is careful handling of numerical errors. Whenever you solve systems of linear equations in finite-precision arithmetic, you will always get slight numerical errors in the results. A crucial part of building an efficient LP algorithm is to design effective strategies for managing such errors; failing to do so can mean the difference between a model solving in a fraction of a second and not solving at all.
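A small generic numerical-analysis example (not Gurobi's internals) shows why this is subtle: on an ill-conditioned system, a backward-stable solve can produce a residual near machine precision while the computed solution is still far from the true one, so a solver cannot judge accuracy by residuals alone:

```python
import numpy as np

# The Hilbert matrix is a classic ill-conditioned example (illustration only).
n = 10
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = H @ x_true                      # right-hand side with a known exact solution

x = np.linalg.solve(H, b)           # LU solve in double precision
residual = np.linalg.norm(b - H @ x)
error = np.linalg.norm(x - x_true)

# The residual is tiny, but the error in x is many orders of magnitude
# larger, because cond(H) is on the order of 1e13.
```

LP solvers face exactly this tension: factorizations of nearly singular bases produce plausible-looking residuals, and the solver must detect and recover from the hidden error.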
Heuristic Strategies

The third factor is developing effective heuristic strategies for making the variety of choices that arise in the course of the solution process. To give one example, the simplex algorithm must repeatedly pick one variable from among many to enter the basis. The strategy used can have a profound effect on the runtime of the algorithm. Differences between strategies are often quite subtle, and in many cases they are simply based on empirical observations about which schemes are most effective in practice. Again, choosing effective strategies takes years of experience.
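To give a flavor of such choices, here is a textbook sketch (not Gurobi's actual pricing strategy) of two classic rules for selecting the entering variable from a vector of reduced costs, where negative entries are the candidates:

```python
import numpy as np

# Hypothetical reduced costs at some simplex iteration.
reduced_costs = np.array([0.5, -1.2, -0.3, 0.0, -2.7])

# Dantzig rule: choose the most negative reduced cost (greedy per-unit gain).
entering_dantzig = int(np.argmin(reduced_costs))

# Bland's rule: choose the lowest-indexed negative reduced cost
# (slower in practice, but guarantees the method cannot cycle).
candidates = np.flatnonzero(reduced_costs < 0)
entering_bland = int(candidates[0])
```

Production solvers use far more sophisticated schemes (such as steepest-edge variants and partial pricing), and tuning the trade-off between per-iteration cost and iteration count is exactly the kind of empirical craft described above.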
Public benchmarks of different commercial LP solvers demonstrate the effectiveness of the approaches that Gurobi has taken for each of these issues. For both the simplex and barrier methods, the Gurobi solvers provide both higher performance and better numerical robustness than competing solvers. This difference is of course relevant when you are solving LP models, but more importantly, it also provides a more solid foundation on which to build the many algorithms that rely on LP as a subroutine. One very important example is the branch-and-bound algorithm that is used for solving Mixed Integer Programming (MIP) models.
If you would like more information on these methods, we refer you to the following books:
- Bertsimas, D., and Tsitsiklis, J., Introduction to Linear Optimization, 1997.
- Vanderbei, R., Linear Programming: Foundations and Extensions, 2010.
- Wright, S., Primal-Dual Interior-Point Methods, 1997.