Distributed optimization lets you leverage multiple machines to reduce solve times. Some optimization models solve 15 times faster on 32 machines, and speed-ups of 2-3x are common with eight machines. Gurobi offers three distributed algorithms:

- Distributed MIP — where multiple machines work together to solve a single MIP model
- Distributed concurrent — where multiple machines use different algorithmic strategies in a race to solve an LP or MIP model
- Distributed tuning — where multiple machines do experimental solves to find parameter settings that improve performance

All three distributed algorithms are easy to use. Simply install Gurobi Remote Services on the set of machines that you wish to use as distributed workers, then use the same Gurobi commands that you would use if you were going to perform optimization or tuning on just a single machine. A Gurobi parameter allows you to specify the names of the worker machines that are available for running distributed algorithms, and another parameter indicates how many jobs to start on these workers. The Gurobi Optimizer handles all the work of dividing the computation among the machines.
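As a concrete sketch, the worker list and job count can live in an ordinary Gurobi parameter (.prm) file. The hostnames and port below are placeholders, and the parameter names should be checked against your Gurobi version's documentation:

```
# distributed.prm -- placeholder hostnames and port
WorkerPool server1:61000,server2:61000,server3:61000
DistributedMIPJobs 3
```

A file like this can be loaded before optimizing, for example with `Model.read("distributed.prm")` in the Python API.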

The speed-up achieved by distributed optimization varies depending on the model. Read on to understand when to use distributed optimization.


The distributed MIP solver divides the work of solving a single MIP model among multiple machines. A manager machine sends different portions of the MIP search tree to each worker machine to solve, and it periodically rebalances the tree so that all workers stay usefully loaded.

Performance testing demonstrates the benefits of distributed MIP. For models in the MIPLIB 2010 benchmark set that take more than 100 seconds to solve, distributed MIP on 8 machines produces a mean improvement of 2.87x. Distributed MIP can also dramatically reduce solve times on some of the most difficult models. For example, model seymour, a set covering model in MIPLIB 2010, takes 9,267 seconds to solve on a single machine. Distributed MIP on 32 machines reduces the solve time to 633 seconds, a 14.6x speed-up.
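The speed-up figures above are simply the ratio of single-machine to distributed wall-clock time; for the seymour example:

```python
# Speed-up = single-machine time / distributed time (seymour figures from the text)
single_machine_seconds = 9267
distributed_seconds = 633

speedup = single_machine_seconds / distributed_seconds
print(f"{speedup:.1f}x")  # → 14.6x
```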

Distributed MIP works best on difficult models with large search trees. These models have enough work to ensure all the workers stay busy. Easier models that solve at the root won’t benefit from the extra workers.

The distributed concurrent solver uses a simple approach to take advantage of multiple machines. It starts an independent solve using a different strategy on each machine. The machines then race to solve the model; the solve finishes as soon as the first machine crosses the finish line. By trying different strategies at the same time, the concurrent optimizer can often find a solution faster than if it had to choose a single strategy.
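The racing idea can be sketched with ordinary concurrency primitives. This toy example (not Gurobi's implementation) launches several simulated "strategies" and keeps only the first result, discarding the rest:

```python
import concurrent.futures as cf

def solve_with_strategy(name, work):
    """Toy stand-in for a solver run: each strategy performs a different
    simulated amount of work before 'solving' the model."""
    sum(range(work))  # simulated solve effort
    return name

# Hypothetical strategies with different (simulated) workloads.
strategies = {"primal simplex": 3_000_000, "dual simplex": 2_000_000, "barrier": 500_000}

with cf.ThreadPoolExecutor(max_workers=len(strategies)) as pool:
    futures = [pool.submit(solve_with_strategy, n, w) for n, w in strategies.items()]
    winner = next(cf.as_completed(futures)).result()  # first finisher wins the race

print(f"winner: {winner}")
```

In the real distributed concurrent solver each racer runs on its own machine; here threads merely illustrate the "first finisher wins, losers are discarded" structure.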

Performance testing demonstrates the benefits of the distributed concurrent solver. In our testing, using two machines produces mean improvements of 1.1x over a broad set of difficult LP models; using five machines produces mean improvements of 1.7x over a broad set of difficult MIP models.

Distributed concurrent is particularly effective on models whose solve times vary substantially depending on the data used to define the model or the algorithm used to solve it. Trying different strategies on different machines increases robustness and can smooth out variations in solve times.
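A quick simulation illustrates the smoothing effect: if each machine's solve time is a random draw, the race finishes at the minimum of the draws, which has both a lower mean and a lower spread. The heavy-tailed lognormal times below are hypothetical, not Gurobi data:

```python
import random
import statistics

random.seed(42)  # reproducible toy experiment

def solve_time():
    """Hypothetical heavy-tailed solve time in seconds."""
    return random.lognormvariate(mu=4.0, sigma=1.0)

trials = 10_000
single = [solve_time() for _ in range(trials)]
raced = [min(solve_time() for _ in range(4)) for _ in range(trials)]  # 4 machines race

print(f"one machine:   mean {statistics.mean(single):7.1f}s  stdev {statistics.stdev(single):7.1f}s")
print(f"four machines: mean {statistics.mean(raced):7.1f}s  stdev {statistics.stdev(raced):7.1f}s")
```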

Gurobi includes a parameter tuning tool that automatically searches for parameter settings that improve optimization performance on a model or a set of models. This tuning tool has proven to be both quite effective and extremely popular.

One downside of the tuning tool is the time it can take. Automated tuning performs many solves on your model, each using different parameter settings. By default, this search for improved parameter settings takes roughly 10 times as long as solving the model (and it is often beneficial to let the tuning tool run for much longer).

The distributed tuning tool uses multiple, independent machines to perform tuning, thus allowing you to explore more candidate parameter settings in the same amount of time.
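As a sketch, distributed tuning is driven by the same worker-pool mechanism via Gurobi's command-line tuning tool. The hostnames and port are placeholders, and the exact parameter names should be verified against your version's documentation:

```
grbtune WorkerPool=server1:61000,server2:61000,server3:61000 TuneJobs=3 model.mps
```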

Internal benchmark tests show mean performance improvements of over 2x when using tuned settings, as compared with default parameter settings.

The distributed algorithms require a set of machines to serve as distributed workers. These algorithms work best on identical machines, but it is fine if the machines vary slightly. Distributed MIP works best on 8-32 machines, distributed concurrent is most effective on 2-16 machines, and distributed tuning can take advantage of all available machines.

You can use any of the distributed algorithms by adding the distributed capability to an existing named user, single machine, or compute server license. A license is only required for the manager machine that launches and coordinates workers. Worker machines do not need separate licenses; they only need Gurobi Remote Services installed.

Do you want to know if your models can benefit from distributed optimization? Simply send us your models as MPS or LP files. We will run them on a cluster of machines and tell you whether:

- distributed MIP solves your models faster
- distributed concurrent decreases variations in solve times
- distributed tuning finds parameter settings that speed up single-machine solves

- Contact Gurobi Sales for our free benchmarking and tuning service
- Visit our Download Center to request a free evaluation version

You may also download our Gurobi Distributed Optimization brochure.