Concurrent Optimizer

Concurrent optimization is a simple approach for exploiting multiple processors. It starts multiple, independent solves on a model, using different strategies for each. Optimization terminates when the first one completes. By pursuing multiple different strategies simultaneously, the concurrent optimizer can often obtain a solution faster than it would if it had to choose a single strategy.

Concurrent optimization is our default choice for solving LP models, and a user-selectable option for solving MIP models. The concurrent optimizer can be controlled in a few different ways. These will be discussed in this section. To avoid confusion when reporting results from multiple simultaneous solves, we've chosen to produce simplified logs and callbacks when performing concurrent optimization. These will also be discussed in this section.

Controlling Concurrent Optimization

If you wish to use the concurrent optimizer to solve your model, the steps you need to take depend on the model type. As mentioned earlier, the concurrent optimizer is the default choice for LP models. This choice is controlled by the Method parameter. For MIP models, you can select the concurrent optimizer by modifying the ConcurrentMIP parameter.
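
For example, using the Python API (gurobipy), these parameters can be set directly on the model before calling optimize. This is a minimal sketch; the model file name is a placeholder, and you would set only the parameter relevant to your model type.

import gurobipy as gp

m = gp.read("model.mps")      # placeholder file name

# For an LP model: concurrent is already the default; Method=3 selects it explicitly
m.Params.Method = 3

# For a MIP model: request two independent concurrent solves
m.Params.ConcurrentMIP = 2

m.optimize()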

For LP models, the strategies used by the independent solvers can be controlled with the ConcurrentMethod parameter. While we reserve the right to change our choices in the future, with the default setting of ConcurrentMethod=-1 we currently devote the first concurrent thread to dual simplex, the second through fourth to a single parallel barrier solve, and the fifth to primal simplex. Additional threads are devoted to the parallel barrier solve. Thus, for example, a concurrent LP solve using four threads would devote one thread to dual simplex and three to parallel barrier.
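
To illustrate the four-thread example, here is a minimal gurobipy sketch; the file name is a placeholder and ConcurrentMethod is left at its default value of -1.

import gurobipy as gp

m = gp.read("model.lp")       # placeholder file name

# With ConcurrentMethod=-1, a four-thread concurrent LP solve devotes
# one thread to dual simplex and three to parallel barrier
m.Params.Threads = 4
m.optimize()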

For MIP, we divide available threads evenly among the independent solves, and we choose different values for the MIPFocus and Seed parameters for each.

If you want more control over concurrent optimization (e.g., to choose the exact strategies used for each independent solve), you can do so by creating two or more concurrent environments. These can be created via API routines (in C, C++, Java, .NET, or Python), or they can be created from .prm files using the ConcurrentSettings parameter if you are using our command-line interface. Once these have been created, subsequent optimization calls will start one independent solve for each concurrent environment you created. To control the strategies used for each solve, you simply set the parameters in each environment to the values you would like them to take in the corresponding solve. For example, if you create two concurrent environments and set the MIPFocus parameter to 1 in the first and 2 in the second, subsequent MIP optimize calls will perform two solves in parallel, one with MIPFocus=1 and the other with MIPFocus=2.
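
In the Python API, for example, concurrent environments are created with Model.getConcurrentEnv and removed again with Model.discardConcurrentEnvs. The sketch below reproduces the MIPFocus example just described; the model file name is a placeholder.

import gurobipy as gp

m = gp.read("model.mps")              # placeholder file name

# Create two concurrent environments and give each its own strategy
env0 = m.getConcurrentEnv(0)
env1 = m.getConcurrentEnv(1)
env0.setParam("MIPFocus", 1)
env1.setParam("MIPFocus", 2)

# The next optimize call performs two solves in parallel,
# one with MIPFocus=1 and the other with MIPFocus=2
m.optimize()

# Discard the concurrent environments to return to standard behavior
m.discardConcurrentEnvs()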

Please note that parameter values are copied from the model when the concurrent environment is created. Therefore, changes to parameter values on the model have no effect on concurrent environments that have already been created. This is a frequent source of confusion.
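
The following Python sketch shows the pitfall (placeholder file name): once a concurrent environment exists, parameters must be changed on the environment itself.

import gurobipy as gp

m = gp.read("model.mps")              # placeholder file name
env0 = m.getConcurrentEnv(0)          # parameter values are copied from the model here

m.Params.MIPFocus = 3                 # no effect on env0; it already holds its own copy
env0.setParam("MIPFocus", 3)          # set the parameter on the environment instead

m.optimize()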

Logging

Your first indication that the concurrent optimizer is being used is output in the Gurobi log that looks like this...

Concurrent LP optimizer: dual simplex and barrier
Showing barrier log only...

...or like this...

Concurrent MIP optimizer: 2 concurrent instances (2 threads per instance)

These log lines indicate how many independent solves will be launched. For the LP case, the lines also indicate which methods will be used for each.

Since it would be quite confusing to see results from multiple solves interleaved in a single log, we've chosen to use a simplified log format for concurrent optimization. For concurrent LP, we only present the log for a single solve. For concurrent MIP, the log is similar to our standard MIP log, except that it only provides periodic summary information (see the MIP logging section if you are unfamiliar with our standard MIP log). Each concurrent MIP log line shows the objective for the best feasible solution found by any of the independent solves to that point, the best objective bound proved by any of the independent solves, and the relative gap between these two values:

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

     0     0      -      -      -   24.00000   13.00000  45.8%          0s
     0     0      -      -      -   16.50000   13.21154  19.9%          0s
     0     0      -      -      -   16.50000   13.25000  19.7%          0s
     0     0      -      -      -   16.50000   13.37500  18.9%          0s
     0     0      -      -      -   16.50000   13.37500  18.9%          0s
     0     0      -      -      -   16.50000   13.37500  18.9%          0s
     0     6      -      -      -   15.50000   13.37500  13.7%          0s
   310   149      -      -      -   15.00000   13.66923  8.87%          0s
  3873  1634      -      -      -   15.00000   14.00000  6.67%          5s
  9652  4298      -      -      -   15.00000   14.12500  5.83%         10s
 16535  6991      -      -      -   15.00000   14.18056  5.46%         15s
 23610  9427      -      -      -   15.00000   14.22333  5.18%         20s
...
We also include node counts from one of the independent solves, as well as elapsed times, to give some indication of forward progress.

Determinism

Concurrent optimization essentially sets up a race between multiple threads to solve your model, with the winning thread returning the solution that it found. In cases where multiple threads solve the model in roughly the same amount of time, small variations in runtime from one run to the next could mean that the winning thread is not the same each time. If your model has multiple optimal solutions (which is quite common in LP and MIP), then it is possible that running a concurrent solver multiple times on the same model could produce different optimal solutions. This is known as non-deterministic behavior.

By default, the Gurobi concurrent solvers all produce non-deterministic behavior. You can obtain deterministic behavior for the concurrent LP solver by setting the Method parameter to value 4. This setting typically increases runtimes slightly, but if your application is dependent on deterministic behavior, deterministic concurrent LP is often your best option. There is no similar setting for the concurrent MIP solver.
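
For example, a deterministic concurrent LP solve would be requested as follows in Python (placeholder file name):

import gurobipy as gp

m = gp.read("model.lp")       # placeholder file name
m.Params.Method = 4           # deterministic concurrent LP
m.optimize()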

Callbacks

Rather than providing callbacks from multiple independent solves simultaneously, we've again chosen to simplify behavior for the concurrent optimizer. In particular, we only supply callbacks from a single solve. A few consequences of this choice:

  • Information retrieved by your callback (solutions, objective bounds, etc.) will come from a single model.
  • User cutting planes are only applied to a single model.
  • You aren't allowed to use lazy constraints with concurrent MIP, since they would only be applied to one model.
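
As an illustration, the following Python sketch attaches a standard MIP callback to a concurrent MIP solve; the incumbent and bound it reports come from the single solve that issues callbacks (placeholder file name).

import gurobipy as gp
from gurobipy import GRB

def progress_cb(model, where):
    # Invoked from only one of the independent concurrent solves
    if where == GRB.Callback.MIP:
        best = model.cbGet(GRB.Callback.MIP_OBJBST)
        bound = model.cbGet(GRB.Callback.MIP_OBJBND)
        print(f"incumbent {best}, bound {bound}")

m = gp.read("model.mps")              # placeholder file name
m.Params.ConcurrentMIP = 2            # two independent concurrent MIP solves
m.optimize(progress_cb)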

Callbacks are further restricted for multi-objective problems. Normally you would receive both multi-objective-specific callbacks and callbacks from the algorithm that solves the model associated with each phase of the computation; for concurrent multi-objective solves, you only receive the multi-objective-specific callbacks, and only from a single solve.
