Command-Line Tuning

The grbtune command-line tool provides a very simple way to invoke parameter tuning on a model (or a set of models). You specify a list of parameter=value arguments first, followed by the name of the file containing the model to be tuned. For example, you can issue the following command (in a Windows command window, or in a Linux/Mac terminal window)...

> grbtune TuneTimeLimit=10000 c:\gurobi1003\win64\examples\data\misc07

(substituting the appropriate path to a model, stored in an MPS or LP file). The tool will try to find parameter settings that reduce the runtime on the specified model. When the tuning run completes, it writes a set of .prm files in the current working directory that capture the best parameter settings that it found. It also writes the Gurobi log files for these runs (in a set of .log files).

You can also invoke the tuning tool through our programming language APIs. That will be discussed shortly.

If you specify multiple model files at the end of the command line, the tuning tool will try to find settings that minimize the total runtime for the listed models.
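
For reference, here is a minimal sketch of a single-model tuning run through the Python API (gurobipy) mentioned above. The file name and time limit are illustrative; the API is discussed in more detail later.

import gurobipy as gp

# Read the model to tune (illustrative file name).
m = gp.read("misc07.mps")

# Overall tuning time budget, analogous to TuneTimeLimit on the command line.
m.Params.TuneTimeLimit = 100

# Run the tuning tool on this model.
m.tune()

# If tuning found at least one improved parameter set, load the best one
# into the model and save it as a .prm file.
if m.TuneResultCount > 0:
    m.getTuneResult(0)
    m.write("tune0.prm")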

Running the Tuning Tool

The first thing the tuning tool does is to perform a baseline run. The parameters for this run are determined by your choice of initial parameter values. If you set a parameter, it will take the chosen value throughout tuning. Thus, for example, if you set the Method parameter to 2, then the baseline run and all subsequent tuning runs will include this setting. In the example above, you'd do this by issuing the command:

> grbtune Method=2 TuneTimeLimit=100 misc07

For a MIP model, you will note that the tuning tool actually performs several baseline runs, and captures the mean runtime over all of these trials. In fact, the tool will perform multiple runs for each parameter set considered. This is done to limit the impact of random effects on the results, as discussed earlier. Use the TuneTrials parameter to adjust the number of trials performed.
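
The same idea carries over to the API: any parameter you set on the model before calling tune is held fixed during tuning, and TuneTrials controls the number of trials. A sketch, with illustrative values:

import gurobipy as gp

m = gp.read("misc07.mps")

# Explicitly set parameters are held fixed throughout tuning, so the
# baseline and every candidate parameter set will use Method=2.
m.Params.Method = 2

# Number of runs (with different random seeds) per candidate parameter set.
m.Params.TuneTrials = 3

m.Params.TuneTimeLimit = 100
m.tune()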

Once the baseline run is complete, the time for that run becomes the time to beat. The tool then starts its search for improved parameter settings. Under the default value of the TuneOutput parameter, the tool prints output for each parameter set that it tries...

Testing candidate parameter set 33...

        Method 2 (fixed)
        BranchDir -1
        Heuristics 0.001
        VarBranch 1
        Presolve 2

Solving with random seed #1 ... runtime 0.50s
Solving with random seed #2 ... runtime 0.60s+

Progress so far:
  baseline: mean runtime 0.76s
  best:     mean runtime 0.46s
Total elapsed tuning time 49s (51s remaining)

This output indicates that the tool has tried 33 parameter sets so far. For the 33rd set, it changed the values of the BranchDir, Heuristics, VarBranch, and Presolve parameters (the Method parameter was set in our initial parameter settings, so it appears in every parameter set the tool tries and is marked as fixed). The first trial solved the model in 0.50 seconds, while the second hit a time limit set by the tuning tool (indicated by the + after the runtime). If any trial hits a time limit, the corresponding parameter set is considered worse than any set that didn't hit a time limit. The output also shows that the best parameter set found so far gives a mean runtime of 0.46s. Finally, it shows the elapsed and remaining tuning time.

Tuning normally proceeds until the elapsed time exceeds the tuning time limit. However, hitting CTRL-C will also stop the tool.

When the tuning tool finishes, it prints a summary...

Tested 83 parameter sets in 98.87s

Baseline parameter set: mean runtime 0.76s

        Method 2 (fixed)

 # Name              0        1        2      Avg  Std Dev
 0 MISC07        0.85s    0.76s    0.67s    0.76s     0.07

Improved parameter set 1 (mean runtime 0.40s):

        Method 2 (fixed)
        DegenMoves 1
        Heuristics 0
        VarBranch 1
        CutPasses 5

 # Name              0        1        2      Avg  Std Dev
 0 MISC07        0.38s    0.41s    0.42s    0.40s     0.01

Improved parameter set 2 (mean runtime 0.44s):

        Method 2 (fixed)
        Heuristics 0
        VarBranch 1
        CutPasses 5

 # Name              0        1        2      Avg  Std Dev
 0 MISC07        0.42s    0.44s    0.45s    0.44s     0.01

Improved parameter set 3 (mean runtime 0.49s):

        Method 2 (fixed)
        Heuristics 0
        VarBranch 1

 # Name              0        1        2      Avg  Std Dev
 0 MISC07        0.49s    0.50s    0.49s    0.49s     0.01

Improved parameter set 4 (mean runtime 0.67s):

        Method 2 (fixed)
        VarBranch 1

 # Name              0        1        2      Avg  Std Dev
 0 MISC07        0.74s    0.58s    0.70s    0.67s     0.07

Wrote parameter files: tune1.prm through tune4.prm
Wrote log files: tune1.log through tune4.log

The summary shows the number of parameter sets the tool tried and provides details on a few of the best parameter sets it found. It also shows the names of the .prm and .log files it wrote. You can change the names of these files using the ResultFile parameter. If you set ResultFile=model.prm, for example, the tool would write model1.prm through model4.prm and model1.log through model4.log. For each displayed parameter set, the summary lists the parameters used and a small table showing results for each model and each trial, together with the average runtime and the standard deviation.

The number of sets that are retained by the tuning tool is controlled by the TuneResults parameter. The default behavior is to keep the sets that achieve the best trade-off between runtime and the number of changed parameters. In other words, we report the set that achieves the best result when changing one parameter, when changing two parameters, etc. We actually report a Pareto frontier, so for example we won't report a result for three parameter changes if it is worse than the result for two parameter changes.
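
In the Python API, the retained parameter sets can be inspected and written out after tune returns. A sketch, with the output file names chosen to mirror the command-line output above:

import gurobipy as gp

m = gp.read("misc07.mps")
m.Params.TuneTimeLimit = 100

# Number of improved parameter sets to retain (the default keeps the
# Pareto frontier described above).
m.Params.TuneResults = 3

m.tune()

# Results are ordered from best to worst; index 0 is the best set found.
for i in range(m.TuneResultCount):
    m.getTuneResult(i)              # load the i-th parameter set into the model
    m.write(f"tune{i + 1}.prm")     # save it as tune1.prm, tune2.prm, ...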

Other Tuning Parameters

So far, we've only talked about using the tuning tool to minimize the time to find an optimal solution. For MIP models, you can also minimize the optimality gap after a specified time limit. You don't have to take any special action to do this; you just set a time limit. Whenever a baseline run hits this limit, the tuning tool will automatically try to minimize the MIP gap. To give an example, the command...

> grbtune TimeLimit=100 glass4

...will look for a parameter set that minimizes the optimality gap achieved after 100s of runtime on model glass4. If the tool happens to find a parameter set that solves the model within the time limit, it will then try to find settings that minimize mean runtime.

For models that don't solve to optimality within the specified time limit, you can gain more control over the criterion used to choose a winning parameter set with the TuneCriterion parameter. This parameter allows you to tell the tuning tool to search for parameter settings that produce the best incumbent solution or the best lower bound, rather than always minimizing the MIP gap.
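
A sketch of gap-oriented tuning through the Python API follows. The time limits are illustrative, and the TuneCriterion value shown is an assumption; consult the TuneCriterion parameter documentation for the exact meaning of each value.

import gurobipy as gp

m = gp.read("glass4.mps")

# Per-run time limit: runs that hit this limit are compared by MIP gap
# rather than by runtime.
m.Params.TimeLimit = 100

# Overall tuning budget (illustrative).
m.Params.TuneTimeLimit = 3600

# Assumption: this value asks the tool to judge time-limited runs by the
# best incumbent objective; see the TuneCriterion documentation.
m.Params.TuneCriterion = 2

m.tune()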

You can modify the TuneOutput parameter to produce more or less output. The default value is 2. A setting of 0 produces no output; a setting of 1 only produces output when an improvement is found; a setting of 3 produces a complete Gurobi log for each run performed.

If you would like to use a MIP start with your tuning run, you can include the name of the start file immediately after the model name in the argument list. For example:

> grbtune misc07.mps misc07.mst

You can also use MIP starts when tuning over multiple models; any model that is immediately followed by a start file in the argument list will use the corresponding start. For example:

> grbtune misc07.mps misc07.mst p0033.mps p0548.mps p0548.mst
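
In the Python API, a start file can be loaded into the model before tuning. A sketch, assuming the loaded start is applied to the tuning runs just as it is with the command-line tool:

import gurobipy as gp

m = gp.read("misc07.mps")

# Load a MIP start from a .mst file into the model.
m.read("misc07.mst")

m.Params.TuneTimeLimit = 100
m.tune()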
