Use Benchmarks to Find the Best Solver for Your Needs

Put Our Solver to the Test

The Gurobi model library consists of over 10,000 models sourced from academia and from our industry prospects and customers. We test every change we make to the solver against this library, so we know that each new version of Gurobi delivers meaningful, powerful performance improvements to our users. The best way to know whether a solver will work for your needs is to use it. Request a free evaluation or academic license for yourself.

Try Gurobi
Learn more about benchmarks

Why Benchmarks?

Benchmarking is an important part of evaluating a solver, and public benchmarks can provide a useful perspective during your evaluation. When looking at any benchmark test, however, there are some critical points to consider in order to truly understand the results and select the solver that is best for you.

Gurobi and Benchmarks

We firmly believe that our software and our library are the most robust on the market—and we consistently win almost every major public benchmark test. Unfortunately, if we test competing solvers against our library, competitor licensing restrictions prevent us from publishing the results.

Proven Speed and Accuracy

Benchmark results can fluctuate over time as companies introduce new versions of their solvers. With few exceptions, the Gurobi Optimizer consistently wins in public benchmark test results, showing the:

  • Fastest times among linear programming (LP) solvers
  • Fastest times among mixed-integer programming (MIP) solvers
  • Fastest times for solving mixed-integer quadratic and quadratically constrained (MIQP and MIQCP) problems
  • Fastest times for detecting infeasibility

*MIPLIB 2017 Benchmark Comparison test performed by Dr. Hans Mittelmann.

Tips for Evaluating Benchmarks and Solvers

Not all benchmark tests are created equal.

The industry standard is the MIPLIB 2017 benchmark set, which consists of 240 models drawn from the larger 1,065-model MIPLIB library. All major solver providers agreed to this set as a valid benchmark reflecting the range of models commercial users are likely to face in the real world. The tests are run by Hans Mittelmann at Arizona State University, and you can see the results here. Be sure that the benchmark test you’re looking at is accurate and comes from a reliable, trusted source.

Double-check the defaults.

Because benchmark tests are usually run using a solver’s default settings, it’s important to understand what those defaults are. Defaults are chosen to provide the best overall performance across a wide range of models, so they’re often not optimal for any particular model. Understand benchmark tests in the context of their defaults, use them as a starting point, and ultimately test solvers against your own models, as in the sketch below.
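For instance, here is a minimal gurobipy sketch, assuming a placeholder model file, that solves a model once with all defaults and once with a single default overridden; MIPFocus=1 is an illustrative choice, not a recommendation:

```python
# Minimal sketch: compare default settings against one overridden
# parameter. "model.mps" is a placeholder for your own model file.
import gurobipy as gp

with gp.read("model.mps") as model:
    # First run: every parameter at its documented default.
    model.optimize()
    print(f"Defaults: {model.Runtime:.1f}s")

    # Second run: override a single default. MIPFocus=1 biases the
    # search toward finding feasible solutions quickly (illustrative).
    model.reset()
    model.Params.MIPFocus = 1
    model.optimize()
    print(f"MIPFocus=1: {model.Runtime:.1f}s")
```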

Dig deeper than face value.

Some benchmark tests can be misleading, intentionally or not. If a company cherry-picks models from, for example, the broader MIPLIB library and tunes its solver for that subset, it may be able to claim superiority over recognized industry-leading solvers. On a deeper look, you may find that the selected models are purely academic in nature and not reflective of the real world, or that tuning the competing solver would yield much better performance than the test parameters indicate. Make sure the results you’re seeing aren’t being manipulated or misconstrued to appear more impressive than they are.

Look for meaningful measures.

It’s important to determine whether a test measures something that is meaningful to you in practice. A test that measures the time required to produce poor-quality solutions isn’t relevant if your application requires high-quality solutions. Evaluate the benchmark test and the solver’s performance based on the problems and models you need to solve, as in the sketch below.
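As an example, here is a minimal gurobipy sketch, assuming a placeholder model file and an illustrative 1% quality threshold, that measures time-to-quality rather than time-to-termination:

```python
# Minimal sketch: measure how long the solver needs to reach a
# solution of a stated quality, not just how long it runs.
import gurobipy as gp

with gp.read("model.mps") as model:  # placeholder model file
    # Stop once the proved optimality gap is at most 1% (illustrative).
    model.Params.MIPGap = 0.01
    model.optimize()
    if model.SolCount > 0:
        print(f"Reached a {model.MIPGap:.2%} gap in {model.Runtime:.1f}s")
```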

Tune the parameters.

When testing a solver, you need the opportunity to tune its performance on your specific models. Gurobi includes over 100 adjustable parameters, as well as an Automatic Tuning Tool that intelligently explores parameter settings and reports specific settings you can use to optimize the solver for your models; a sketch of invoking it follows below.
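Here is a minimal gurobipy sketch of the tuning workflow, assuming a placeholder model file; the time limit is an illustrative choice:

```python
# Minimal sketch: run the automatic tuning tool, keep the best
# parameter set it finds, and save it for later runs.
import gurobipy as gp

with gp.read("model.mps") as model:  # placeholder model file
    model.Params.TuneTimeLimit = 3600  # tune for at most an hour (illustrative)
    model.Params.TuneResults = 1       # keep only the best setting found
    model.tune()
    if model.TuneResultCount > 0:
        model.getTuneResult(0)     # load the best parameter set
        model.write("tuned.prm")   # save it for reuse
        model.optimize()
```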

Using default settings, Gurobi has the fastest out-of-the-box performance on the industry-standard MIPLIB 2010 benchmark set. Using the Automatic Tuning Tool to tune the parameters for each individual model improves mean performance across the models by 68%, and our distributed tuning capabilities show a 152% performance improvement in the same amount of tuning time.

Explore these results and more

Commercial Users: Free Evaluation Version
Academic Users: Free Academic Version

Request a Price Quote

Please fill out this form if you’re interested in receiving a price quote. Can’t see the form? Please email sales@gurobi.com to request pricing.

Note to Academic Users: If you are at a recognized degree-granting institution, you can get a free academic license. You can learn about our academic program here.

 
