We all know that implementing optimization models is nothing like “normal” software development. You cannot define a simple to-do list upfront and execute the tasks according to a strict schedule. You need creative freedom, hours of research, and many failed attempts. It takes courage, creativity, and perseverance. Your colleagues should not ask for a plan. In fact, they should just leave you alone during this tough period until you tell them, “It’s done.” Optimization is an art. There is no other way. Right?

I have worked with many teams over the years, each with its own way of working when it comes to implementing optimization in practice. Some share the mindset above, while others take a different stance. Having seen the full spectrum, I tend to side with the ones applying best practices from other disciplines. In this article, I will show what that looks like in practice.

 

Step 1: Preparation

Before writing a single line of code, there are three things you want to have at hand.

  • Write down the mathematical model.
    Tools like Visual Studio Code and Word do a great job of supporting mathematical notation, but even a whiteboard or a piece of paper will do when you’re eager to start coding. The model will help you break down the work into smaller chunks, and it will also be useful for debugging later.
  • Collect at least one realistic dataset.
    You will obviously need this for testing. But it also helps you compare your solution to the status quo, both in terms of solution quality and runtime. It helps you validate whether your model captures the right level of detail. And it’s a great starting point if you want to demonstrate your solution to other people. Avoid using randomly generated data, since solver performance may be very different on such datasets.
  • Identify the core of your model.
    In other words, identify the smallest part of your model formulation that would still make practical sense. Note that this can be much smaller than a “minimum viable product”; you can leave out many details that are important for executing the generated solutions in practice but are not required in a first iteration. In workforce planning, you will need constraints to ensure each shift is assigned to one person, but you don’t need labor rules yet. In supply chain planning, you will need inventory balance constraints, but demand prioritization can be added later.
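To illustrate just how small such a core can be, consider the workforce planning example. The notation below is a hypothetical sketch, not a formulation from a specific project: with binary variables \(x_{es}\) indicating that employee \(e\) works shift \(s\), and assignment costs \(c_{es}\), the core model is nothing more than an assignment problem:

```latex
\min \sum_{e \in E} \sum_{s \in S} c_{es}\, x_{es}
\qquad \text{s.t.} \qquad
\sum_{e \in E} x_{es} = 1 \quad \forall s \in S,
\qquad x_{es} \in \{0, 1\} \quad \forall e \in E,\ s \in S
```

Labor rules, employee preferences, and fairness objectives can all be layered on top of this core in later iterations.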

 

Step 2: Implementation

Time to get cooking! Instead of implementing your complete model formulation before running anything, try taking an iterative approach. It takes a bit of discipline but is more rewarding in the long run.

  • Implement one objective or constraint at a time, introducing helper variables only when needed.
    It can be very tempting to add everything from the start, since you already have the formulation, but both the formulation and your code might contain mistakes. Mistakes are easier to diagnose and fix if you introduce them one at a time. For example, in power generation optimization, you could leave the startup and shutdown costs, with their corresponding helper variables, for later.
  • Make sure you can always run your code.
    I have often helped people debug their code. The answer to my first question, “Can you run it for me, top to bottom?” is often, “Well, there are a few issues that we’re working on, so I need to change a few things here and skip a few steps there first.” Try getting into the habit of going from one working iteration to the next. Version control and automated testing can make a huge difference here.
  • Design your model like a tiny black box: one function (or class) that turns problem data into solution data.
    Keep preprocessing and postprocessing outside that function whenever it does not involve the model. Hide the modeling objects from the outside world. This approach keeps your code clean (separation of concerns) and makes it easier to write tests. Have a look at our library of OptiMods for examples of this approach.

 

Step 3: Validation

The reason for applying an iterative approach is that you can validate your work continuously. The earlier you can do this, the more likely you are to find mistakes at a time when it’s still relatively easy to fix them. Implementing (or worse, deploying) a large optimization model at record speed is not that rewarding anymore if one bug after another surfaces. Here’s how to avoid that:

  • Make sure to visualize the decisions your model takes.
    It may seem tempting to focus on objective values to judge the correctness and value of your model: if the numbers improve, you must be doing something right. But that comparison is only useful when all solutions involved are feasible with respect to the exact same set of constraints. Instead, find a simple visualization of the decisions. For example, if you’re assigning shifts to employees, show a simple grid with employees and days on the axes. This will immediately help you identify issues with constraints like the maximum number of consecutive shifts. If you’re solving a vehicle routing problem, output the detailed schedule with stops and timing for each vehicle. Jupyter notebooks with Pandas and Plotly, or a simple Streamlit test application, are great ways to quickly add visualizations for testing purposes.
  • Add automated tests.
    Using frameworks like unittest is a great way to ensure your code still runs smoothly after any future change. It also forces you to think carefully about your model formulation. Try creating small datasets (ideally with sets of only one or two elements each) that each test one feature. Going back to the shift assignment problem: you would typically have many labor rules that dictate whether a certain combination of shifts can be assigned to one employee. For each of them, you could create a tiny dataset with one employee and a set of shifts that violates the rule at hand, then assert that your model is infeasible. As a second test, add a second employee and ensure that the shifts are distributed between the two.
  • Focus on correctness over performance.
    There’s no point in improving the performance of something that doesn’t give correct results anyway. And as you extend your model, performance may get better or worse—and you will only know when you try it. For the same reason, our Experts team often suggests reconsidering solver parameters when changes to the model have been made—since what used to work before might not give the expected behavior anymore.
  • Ask for feedback.
    As your users see the model in action, they will often provide feedback that might change the formulation. Some constraints might be missing (“that schedule can’t be executed in practice”), or you might be overcomplicating things (“you would never see that kind of input data”).
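In a real project, the tests above would assert infeasibility through the solver. As a language-agnostic sketch of the same idea, here is a unittest example applied to a plain rule-checking helper; the rule, the eleven-hour rest period, and all names are hypothetical:

```python
import unittest


def rest_rule_holds(shifts, min_rest=11):
    """Check one hypothetical labor rule: at least `min_rest` hours
    of rest between consecutive shifts of a single employee.

    shifts: list of (start_hour, end_hour) tuples in chronological order,
    with hours counted from a common origin so they can span days.
    """
    return all(nxt_start - prev_end >= min_rest
               for (_, prev_end), (nxt_start, _) in zip(shifts, shifts[1:]))


class TestRestRule(unittest.TestCase):
    def test_violation_detected(self):
        # Tiny dataset: one employee, two shifts, only 8 hours apart.
        self.assertFalse(rest_rule_holds([(14, 22), (30, 38)]))

    def test_feasible_pair(self):
        # Same shifts moved 4 hours further apart: 12 hours of rest.
        self.assertTrue(rest_rule_holds([(14, 22), (34, 42)]))


if __name__ == "__main__":
    unittest.main()
```

Each test exercises exactly one rule on a dataset small enough to verify by hand, which is what makes a failing test immediately diagnosable.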

 

The ideas above are not unique—in fact, they are applied in many software development teams working on topics including (but definitely not limited to) optimization. They are not a guarantee of success either; most likely, after completing the last part of your formulation, you will need to start looking into performance, code refactoring, and additional tests. But the next time you hear about best practices “out there,” ask yourself whether we, as operations research practitioners, are really that different. There’s always something to be learned.

AUTHOR

Ronald van der Velden

Technical Account Manager – EMEAI

Ronald van der Velden holds an MSc degree in Econometrics and Operations Research from Erasmus University Rotterdam. He started his career at Quintiq, where he fulfilled various roles, ranging from creating planning and scheduling models as a software developer, to business analysis and solution design at customers worldwide, to executing technical sales activities like value scans and "one-week demo challenges". He also spent two years as a lead developer at a niche company focused on 3D graphics in the entertainment industry before returning to his mathematical roots at Gurobi. In his spare time, he loves spending time with his wife and two sons, going for a run on the Veluwe, and working on hobby software projects.

