
Working With Multiple Objectives

Of course, specifying a set of objectives is only the first step in solving a multi-objective optimization problem. The next step is to indicate how the objectives should be combined. As noted earlier, we support two approaches: blended and hierarchical.

Blended Objectives

A blending approach creates a single objective by taking a linear combination of your objectives. You provide a weight for each objective as an argument to setObjectiveN. Alternatively, you can use the ObjNWeight attribute, together with ObjNumber. The default weight for an objective is 1.0.

To give an example, if your model has two objectives, $1 + x + 2y$ and $y + 2z$, and if you give weights of $-1$ and $2$ to them, respectively, then Gurobi would solve your model with a blended objective of $-1 \cdot (1 + x + 2y) + 2 \cdot (y + 2z) = -1 - x + 4z$.
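The blended objective above can be checked with a quick calculation. The sketch below is purely illustrative, plain Python arithmetic rather than a Gurobi API call, and the `blend` helper is our own name, not part of gurobipy:

```python
# Combine linear objectives into one blended objective by summing
# weighted coefficients. Keys are variable names; the empty string
# holds the constant term.
def blend(objectives, weights):
    blended = {}
    for obj, w in zip(objectives, weights):
        for var, coeff in obj.items():
            blended[var] = blended.get(var, 0) + w * coeff
    # Drop terms that cancel out
    return {v: c for v, c in blended.items() if c != 0}

obj1 = {'': 1, 'x': 1, 'y': 2}   # 1 + x + 2y
obj2 = {'y': 1, 'z': 2}          # y + 2z

print(blend([obj1, obj2], [-1, 2]))  # -1 - x + 4z (the y terms cancel)
```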

You should avoid weights that are very large or very small. A very large weight (e.g., larger than $10^6$) may lead to very large objective coefficients, which can cause numerical difficulties. A very small weight (e.g., smaller than $10^{-6}$) may cause the contribution from that objective to the overall blended objective to be smaller than tolerances, which may lead to that objective being effectively ignored.

Hierarchical Objectives

A hierarchical or lexicographic approach assigns a priority to each objective, and optimizes for the objectives in decreasing priority order. During each of these passes, it finds the best solution for the current objective, but only from among those that would not degrade the solution quality for higher-priority objectives. You provide the priority for each objective as an argument to setObjectiveN. Alternatively, you can use the ObjNPriority attribute. Priorities are integral, not continuous. Larger values indicate higher priorities. The default priority for an objective is 0.

To give an example, suppose your model has two objectives, with priorities $10$ and $5$ and objective weights $1.0$ and $-1.0$, respectively. If the optimal solution for the first objective has value $100$, then the solver will find the solution that optimizes $-1$ times the second objective from among all solutions with objective value $100$ for the first objective.
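The pass-by-pass logic can be illustrated on a finite set of candidate solutions. This toy filter is a conceptual sketch of lexicographic optimization over an explicit list, not how Gurobi implements it internally:

```python
# Toy lexicographic optimization over an explicit candidate list.
# Each pass keeps only the candidates that are optimal (after
# applying the objective's weight) for the current priority level.
def lexicographic(candidates, objectives):
    # objectives: list of (priority, weight, func) tuples
    for _, weight, func in sorted(objectives, key=lambda o: -o[0]):
        best = min(weight * func(c) for c in candidates)
        candidates = [c for c in candidates if weight * func(c) == best]
    return candidates

# Candidates are (a, b) pairs; objective a has priority 10 and
# weight 1.0, objective b has priority 5 and weight -1.0 (so the
# second pass effectively maximizes b among the survivors).
cands = [(100, 3), (100, 7), (120, 9)]
objs = [(10, 1.0, lambda c: c[0]), (5, -1.0, lambda c: c[1])]
print(lexicographic(cands, objs))  # [(100, 7)]
```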

Allowing Multiple-Objective Degradation

By default, our hierarchical approach won't allow later objectives to degrade earlier objectives, subject to the user-given ending gap conditions for the optimization problem. More precisely, the base value used to define which solutions are acceptable for lower-priority objectives (for a minimization problem) is computed as:

$\mathrm{base\_value} = \max\{bestsol,\; bestbound + \vert bestsol\vert \cdot rgap,\; bestbound + agap\},$

where bestsol is the value of the best incumbent solution, bestbound is the value of the best proven lower bound for the problem, rgap is the relative MIP gap, and agap is the absolute MIP gap. The optimization pass for the next objective will only consider solutions whose value for this objective is at most the base value.
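The base value formula can be evaluated directly. The numbers below are illustrative values of our own choosing, not Gurobi defaults:

```python
# Evaluate the base value that caps acceptable objective values for
# lower-priority passes (minimization case).
def base_value(bestsol, bestbound, rgap, agap):
    return max(bestsol,
               bestbound + abs(bestsol) * rgap,
               bestbound + agap)

# Suppose a pass ends with incumbent 100, proven lower bound 99.8,
# a relative gap of 1e-2 and an absolute gap of 0.5 (made-up values):
print(base_value(100.0, 99.8, 1e-2, 0.5))  # 100.8
```

Here the relative-gap term dominates: $99.8 + 100 \cdot 0.01 = 100.8$, so lower-priority passes would accept solutions with first-objective value up to $100.8$.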

This behavior can be relaxed for MIPs through a pair of tolerances: a relative and an absolute tolerance. These are provided as arguments to setObjectiveN, or they can be set using attributes ObjNRelTol and ObjNAbsTol. By setting one of these for a particular objective, you can indicate that later objectives are allowed to degrade this objective by the specified relative or absolute amount, respectively. In our earlier example, if the optimal value for the first objective is $100$, and if we set ObjNAbsTol for this objective to $20$, then the second optimization pass would find the best solution for the second objective from among all solutions with objective $120$ or better for the first objective. Note that if you modify both tolerances, later optimizations would use the looser of the two values (i.e., the one that allows the larger degradation).

Objective degradations are handled differently for multi-objective LP models. For LP models, solution quality for higher-priority objectives is maintained by fixing some variables to their values in previous optimal solutions. These fixings are decided using variable reduced costs. The value of the ObjNAbsTol attribute indicates the amount by which a fixed variable's reduced cost is allowed to violate dual feasibility, whereas the ObjNRelTol attribute is simply ignored. If you want the MIP behavior, where the degradation is controlled more directly, you can add a dummy binary variable to the model, thus transforming it into a MIP. Solving the resulting multi-objective MIP will be much more time consuming than solving the original multi-objective LP.

Combining Blended and Hierarchical Objectives

Every objective in a multi-objective model has both a weight and a priority, which allows you to seamlessly combine blended and hierarchical approaches. To understand how this works, we should first provide more detail on how hierarchical objectives are handled.

When you specify a different priority for each of $n$ objectives, the solver performs $n$ separate optimization passes. In each pass, in decreasing priority order, it optimizes for the current objective multiplied by its ObjNWeight attribute, while imposing constraints that ensure that the quality of higher-priority objectives isn't degraded by more than the specified tolerances.

If you give the same priority to multiple objectives, then they will be handled in the same optimization pass, resulting in fewer than $n$ total passes for $n$ objectives. More precisely, one optimization pass is performed per distinct priority value, in order of decreasing priority, and all objectives with the same priority are blended together, using the weights for those objectives. This gives you quite a bit of flexibility when combining the blended and hierarchical approaches.

One subtle point when blending multiple objectives within a single level in a hierarchical approach relates to the handling of degradations from lower-priority levels. The objective degradation allowed after a blended optimization pass is the maximum of the absolute and relative degradations allowed by each of the participating objectives. For example, if we have three objectives with ObjNPriority equal to $\{2, 2, 1\}$, ObjNRelTol equal to $\{0.10, 0.05, 0.00\}$, and ObjNAbsTol equal to $\{0, 1, 2\}$, and if the best solution for the first priority objective is $10$, then the allowed degradation for the first priority objective is $\max\{10 \cdot 0.10, 10 \cdot 0.05, 0, 1\} = 1$.
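That worked example can be checked numerically. This is plain arithmetic over the tolerances of the two priority-2 objectives, not a Gurobi call, and the helper name is our own:

```python
# Allowed degradation for a blended level: the largest of each
# participating objective's relative tolerance applied to the
# level's best value, and each participating absolute tolerance.
def allowed_degradation(best_value, reltols, abstols):
    return max([abs(best_value) * r for r in reltols] + list(abstols))

# Priority-2 objectives: reltols {0.10, 0.05}, abstols {0, 1};
# best blended value 10. The priority-1 tolerances play no role here.
print(allowed_degradation(10, [0.10, 0.05], [0, 1]))  # 1.0
```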

Querying Multi-Objective Results

Once you have found one or more solutions to your multi-objective model, you can query the achieved objective value for each objective on each solution. Specifically, if you set the ObjNumber parameter to choose an objective, and the SolutionNumber parameter to choose a solution, then the ObjNVal attribute will give the value of the chosen objective on the chosen solution. This is illustrated in the following Python example:

from gurobipy import GRB, read

# Read and solve a model with multiple objectives
m = read('input.mps')
m.optimize()

# Get the set of variables
x = m.getVars()

# Ensure status is optimal
assert m.Status == GRB.Status.OPTIMAL

# Query number of multiple objectives, and number of solutions
nSolutions  = m.SolCount
nObjectives = m.NumObj
print('Problem has', nObjectives, 'objectives')
print('Gurobi found', nSolutions, 'solutions')

# For each solution, print value of first three variables, and
# value for each objective function
solutions = []
for s in range(nSolutions):
  # Set which solution we will query from now on
  m.params.SolutionNumber = s

  # Print objective value of this solution in each objective
  print('Solution', s, ':', end='')
  for o in range(nObjectives):
    # Set which objective we will query
    m.params.ObjNumber = o
    # Query the o-th objective value
    print(' ', m.ObjNVal, end='')

  # Print first three variables in the solution
  n = min(len(x), 3)
  for j in range(n):
    print(' ', x[j].VarName, x[j].Xn, end='')
  print('')

  # Query the full vector of the s-th solution
  solutions.append(m.getAttr('Xn', x))
