Model.addGenConstrIndicator()

addGenConstrIndicator(binvar, binval, lhs, sense=None, rhs=None, name="")

Add a new general constraint of type GRB.GENCONSTR_INDICATOR to a model.

An INDICATOR constraint $z = f \rightarrow a^Tx \leq b$ states that if the binary indicator variable $z$ is equal to $f$, where $f \in \{0,1\}$, then the linear constraint $a^Tx \leq b$ should hold. On the other hand, if $z = 1-f$, the linear constraint may be violated. The sense of the linear constraint can also be specified to be $=$ or $\geq$.
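
For concreteness, here is a minimal sketch (the model and the variables x, y, and z are illustrative names, not part of this reference) that enforces x + y >= 2 whenever z takes the value 0:

  import gurobipy as gp
  from gurobipy import GRB

  m = gp.Model("indicator_sketch")
  x = m.addVar(name="x")
  y = m.addVar(name="y")
  z = m.addVar(vtype=GRB.BINARY, name="z")

  # z = 0 -> x + y >= 2; if z = 1, the constraint may be violated
  m.addGenConstrIndicator(z, False, x + y, GRB.GREATER_EQUAL, 2.0)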

Note that the indicator variable <span>$</span>z<span>$</span> of a constraint will be forced to be binary, independent of how it was created.

You can also add an INDICATOR constraint using a special overloaded syntax. See the examples below for details.

Multiple INDICATOR constraints can be added in a single addGenConstrIndicator call by using matrix-friendly modeling objects. In this case, an MGenConstr object will be returned. The input arguments follow NumPy's broadcasting rules, with some restrictions:

  • binvar cannot be broadcasted over binval, and
  • the linear constraints defined by (lhs, sense, rhs) cannot be broadcasted over the indicator variable.
This means that via broadcasting, you can use a single indicator variable to control whether multiple linear constraints should hold. We refer you to the NumPy documentation for further information regarding broadcasting behaviour.
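
As an illustration of these rules, the sketch below (the names A, b, x, Z, and z are assumptions, not part of this reference) first adds one indicator constraint per row of A @ x == b using an MVar of indicators, and then uses a single Var broadcasted over all rows:

  import numpy as np
  import gurobipy as gp
  from gurobipy import GRB

  m = gp.Model("indicator_broadcast")
  A = np.array([[1.0, 2.0], [3.0, 1.0], [0.5, 4.0]])
  b = np.array([4.0, 6.0, 5.0])

  x = m.addMVar(2, name="x")
  Z = m.addMVar(3, vtype=GRB.BINARY, name="Z")  # one indicator per row
  z = m.addVar(vtype=GRB.BINARY, name="z")      # single indicator

  # Z[i] = 1 -> A[i, :] @ x == b[i]; returns an MGenConstr
  m.addGenConstrIndicator(Z, 1.0, A @ x == b)

  # z = 1 -> A[i, :] @ x == b[i] for every row i (z is broadcasted)
  m.addGenConstrIndicator(z, 1.0, A @ x == b)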

Arguments:

binvar (Var or MVar): The binary indicator variable or matrix variable.

binval (Boolean or ndarray): The value for the binary indicator variable that would force the linear constraint to be satisfied. Can be provided as an ndarray of distinct values if binvar is an MVar.

lhs (float, Var, LinExpr, MVar, MLinExpr, or TempConstr): Left-hand side expression for the linear constraint triggered by the indicator. Can be a constant, a Var, a LinExpr, an MVar, or an MLinExpr. Alternatively, a temporary constraint object, created using an overloaded comparison operator, can be used to define the linear constraint that is triggered by the indicator; see TempConstr for more information. In this case, the “sense” and “rhs” arguments must be left at their default value of None.

sense (char): Sense for the linear constraint. Options are GRB.LESS_EQUAL, GRB.EQUAL, or GRB.GREATER_EQUAL.

rhs (float or ndarray): Right-hand side value for the linear constraint. Can be provided as an ndarray of distinct values if lhs is an MVar or an MLinExpr (see the sketch following these argument descriptions).

name (string, optional): Name for the new general constraint. Note that name will be stored as an ASCII string. Thus, a name like 'A→B' will produce an error, because '→' cannot be represented as an ASCII character. Note also that names that contain spaces are strongly discouraged, because they can't be written to LP format files.
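
As a sketch of the explicit (lhs, sense, rhs) form with matrix arguments and an ndarray right-hand side (the names A, b, x, and Z are assumptions, not part of this reference), the following is one way to express Z[i] = 1 -> A[i, :] @ x <= b[i] without a temporary constraint object:

  import numpy as np
  import gurobipy as gp
  from gurobipy import GRB

  m = gp.Model("indicator_rhs_array")
  A = np.array([[1.0, 2.0], [3.0, 1.0]])
  b = np.array([4.0, 6.0])

  x = m.addMVar(2, name="x")
  Z = m.addMVar(2, vtype=GRB.BINARY, name="Z")

  # Z[i] = 1 -> A[i, :] @ x <= b[i]; lhs is an MLinExpr, sense is a
  # single character, and rhs is an ndarray of distinct values
  m.addGenConstrIndicator(Z, 1.0, A @ x, GRB.LESS_EQUAL, b)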

Return value:

New general constraint object. This can be a GenConstr or an MGenConstr depending on the types of the input arguments.

Example usage:

  # x7 = 1 -> x1 + 2 x2 + x4 = 1
  model.addGenConstrIndicator(x7, True, x1 + 2*x2 + x4, GRB.EQUAL, 1.0)

  # alternative form
  model.addGenConstrIndicator(x7, True, x1 + 2*x2 + x4 == 1.0)

  # overloaded form
  model.addConstr((x7 == 1) >> (x1 + 2*x2 + x4 == 1.0))

  # Matrix-friendly form where Z is an MVar. Creates multiple
  # indicator constraints, each specifying
  #   Z[i] = 1 -> sum_j a_ij x_j = b_i.
  model.addGenConstrIndicator(Z, 1.0, A @ x == b)

  # Matrix-friendly form where z is a single Var. Creates multiple
  # indicator constraints, each specifying
  #   z = 1 -> sum_j a_ij x_j = b_i
  # (the indicator variable is broadcasted).
  model.addGenConstrIndicator(z, 1.0, A @ x == b)
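
For a self-contained, runnable sketch that puts the scalar forms above into a complete model (the variable bounds and the objective are illustrative choices, not part of this reference):

  import gurobipy as gp
  from gurobipy import GRB

  model = gp.Model("indicator_demo")
  x1 = model.addVar(ub=1.0, name="x1")
  x2 = model.addVar(ub=1.0, name="x2")
  x4 = model.addVar(ub=1.0, name="x4")
  x7 = model.addVar(vtype=GRB.BINARY, name="x7")

  # x7 = 1 -> x1 + 2 x2 + x4 = 1, added via the overloaded form
  model.addConstr((x7 == 1) >> (x1 + 2*x2 + x4 == 1.0), name="ind")

  # Rewarding x7 makes the indicator active at the optimum, so the
  # linear constraint is enforced in the solution
  model.setObjective(x1 + x2 + x4 + 3*x7, GRB.MAXIMIZE)
  model.optimize()
  print(x7.X, x1.X + 2*x2.X + x4.X)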
