SCIP 8
Abstract The SCIP Optimization Suite provides a collection of software packages for
mathematical optimization centered around the constraint integer programming frame-
work SCIP. This paper discusses enhancements and extensions contained in version 8.0
of the SCIP Optimization Suite. Major updates in SCIP include improvements in sym-
metry handling and decomposition algorithms, new cutting planes, a new plugin type for
cut selection, and a complete rework of the way nonlinear constraints are handled. Addi-
tionally, SCIP 8.0 now supports interfaces for Julia as well as Matlab. Further, UG now
includes a unified framework to parallelize all solvers, a utility to analyze computational
experiments has been added to GCG, dual solutions can be postsolved by PaPILO, new
heuristics and presolving methods were added to SCIP-SDP, and additional problem
classes and major performance improvements are available in SCIP-Jack.
∗ Extended author information is available at the end of the paper. The work for this article has
been partly conducted within the Research Campus MODAL funded by the German Federal Ministry
of Education and Research (BMBF grant number 05M14ZAM) and has received funding from the
European Union’s Horizon 2020 research and innovation programme under grant agreement No 773897.
It has also been partly supported by the German Research Foundation (DFG) within the Collaborative
Research Center 805, Project A4, and the EXPRESS project of the priority program CoSIP (DFG-
SPP 1798), the German Research Foundation (DFG) within the project HPO-NAVI (project number
391087700).
1 Introduction
The SCIP Optimization Suite comprises a set of complementary software packages de-
signed to model and solve a large variety of mathematical optimization problems:
− the modeling language Zimpl [56],
− the presolving library PaPILO for linear and mixed-integer linear programs, a new
addition in version 7.0 of the SCIP Optimization Suite [81],
− the simplex-based linear programming solver SoPlex [117],
− the constraint integer programming solver SCIP [3], which can be used as a fast
standalone solver for mixed-integer linear and nonlinear programs and a flexible
branch-cut-and-price framework,
− the automatic decomposition solver GCG [32], and
− the UG framework for parallelization of branch-and-bound solvers [99].
All six tools can be downloaded in source code and are freely available for members of
noncommercial and academic institutions. They are accompanied by several extensions
for solving specific problem-classes such as the award-winning Steiner tree solver SCIP-
Jack [34] and the mixed-integer semidefinite programming solver SCIP-SDP [31]. This
paper describes the new features and enhanced algorithmic components contained in
version 8.0 of the SCIP Optimization Suite.
SCIP solves mixed-integer linear programs (MILPs), given in the form

    min  c⊤x
    s.t. Ax ≥ b,
         ℓ_i ≤ x_i ≤ u_i   for all i ∈ N,        (1)
         x_i ∈ Z           for all i ∈ I,

as well as mixed-integer nonlinear programs (MINLPs), given in the form

    min  f(x)
    s.t. g_k(x) ≤ 0        for all k ∈ M,
         ℓ_i ≤ x_i ≤ u_i   for all i ∈ N,        (2)
         x_i ∈ Z           for all i ∈ I.

More generally, SCIP solves constraint integer programs (CIPs), that is, optimization problems
with arbitrary constraints and a linear objective function that satisfy the following prop-
erty: if all integer variables are fixed, the remaining subproblem must form a linear or
nonlinear program.
In order to solve CIPs, SCIP constructs relaxations—typically LP relaxations. If the
relaxation solution is not feasible for the current subproblem, the enforcement callbacks
of the constraint handlers need to take measures to eventually render the relaxation solu-
tion infeasible for the updated relaxation, for example by branching or separation. Being
a framework for solving CIPs, SCIP can be extended by plugins to be able to solve any
CIP. The default plugins included in the SCIP Optimization Suite provide tools to solve
MILPs and many MINLPs as well as some classes of instances from constraint program-
ming, satisfiability testing, and pseudo-Boolean optimization. Additionally, SCIP-SDP
makes it possible to solve mixed-integer semidefinite programs.
The core of SCIP coordinates a central branch-cut-and-price algorithm. The meth-
ods for processing constraints of a given type are implemented in a corresponding con-
straint handler, and advanced methods like primal heuristics, branching rules, and cut-
ting plane separators can be integrated as plugins with a pre-defined interface. SCIP
comes with many such plugins needed to achieve a good MILP and MINLP performance.
In addition to plugins supplied as part of the SCIP distribution, new plugins can be cre-
ated by users. This basic design and solving process is described in more detail by
Achterberg [2].
By design, SCIP interacts closely with the other components of the SCIP Optimiza-
tion Suite. Optimization models formulated in Zimpl can be read by SCIP. PaPILO
provides an additional fast and effective presolving procedure that is called from a SCIP
presolver plugin. The linear programs (LPs) solved repeatedly during the branch-cut-
and-price algorithm are by default optimized with SoPlex. Interfaces to several exter-
nal LP solvers exist, and new interfaces can be added by users. GCG extends SCIP to
automatically detect problem structure and generically apply decomposition algorithms
based on the Dantzig-Wolfe or Benders decomposition scheme. Finally, the
default instantiations of the UG framework use SCIP as a base solver in order to per-
form branch-and-bound in parallel computing environments with shared or distributed
memory architectures.
New Developments and Structure of the Paper This paper focuses on two main aspects.
The first one is to explain the changes and progress made in the solving process of SCIP
and analyze the resulting improvements on MILP and MINLP instances, both in terms
of performance and robustness. A performance comparison of SCIP 8.0 against SCIP
7.0 is carried out in Section 2. Improvements to the core of SCIP are presented in
Section 3 and include
− a new framework for handling nonlinear constraints,
− symmetry handling on general variables and improved orbitope detection,
− a new separator for mixing cuts,
− improvements to decomposition-based heuristics and the Benders decomposition
framework, and
− a new type of plugins for cut selection, and several technical improvements.
A more detailed explanation of the changes to the MINLP solving process and the new
expression framework is given in Section 4. Improvements to the default LP solver So-
Plex and the presolver PaPILO are explained in Sections 5 and 6, respectively. This aspect
will be of interest to the optimization community working on methods and algorithms re-
lated to these building blocks, and to practitioners wishing to understand the performance
they observe on their particular instances.
The second aspect of this paper is to present the evolving possibilities for working
with the SCIP Optimization Suite 8.0 for optimization practitioners. This includes
improvements and changes to the interfaces in Section 7 and the modeling language
Zimpl in Section 8; to SCIP extensions specialized for other computational settings
such as distributed computing with UG in Section 9 and Dantzig-Wolfe decompositions
with GCG in Section 10; and finally to SCIP extensions for particular problem classes
such as the mixed-integer semidefinite solver SCIP-SDP in Section 11 and the Steiner
tree solver SCIP-Jack in Section 12.
2 Performance Comparison of SCIP 7.0 and SCIP 8.0
We use the SCIP Optimization Suite 7.0.0 as the baseline, including SoPlex 5.0.0 and
PaPILO 1.0.0, and compare it with the SCIP Optimization Suite 8.0, including SoPlex 6.0
and PaPILO 2.0. Both were compiled using GCC 7.5 and use Ipopt 3.12.13 as NLP sub-
solver (built with the MUMPS 4.10.0 numerical linear algebra solver), CppAD 20180000.0
as algorithmic differentiation library, and bliss 0.73 for detecting the graph automorphisms
used in MIP symmetry detection. The time limit was set to 7200 seconds in all cases.
The MILP instances are selected from MIPLIB 2003, 2010, and 2017 [39] and the
COR@L instance set, including all instances previously solved by SCIP 7.0.0 with
at least one of five random seeds or newly solved by SCIP 8.0 with at least one of
five random seeds; this amounts to 347 instances. The MINLP instances are similarly
selected from the MINLPLib¹, with newly solvable instances added to the ones previously
solved by SCIP 7, for a total of 113 instances.
All performance runs are carried out on identical machines with Intel Xeon E5-2690
v4 CPUs at 2.60GHz and 128GB of RAM. A single run is carried out on each machine
in single-threaded mode. Each optimization problem is solved with SCIP using five
different seeds for the random number generators. This results in a testset of 565 MINLPs
and 1735 MILPs. Instances for which the solver reported numerically inconsistent results
are excluded from the presented results.
The results of the performance runs on MILP instances are presented in Table 1. The
changes introduced with SCIP 8.0 improved the performance on MILPs both in terms
of number of solved instances and shifted geometric mean of the time. Furthermore,
1 https://www.minlplib.org
Table 1: Performance comparison for MILP instances
the difference in geometric mean time is more pronounced on harder instances, with an
improvement of up to 52% on instances taking more than 1000 seconds to solve. The
improvement is more limited on the instances solved by both versions, for which the
relative improvement is only 11%. This indicates that the overall speedup is due more
to newly solved instances than to improvements on instances that were already solved
by SCIP 7.0.
With the major revision of the handling of nonlinear constraints, the performance of
SCIP on MINLPs has changed considerably on this instance set compared to SCIP 7.0. The
MINLP performance results are summarized in Table 2. On all subsets of the instances
selected by runtime, more instances are solved by SCIP 8.0 than by SCIP 7.0. Fur-
thermore, SCIP 8.0 solves the instances for each of these subsets with a shorter shifted
geometric mean time even though it produces more nodes in the branch-and-bound tree.
On the 382 instances solved by both versions, SCIP 8.0 requires fewer nodes and less
time. The number of instances solved by only one of the two versions (diff-timeouts) is
much higher than reported in previous release reports with similar experiments, with 66
instances newly solved by SCIP 8.0 and 46 instances previously solved that SCIP 8.0
did not succeed on.
A finer comparison of the two SCIP versions on additional subsets of instances is
provided in Table 3. Instances are split into mixed-integer and continuous as well as
nonconvex and convex problems. They are classified as mixed-integer if at least one
integer or binary variable is present in the original problem. The convexity classification
is taken from the information provided on the MINLPLib website.
Table 3 shows that SCIP 8.0 brings the most significant improvements for nonconvex
problems, with 41 more instances solved and a drastic speedup factor of 3.54 on the
purely continuous nonconvex problems. Performance has, however, degraded on convex
problems, with 21 instances no longer solved and the shifted geometric mean
runtime more than tripled.
As can be seen in the table, this is mostly due to worse performance on a spe-
cific group of instances, the syn group, which includes the instances syn40m04h,
rsyn0840m03h, rsyn0820m02m, syn20h, syn30m03h, and rsyn0840m04h. The solving
time for syn instances has degraded significantly with SCIP 8.0, while the degradation
is moderate on the other convex instances. The much higher time on the syn instances
alone explains the degradation on the total convex subset. We presume that the new
expression simplification currently obfuscates some structure on instances of the syn
group that was exploited by SCIP 7.0.
An MINLP performance evaluation that focuses only on the changes in handling
nonlinear constraints is given in Section 4.14.
Table 2: Performance comparison for MINLP
3 SCIP
The SCIP 8.0 release comes with a major change in the way that nonlinear constraints
are handled. The main motivation for this change is twofold: First, it aims at increasing
the reliability of the solver and alleviating a numerical issue that arose from problem
reformulations and led to SCIP returning solutions that are feasible in the reformulated
problem, but infeasible in the original problem. Second, the new design of the nonlinear
framework reduces the ambiguity of expression and structure types by implementing
different plugin types for low-level expression types that define expressions, and high-
level structure types that add functionality for particular, often overlapping structures.
Finally, a number of new features for improving the solver’s performance on MINLPs
were introduced. A detailed description of the changes can be found in Sections 4
and 3.2.2.
Symmetries are well known to have an adverse effect on the performance of MILP and
MINLP solvers, because symmetric subproblems are treated repeatedly without pro-
viding new information to the solver. For this reason, different methods to handle
symmetries exist in SCIP. Until version 7.0, SCIP was only able to handle symmetries
in MILPs. With the release of SCIP 8.0, symmetries in MINLPs can be handled as well.
Furthermore, the release of SCIP 8.0 features several algorithmic enhancements of ex-
isting symmetry handling methods as well as the implementation of further ones. In the
following, we describe the kinds of symmetries SCIP can handle and list the techniques
used in SCIP 7.0. Afterwards, we describe the novel symmetry handling methods and
highlight algorithmic enhancements.
Let us start with some preliminary remarks. For a permutation γ of the variable
index set {1, …, n} and a vector x ∈ Rⁿ, we define γ(x) := (x_{γ⁻¹(1)}, …, x_{γ⁻¹(n)}). We say
that γ is a symmetry of (MINLP) if the following holds: x ∈ Rⁿ is feasible for (MINLP) if
and only if γ(x) is feasible, and c⊤x = c⊤γ(x). The set of all symmetries forms a group Γ̄,
the symmetry group of (MINLP). Since computing Γ̄ is NP-hard, see Margot [72], one
typically refrains from handling all symmetries. Instead, one only computes a subgroup Γ
of Γ̄ that keeps the constraint system of (MINLP) invariant. Computing this formulation
group Γ for MILPs can be accomplished by computing symmetries of an auxiliary graph,
see Salvagnin [93]. In SCIP 8.0, the already existing routine for computing symmetries
of MILPs has been extended to handle also nonlinear constraints. To detect symmetries
of the auxiliary graphs, SCIP uses the graph isomorphism package bliss [49].
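To fix notation, the action of a permutation on a point can be sketched in a few lines of Python (0-indexed lists; the helper name is invented for illustration):

```python
# Illustrative sketch of the definition gamma(x) = (x_{gamma^{-1}(1)}, ...,
# x_{gamma^{-1}(n)}), using 0-indexed lists.

def apply_permutation(gamma, x):
    """gamma[i] is the image of index i under the permutation."""
    n = len(x)
    inv = [0] * n
    for i, gi in enumerate(gamma):
        inv[gi] = i                       # gamma^{-1}(gi) = i
    return [x[inv[i]] for i in range(n)]  # position i receives x_{gamma^{-1}(i)}

# The cyclic shift 0 -> 1 -> 2 -> 0 moves each entry one position forward:
print(apply_permutation([1, 2, 0], [10.0, 20.0, 30.0]))  # [30.0, 10.0, 20.0]
```

Checking whether such a γ is a symmetry then amounts to testing that γ(x) is feasible exactly when x is, and that the objective value is unchanged.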
SCIP 7.0 used two paradigms to handle symmetries of binary variables: a constraint-
based approach or the pure propagation-based approach orbital fixing [70, 71, 80]. The
constraint-based approach is implemented via three different constraint handler plugins
to deal with different kinds of matrix symmetries. The symresack constraint handler [47]
provides separation and propagation routines for general permutations γ, whereas the
orbisack constraint handler [50] uses specialized separation and propagation methods if γ
is a composition of 2-cycles. The orbitope constraint handler [15, 47] handles symmetries
of special subgroups of Γ. These subgroups are required to act on binary matrices
and to be able to reorder their columns arbitrarily. Moreover, if the variables affected
by the corresponding permutations or groups interact with set packing or partitioning
constraints in a certain way, all constraint handlers provide specialized separation and
propagation mechanisms to find stronger cutting planes and reductions [46, 51, 52]. The
common ground of these constraint handlers is that they enforce solutions that are
lexicographically maximal in their orbit of symmetric solutions.
The integer parameter misc/usesymmetry can be used to enable or disable these two
methods. In SCIP 7.0, the parameter ranged between 0 and 3, where bit 1 en-
ables/disables the constraint-based approach and bit 2 enables/disables
orbital fixing. If the group Γ is a product group Γ = Γ1 ⊗ · · · ⊗ Γk, the variables affected
by one factor of Γ are not affected by any other factor. In this case, one can apply
different symmetry handling methods to the different factors. The sets of variables af-
fected by the different factors are called the components of Γ. Thus, if both methods are
enabled, SCIP searches for independent components of the symmetry group Γ and, de-
pending on structural properties of a component, uses either cutting planes or orbital
fixing: if a component can be completely handled by orbitopes, SCIP uses orbitopes,
and orbital fixing otherwise; see the SCIP Optimization Suite 7.0 release report [35] for
further details.
In SCIP 8.0, symmetry detection has been extended to handle two types of symmetries
in nonlinear constraints.
The detection of permutation-based symmetries is performed by analyzing expression
graphs as first proposed by Liberti [60]. The detected automorphisms are then projected
onto problem variables, which yields a permutation group.
Symmetries of a different type, referred to as complementary symmetries, are de-
tected in quadratic problems by considering affine transformations γ : Rⁿ → Rⁿ, γ(x) =
Rx + s, where R ∈ Rⁿˣⁿ with entries R_ij ∈ {−1, 0, 1}, and s ∈ Rⁿ with each s_i equal to
either some constant d_i or 0. Such a transformation defines a complemen-
tary symmetry if it preserves the objective and constraint functions. The detection is
performed by solving an auxiliary problem that compares coefficients before and after
substituting variables for their complements.
For more details, see Wegscheider [115].
One drawback of the aforementioned approaches is that they can only handle symmetries of
binary variables, but not of general integer or continuous variables. Moreover, SCIP 7.0
can only detect orbitopes when a component of Γ can be completely handled by orbitopes,
but not if only some part of the component admits orbitopes. In SCIP 8.0, both
issues are resolved by the implementation of further symmetry handling methods and a
refined detection and handling mechanism for orbitopes.
x_{ℓ_i} ≥ x_j,   j ∈ O_i,   i ∈ {1, …, k},
Improved Orbitope Detection As mentioned previously, SCIP 7.0 uses orbitopes for
a component of Γ only if all permutations within the component form an orbitope
structure. This, however, can be rather restrictive, as illustrated next. Consider the
problem of coloring an undirected graph G = (V, E) with k colors. Every feasible
coloring can be encoded by a matrix X ∈ {0, 1}^{V×k}, where X_{vi} = 1 if and only if node v
is colored by color i. We can transfer the coloring X into another equivalent coloring Y
by taking an arbitrary permutation π of {1, …, k} and defining Y_{vi} = X_{vπ(i)}. That
is, the symmetry group Γ of the coloring problem can reorder the columns of binary
matrices arbitrarily, and thus, allows the application of orbitopes as indicated above. If
the graph G is symmetric, however, Γ will also contain permutations that reorder the
rows of X according to automorphisms of G. Since these row permutations interact with
the variables affected by column permutations, they form a common component. Hence,
not all permutations within this component are permutations necessary for an orbitope
and the detection routine of SCIP 7.0 will not recognize the applicability of orbitopes.
In SCIP 8.0 we have refined the orbitope detection routine to be able to heuristically
find such hidden orbitopes. In the following, we call such orbitopes suborbitopes, because
they are defined by a subgroup of a component. To explain the procedure, note that a
subgroup of a component defines an orbitope for a matrix X if the component contains
permutations that swap adjacent columns of X, see Hojny and Pfetsch [47]. Such a
swap of two columns is a permutation that decomposes into 2-cycles. Therefore, our
routine collects in a set P all permutations of a component that admit such a decomposition.
Then, we iteratively build a set of permutations Q ⊆ P that defines one orbitope or
several independent orbitopes. Initially, Q = ∅, and we check, one after another, whether
adding γ ∈ P to Q still allows defining independent orbitopes. If so, Q is
updated; otherwise, γ is discarded and we continue with the next permutation in P. To
check whether γ can be added to Q, we maintain a list of the orbitopes defined by the
permutations in Q so far. Then, γ is added to Q if the variables affected by γ
are not contained in any of the already known orbitopes, if γ adds a new column to an
already existing orbitope, or if it merges two existing orbitopes.
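A much simplified sketch of this greedy construction follows (not SCIP's implementation; the data layout and function name are invented, and row-alignment subtleties are glossed over). A candidate permutation γ is given as a list of 2-cycles (a, b), pairing the variables of one column with those of an adjacent column:

```python
# Hedged sketch: greedily building orbitopes from 2-cycle permutations.
# Each orbitope is a list of columns; a column is a tuple of variables
# (one per row).

def try_add(orbitopes, gamma):
    """Update `orbitopes` with permutation `gamma` (a list of 2-cycles) if it
    creates a new orbitope, extends one, or merges two; else return False."""
    col_a = tuple(a for a, _ in gamma)
    col_b = tuple(b for _, b in gamma)
    touched = [i for i, orb in enumerate(orbitopes)
               if any(set(col) & (set(col_a) | set(col_b)) for col in orb)]
    if not touched:                       # variables unseen: new independent orbitope
        orbitopes.append([col_a, col_b])
        return True
    if len(touched) == 1:                 # possibly a new column for one orbitope
        orb = orbitopes[touched[0]]
        if col_a in orb and not any(set(col) & set(col_b) for col in orb):
            orb.append(col_b)
            return True
        if col_b in orb and not any(set(col) & set(col_a) for col in orb):
            orb.append(col_a)
            return True
        return False
    if len(touched) == 2:                 # possibly merging two orbitopes
        orb1, orb2 = orbitopes[touched[0]], orbitopes[touched[1]]
        if col_a in orb1 and col_b in orb2:
            orb1.extend(orb2)
            del orbitopes[touched[1]]
            return True
    return False

orbitopes = []
try_add(orbitopes, [(1, 2), (4, 5)])      # creates orbitope with columns (1,4), (2,5)
try_add(orbitopes, [(2, 3), (5, 6)])      # appends column (3,6)
print(orbitopes)                          # [[(1, 4), (2, 5), (3, 6)]]
```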
If a component cannot be completely handled by a single orbitope, there might exist
variables that are not contained in any of the detected orbitopes, or several independent
suborbitopes may be found that are linked via permutations not contained in Q. To partially
add the missing links, and thus to handle more symmetries, SCIP selects one of the
found orbitopes with variable matrix X ∈ {0, 1}^{s×t} and computes one round of SST
cuts with X_{11} as leader. We refer to these inequalities as weak inequalities, because
they only weakly connect the found orbitopes without exploiting any further group structure.
Besides weak inequalities, we can also add strong inequalities X_{11} ≥ X_{12} ≥ ⋯ ≥ X_{1t}
for every found orbitope. These cuts are called strong because they also exploit the
group structure that allows arbitrarily reordering the columns of the orbitope. Note that
the strong inequalities are implicitly added by orbitope constraints. In some situations,
however, SCIP adds strong inequalities instead of orbitopes, as we explain next.
The detection of suborbitopes and the application of strong and weak inequalities can be
controlled via the Boolean parameter propagating/symmetry/detectsubgroups. If the
parameter value is TRUE (default), SCIP searches for suborbitopes using the above mech-
anism. A found orbitope is called useful if it has at least three columns. The reason for
this classification is that an orbitope with just two columns can also be handled by orbi-
sack constraints, which can more easily be combined with other symmetry handling con-
straints. Moreover, the Boolean parameters propagating/symmetry/addstrongsbcs
and propagating/symmetry/addweaksbcs enable/disable whether strong inequalities
are used if suborbitopes are not handled and whether weak inequalities are used to
handle more group structure, respectively.
SCIP’s Symmetry Handling Strategy As explained above, SCIP can handle sym-
metries using different strategies depending on the parameter misc/usesymmetry. If a
mixed strategy is used, SCIP analyzes the structure of the symmetry group’s compo-
nents and decides which strategy is used for which component. In one case, however,
these strategies can also be combined and applied to the same component: If both sym-
resacks and SST cuts for binary variables are enabled, SCIP computes SST cuts first.
The leaders of the SST cuts then play a special role, because they need to attain the largest
values in their orbits. To make these cuts compatible with symresacks, one thus needs
to adapt the lexicographic order used by symresacks, giving the leaders the highest rank.
Similarly, if suborbitopes are detected, the orbitopes can be made compatible with
symresacks for the permutations not used by the orbitopes by adapting the variable
order in a specific way: the variables of the first orbitope get the highest rank in the
lexicographic order, afterwards the variables of the succeeding orbitopes are listed, and
finally the variables not contained in any orbitope are added to the lexicographic order.
The exact mechanism by which suborbitopes are combined with weak and strong inequalities
as well as symresacks is illustrated in Figure 1.
SCIP’s strategy for deciding which symmetry handling methods are used proceeds
in the following order, depending on the enabled strategies; by default, SCIP is
allowed to use all implemented symmetry handling methods (misc/usesymmetry = 7).
First, SCIP checks whether a component can be fully handled by orbitopes or whether
suborbitopes can be detected. If the component is handled by (sub)orbitopes, it gets
blocked and no other symmetry handling method can be applied to this component.
Second, SCIP adds SST cuts to all applicable non-blocked components and blocks these
components. If the selected leaders are binary, symresacks can also be applied to this
component. Third, if a component has not been blocked yet, either symresacks or orbital
fixing is used to handle symmetries, depending on whether orbital fixing is active.
Besides new symmetry handling methods, SCIP 8.0 also contains more efficient imple-
mentations of previously available methods, which we describe in turn.
First, as mentioned above, orbisack constraints allow applying stronger cutting planes
or reductions if they interact with set packing or partitioning constraints in a certain way.
SCIP automatically checks whether such an upgrade is possible. The implementation
of this upgrade has been revised and is more efficient in SCIP 8.0.
Second, the symresack constraint handler separates so-called minimal cover inequali-
ties for symresacks. In SCIP 7.0, we used a quadratic time separation routine for these
inequalities. With the release of SCIP 8.0, these inequalities can be separated in linear
time, which also improves on the almost linear running time procedure by Hojny and
Pfetsch [47]. The linear time procedure makes use of the observation [47] that minimal
cover inequalities for symresacks can be separated by merging connected components of
an auxiliary graph. Using a disjoint-set data structure, an almost linear running time
could be achieved. In our new implementation, we exploit that the graph’s connected
components are either paths or cycles. Merging such connected components can be
realized using more efficient data structures based on a few arrays.
Finally, both the symresack and orbisack constraint handlers provide routines to prop-
agate their constraints. While the previous implementation could miss some variable
fixings, the implementation in SCIP 8.0 finds all variable fixings that can be
derived from local variable bound information.
Although symresack and orbitope constraints have been available in SCIP since ver-
sion 5.0, these constraints could not be parsed in any file format. With the release of
SCIP 8.0, these constraints can be parsed when reading a cip file. Thus, users can
easily tell SCIP about the symmetries that are present in their problems and how to
handle them.
For a permutation γ of {1, . . . , n} and a vector x ∈ {0, 1}n , a symresack constraint
enforces that x is lexicographically not smaller than its permutation γ(x). This can be
encoded in a cip file using the line
symresack([varName1,…,varNameN],[γ(1),…,γ(n)]).
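The condition a symresack enforces can be sketched as follows (an illustrative feasibility check, not SCIP's propagation or separation code; 0-indexed permutation, invented function name):

```python
# Sketch: the condition a symresack constraint enforces on a binary vector x
# for a permutation gamma: x must be lexicographically not smaller than
# gamma(x) = (x[gamma^{-1}(0)], ..., x[gamma^{-1}(n-1)]).

def symresack_feasible(x, gamma):
    n = len(x)
    inv = [0] * n
    for i, gi in enumerate(gamma):
        inv[gi] = i
    gx = [x[inv[i]] for i in range(n)]
    return x >= gx  # Python compares lists lexicographically

# gamma swaps positions 0 and 1 (an orbisack-style 2-cycle):
print(symresack_feasible([1, 0, 1], [1, 0, 2]))  # True: (1,0,1) >= (0,1,1)
```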
Since orbisacks are symresacks for permutations that decompose into 2-cycles, this struc-
ture can directly be encoded using an (n/2) × 2 matrix, where each row encodes the two
variables that can be interchanged. The cip encoding is then given by
fullOrbisack(varName1-1,varName1-2.varName2-1,varName2-2. …).
If users know that in each row of the orbitope matrix at most or exactly one variable can
attain value 1, they can provide this information to SCIP by replacing fullOrbisack
by packOrbisack or partOrbisack, respectively.
Finally, an orbitope constraint for a variable matrix X ∈ {0, 1}m×n can be encoded
similarly to an orbisack by the line
fullOrbitope(varName1-1,…,varName1-N. ….varNameM-1,…,varNameM-N).
If in each row of the orbitope at most or exactly one variable can attain value 1,
fullOrbitope can be replaced by packOrbitope or partOrbitope, respectively, to
provide this information to SCIP.
Mixing cuts [10, 43] can effectively reduce the computational time to solve MIP for-
mulations of chance constrained programs (CCPs), especially those in which the
uncertainty appears only in the right-hand side [64, 57, 1, 119]. In order to enhance the
capability of employing SCIP as a black box to solve such CCPs, SCIP 8.0 includes
a new separator called mixing, which leverages variable bound relations [2, 67] to
construct mixing cuts. It is worth remarking that, although the development of this
feature is motivated by CCPs, the mixing separator can be applied to other
MIPs as long as the related variable bound relations can be detected by SCIP.
Let us first review the variable bound relations in SCIP; for more details, see Achter-
berg [2] and the SCIP Optimization Suite 4.0 release report [67]. A variable bound
relation in SCIP is a linear constraint on two variables. As such, it is of the form
y ⋆ ax + b with a, b ∈ R and ⋆ ∈ {≤, ≥}. During the presolving process, SCIP derives
these relations either from two-variable linear constraints or from general constraints by prob-
ing [95], and stores them in a data structure called the variable bound graph. Such relations
can be used, for example, to tighten the bounds of variables through propagation [67] or
to enhance MIR cut separation [69] in the subsequent main solution process. The mix-
ing cut separator uses a subclass of these relations, namely those in which x is a binary
variable and y is a non-binary variable. From these, three families of cuts are constructed,
which are discussed in detail in the following. For simplicity, we only consider the case
that y is a continuous variable, but the results also apply to the case that y is
an integer variable.
≥-Mixing Cuts Consider the variable lower bounds of variable y ∈ [ℓ, u]:
y ≥ ai xi + bi , xi ∈ {0, 1}, i ∈ N . (3)
Without loss of generality, we impose the following assumption:
0 < ai ≤ u − ℓ and bi = ℓ for all i ∈ N . (A)
Indeed, assumption (A) can be guaranteed by applying the following preprocessing steps
in order:
(i) If ai < 0, variable xi can be complemented by 1 − xi . If ai = 0, y ≥ ai xi + bi can
be removed from (3) and ℓ′ := max{ℓ, bi } is the new lower bound for y.
(ii) If ai + bi ≤ ℓ, by ai > 0 (from (i)), constraint y ≥ ai xi + bi is implied by y ≥ ℓ and
hence can be removed from (3).
(iii) If bi > ℓ, by ai > 0 (from (i)), ℓ′ := bi is the new lower bound for y; if bi < ℓ, by
ai +bi > ℓ (from (ii)), relation y ≥ ai xi +bi can be changed into y ≥ (ai +bi −ℓ)xi +ℓ.
(iv) If ai > u − ℓ, by bi = ℓ (from (iii)), xi = 0 must hold and constraint y ≥ ai xi + ℓ
can be removed from (3).
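The preprocessing steps (i)–(iv) above can be sketched as follows (a simplified single-pass illustration, not SCIP's implementation; the function name and data layout are invented). Each input pair (a, b) stands for a relation y ≥ a·x + b with binary x and y ∈ [ℓ, u]:

```python
# Hedged sketch of preprocessing steps (i)-(iv). Returns entries
# (a, complemented) meaning y >= a*x' + l with 0 < a <= u - l, where
# x' = 1 - x if `complemented` is True, together with the tightened l.

def normalize_vlbs(bounds, l, u):
    rels = []
    for a, b in bounds:
        if a < 0:                 # (i) complement: y >= a*x + b  <=>  y >= -a*(1-x) + (a+b)
            rels.append((-a, a + b, True))
        elif a == 0:              # (i) constant relation only tightens the lower bound
            l = max(l, b)
        else:
            rels.append((a, b, False))
    for _, b, _ in rels:          # (iii, first case) each b is itself a lower bound on y
        l = max(l, b)
    out = []
    for a, b, comp in rels:
        if a + b <= l:            # (ii) implied by y >= l
            continue
        if b < l:                 # (iii, second case) rewrite as y >= (a+b-l)*x + l
            a = a + b - l
        if a > u - l:             # (iv) forces x = 0; relation dropped in this sketch
            continue
        out.append((a, comp))
    return out, l

# y >= 2*x1 and y >= -3*x2 + 5 with y in [0, 10] normalize to y >= 3*(1-x2) + 2:
print(normalize_vlbs([(2.0, 0.0), (-3.0, 5.0)], 0.0, 10.0))  # ([(3.0, True)], 2.0)
```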
By assumption (A), (3) can be presented in normalized form:
y ≥ ai xi + ℓ, xi ∈ {0, 1}, i ∈ N . (4)
Let {i_1, …, i_s} ⊆ N with s ∈ N such that a_{i_1} ≤ ⋯ ≤ a_{i_s}, and define a_{i_0} := 0. Then
the ≥-mixing inequality [10, 43] is given by

    y − ℓ ≥ ∑_{τ=1}^{s} (a_{i_τ} − a_{i_{τ−1}}) x_{i_τ}.    (5)
≤-Mixing Cuts Using a similar analysis as that in variable lower bounds, the variable
upper bounds of variable y can be presented in normalized form:
y ≤ u − aj xj , xj ∈ {0, 1}, j ∈ M, (6)
where 0 < a_j ≤ u − ℓ for j ∈ M. Let {j_1, …, j_t} ⊆ M with t ∈ N such that a_{j_1} ≤ ⋯ ≤ a_{j_t},
and define a_{j_0} := 0. Then the ≤-mixing inequality [10, 43] is given by

    y ≤ u − ∑_{τ=1}^{t} (a_{j_τ} − a_{j_{τ−1}}) x_{j_τ}.    (7)
Conflict Cuts Besides the ≥- and ≤-mixing cuts, the mixing separator also constructs
conflict cuts, which are derived by jointly considering (4) and (6). To be more specific,
let i′ ∈ N and j′ ∈ M such that a_{i′} + ℓ > u − a_{j′}. By y ≥ a_{i′} x_{i′} + ℓ and y ≤ u − a_{j′} x_{j′},
the variables x_{i′} and x_{j′} cannot simultaneously take the value one, and hence the conflict
inequality

    x_{i′} + x_{j′} ≤ 1    (8)

can be derived.
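Enumerating these conflict pairs can be sketched as follows (illustrative names, not SCIP code), working on the normalized bounds y ≥ a_i·x_i + ℓ (i ∈ N) and y ≤ u − a_j·x_j (j ∈ M):

```python
# Sketch: a pair (i, j) yields the conflict inequality x_i + x_j <= 1
# exactly when a_i + l > u - a_j.

def conflict_pairs(lower_a, upper_a, l, u):
    """lower_a, upper_a: dicts mapping variable index -> coefficient a."""
    return [(i, j) for i, ai in lower_a.items()
                   for j, aj in upper_a.items()
                   if ai + l > u - aj]

# y >= 4*x1 and y <= 10 - 7*x2 cannot both hold with x1 = x2 = 1 (4 > 3):
print(conflict_pairs({1: 4.0}, {2: 7.0, 3: 1.0}, 0.0, 10.0))  # [(1, 2)]
```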
Separation Given a fractional point (x∗ , y ∗ ), the separation problem of (5), (7) or (8)
asks to find an inequality violated by (x∗ , y ∗ ) or to prove that no such inequality exists. To separate the ≥-mixing inequalities (5), Günlük and Pochet [43] provided the following algorithm, which selects the subset S = {i1 , . . . , is } ⊆ N such that ∑_{τ=1}^{s} (aiτ − aiτ−1 ) x∗iτ is maximized.
1. Reorder variables xi , i ∈ N , such that x∗1 ≥ x∗2 ≥ · · · ≥ x∗|N | .
2. Add 1 to set S.
3. For each i ∈ N \{1}, set S := S ∪ {i} if ai > ak , where k is the last index added to S.
4. If the ≥-mixing inequality corresponding to S is violated by (x∗ , y ∗ ), output it.
The above algorithm can be implemented to run in O(|N | log(|N |)) time. Similarly, the ≤-mixing inequalities (7) can also be separated in O(|M| log(|M|)) time. Finally, since the number of conflict inequalities (8) is bounded by O(|M||N |), they can be separated by enumeration in O(|M||N |) time.
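The selection procedure translates directly into code. The following plain-Python sketch (our naming, not SCIP's separator code) returns the selected index set if the corresponding cut (5) is violated:

```python
def separate_ge_mixing(a, xstar, ystar, ell, eps=1e-6):
    """a: dict {i: a_i} for the normalized bounds y >= a_i*x_i + ell;
    xstar, ystar: LP values of x and y.  Returns the chosen index set S
    (with a-values increasing) if the >=-mixing cut (5) for S is violated
    by (xstar, ystar), and None otherwise."""
    order = sorted(a, key=lambda i: xstar[i], reverse=True)  # step 1
    S = [order[0]]                                           # step 2
    for i in order[1:]:                                      # step 3
        if a[i] > a[S[-1]]:
            S.append(i)
    rhs, prev = 0.0, 0.0                                     # evaluate cut (5)
    for i in S:
        rhs += (a[i] - prev) * xstar[i]
        prev = a[i]
    return S if ystar - ell < rhs - eps else None            # step 4
```

Sorting dominates the running time, which gives the O(|N | log(|N |)) bound; the conflict inequalities (8) are separated by simply enumerating all pairs (i, j) with ai + ℓ > u − aj.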
The performance impact of the mixing separator is neutral on the internal MIP
benchmark testset. However, when applied to the chance constrained lot sizing instances
used by Zhao et al. [119] (90 in total), a speedup of 20% can be observed and 15 more
instances can be solved.
SCIP 8.0 comes with an improvement of the heuristic Penalty Alternating Direction
Method (PADM) and introduces the new heuristic Dynamic Partition Search (DPS). Both
heuristics explicitly require a decomposition provided by the user and therefore belong
to the class of so-called decomposition heuristics. A decomposition consisting of k ≥ 0
blocks is a partition
D := (Drow , D col ) with Drow := (D1row , . . . , Dkrow , Lrow ), D col := (D1col , . . . , Dkcol , Lcol )
of the rows/columns of the constraint matrix A into k + 1 pieces each. The distinguished
rows Lrow and columns Lcol are called linking rows and linking columns, respectively. If
A is permuted according to decomposition D, a (bordered) block diagonal form [19] is
obtained. A detailed description of decompositions and their handling in SCIP can be
found in the release report for version 7.0 [35].
Since version 7.0, SCIP includes the decomposition heuristic Penalty Alternating Direction Method (PADM). In the current version, PADM has been extended with the option to improve a found solution by reintroducing the original objective function.
This heuristic splits a MINLP as listed in (2) into several subproblems according to a given decomposition D with linking variables only; the linking variables are copied and differences between the copies are penalized. Then the subproblems are solved by an alternating procedure. A detailed description of penalty alternating direction methods and their practical application can be found in Geißler et al. [37] and Schewe et al. [96].
To converge faster to a feasible solution, the original objective function of each subproblem is completely replaced by a penalty term. Since this can lead to arbitrarily bad solutions, the heuristic was extended in the following way: Initially, the original version of PADM runs and tries to find a feasible solution. If a feasible solution was found, the linking variables are fixed to the values of this solution and each independent subproblem is solved again, but now with the original objective function. To accelerate this reoptimization step, the already found solution is used as a warm start and very small solving limits are imposed: the reoptimization step may not take more time than the heuristic already used in the first step, and the node limit is set to one. The feature of using a second reoptimization step in PADM can be turned on/off by setting the parameter heuristics/padm/reoptimize (default: on).
The new feature was tested on the MIPLIB 2017 [39] benchmark instances, for which
decompositions are provided on the web page. On the instances where PADM was called, preliminary results show that it finds a feasible solution in 15 of 31 cases. The new reoptimization step improves the solution of PADM in 33% of these cases, by 42% on average.
With SCIP 8.0 the new decomposition heuristic Dynamic Partition Search (DPS) was
added. It is a primal construction heuristic which requires a decomposition with linking
constraints only.
The DPS heuristic splits a MILP as listed in (1) into several subproblems according to a decomposition D. Thereby the linking constraints and their right/left-hand sides are also split by introducing new parameters pq ∈ R^{|Lrow|} for each block q ∈ {1, . . . , k} and requiring that

∑_{q=1}^{k} pq = b[Lrow ] (9)

holds.
holds. To obtain information about the infeasibility of one subproblem and to speed up
|Lrow |
the solving process, slack variables zq ∈ R+ are added and the objective function is
replaced by a weighted sum of these slack variables. In detail, for penalty parameter
|Lrow |
λ ∈ R>0 each subproblem q has the form
min λ⊤ zq ,
s.t. A[Dqrow ,Dqcol ] x[Dqcol ] ≥ b[Dqrow ] ,
ℓ i ≤ xi ≤ ui for all i ∈ N ∩ Dqcol ,
(10)
xi ∈ Z for all i ∈ I ∩ Dqcol ,
A[Lrow ,Dqcol ] x[Dqcol ] + zq ≥ pq ,
|Lrow |
zq ∈ R+ .
From (10), it is immediately apparent that the correct choice of pq plays the central role: if pq is chosen for each subproblem q such that the slack variables zq take the value zero, one immediately obtains a feasible solution. For this reason, (pq )q∈{1,...,k} is referred to as a partition of b[Lrow ] . The goal of DPS is to find a feasible partition as fast as possible.
To get started, an initial partition (pq )q∈{1,...,k} is chosen, which fulfills (9). Then it
is checked whether this partition will lead to a feasible solution by solving k independent
subproblems (10) with fixed pq . If all subproblems have an optimal objective value of
zero, a feasible solution was found and is given by the concatenation of the k subsolutions.
Conversely, a lower bound greater than zero on the objective value of one subproblem immediately provides evidence that the current partition does not lead to a feasible solution.
If the current partition does not correspond to a feasible solution, then the partition (pq )q∈{1,...,k} and the penalty parameter λ have to be updated: For each linking constraint j ∈ Lrow , the slack values zqj are subtracted from the current partition values pqj , and the same total amount is redistributed among the blocks with zqj = 0, so that (9) still holds. If at least one slack variable is positive, the corresponding penalty parameter is increased.
Then, the subproblems are solved again and the steps are repeated until a feasible
solution is found or until a maximum number of iterations (controlled by parameter
heuristics/dps/maxiterations) is reached.
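One round of this update can be sketched as follows (a simplified sketch with illustrative names; in particular, it redistributes the unmet amount uniformly among the zero-slack blocks, whereas SCIP's actual redistribution rule may differ):

```python
def update_partition(p, z, lam, scale=1.1):
    """p[q][j]: partition values p_q, z[q][j]: slack values from (10),
    lam[j]: penalty weights.  Shifts the unmet amount of every linking row
    from blocks with positive slack to blocks with zero slack, so that the
    column sums (9) are preserved, and increases lam for violated rows."""
    k, nrows = len(p), len(lam)
    for j in range(nrows):
        pos = [q for q in range(k) if z[q][j] > 0]
        zero = [q for q in range(k) if z[q][j] == 0]
        if pos:                                # row j is violated somewhere
            lam[j] *= scale
        if pos and zero:
            total = sum(z[q][j] for q in pos)
            for q in pos:                      # take away what was not covered...
                p[q][j] -= z[q][j]
            for q in zero:                     # ...and give it to blocks that
                p[q][j] += total / len(zero)   # covered their share
    return p, lam
```

For example, with two blocks, one linking row with b = 10, partition (6, 4), and block two reporting a slack of 2, the update moves this amount over to block one, giving the partition (8, 2) with unchanged sum.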
To push the slack variables to zero and to speed up the algorithm, the original
objective function has been completely replaced by a penalty term. Analogously to
PADM (see Section 3.4.1), it is possible to improve the found solution by reoptimizing with the original objective function; in DPS, the partition is fixed instead of the linking variables. This feature can be turned on/off by setting the parameter heuristics/dps/reoptimize (default: off).
The new decomposition heuristic was tested on the MIPLIB 2017 [39] benchmark
instances, for which decompositions are provided on the web page. On the instances where DPS was called, preliminary results show that it finds a feasible solution in 17 of 80 cases. A general performance improvement cannot be shown. The main reason for these slightly disappointing results is probably that DPS requires a well-decomposable problem structure, whereas the evaluated instances are general MILPs that do not necessarily have such a structure. However, on two instances (proteindesign121hz512p9 and 30n20b8) DPS is successful and substantially reduces the time to the first primal solution, since no other heuristic is able to construct a feasible solution at or before the root node. It is noticeable that in both instances the linking constraints contain only bounded integer variables. The heuristic probably benefits from this, since the number of usable partitions is then finite.
The work on the Benders’ decomposition framework has moved into a research phase.
As such, only minor updates and bug fixes have been completed for the framework since
the release of SCIP 7.0. The most important update for the Benders’ decomposition
framework is the option to apply the mixed integer rounding (MIR) procedure, as de-
scribed by Achterberg [2], when generating optimality cuts. The aim of applying the
MIR procedure to the generated optimality cut is to potentially compute a stronger
inequality.
Strengthening the classical Benders’ optimality cut using the MIR procedure involves
the following steps:
− Generate a classical optimality cut from the solution of the Benders’ decomposition
subproblem.
− Attempt to compute a flow cover cut for the generated optimality cut. This is
achieved by calling SCIPcalcFlowCover. If this process is successful, replace the
optimality cut with the computed flow cover cut.
− Attempt to perform the MIR procedure on the optimality cut (this could have
been updated in the previous step). The MIR procedure is performed by calling
SCIPcalcMIR. If the MIR procedure is successful, the optimality cut is replaced with
the resulting inequality.
− Finally, SCIPcutsTightenCoefficients is executed in an attempt to tighten the
coefficients of the optimality cut.
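The control flow of these steps is simple to mirror in code. In the following Python sketch, the three callables stand in for SCIPcalcFlowCover, SCIPcalcMIR, and SCIPcutsTightenCoefficients; the names and the (success, cut) return convention are ours for illustration, not the C API:

```python
def strengthen_optimality_cut(cut, calc_flowcover, calc_mir, tighten):
    """Each stage may fail, in which case the cut from the previous stage
    is kept; the MIR step therefore acts on the flow cover cut whenever
    one was found, and on the classical optimality cut otherwise."""
    success, flowcover = calc_flowcover(cut)
    if success:
        cut = flowcover
    success, mir = calc_mir(cut)
    if success:
        cut = mir
    return tighten(cut)          # coefficient tightening always runs last
```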
The MIR procedure is active by default. A new parameter
benders/<bendersname>/benderscut/optimality/mir,
where <bendersname> is the name of the Benders’ decomposition plugin, has been added
to enable/disable the MIR procedure for strengthening the Benders’ optimality cuts.
3.6 Cut Selectors
A new cut selector plugin type is introduced in SCIP 8.0. Users now have the ability to
create their own cut selection rules and include them into SCIP. For a current summary
on the state of cut selection in the literature, see Dey and Molinaro [23], and for an
overview of cutting plane measures and the improvements provided by intelligent se-
lection, see Wesselmann and Suhl [116]. The existing rule used since SCIP 6.0 [38] has
been moved to cutselection/hybrid. The ability to include cut selectors has also been
implemented through PySCIPOpt.
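To give a flavor of what a custom selection rule computes, the following self-contained sketch scores cuts by efficacy and filters near-parallel cuts. It is a simplified stand-in for the hybrid rule, which additionally weighs objective parallelism, directed cutoff distance, and integer support, and it uses plain Python rather than the SCIP or PySCIPOpt API:

```python
import math

def efficacy(a, b, xstar):
    """Euclidean distance by which the LP point xstar violates a*x <= b."""
    viol = sum(ai * xi for ai, xi in zip(a, xstar)) - b
    return viol / math.sqrt(sum(ai * ai for ai in a))

def parallelism(a1, a2):
    """Cosine of the angle between two cut normals, in [0, 1]."""
    dot = abs(sum(u * v for u, v in zip(a1, a2)))
    return dot / (math.sqrt(sum(u * u for u in a1)) *
                  math.sqrt(sum(v * v for v in a2)))

def select_cuts(cuts, xstar, maxcuts=10, maxparallel=0.9):
    """Greedily pick the most violated cuts a*x <= b, skipping cuts that
    are nearly parallel to an already selected one."""
    ranked = sorted(cuts, key=lambda c: efficacy(c[0], c[1], xstar),
                    reverse=True)
    chosen = []
    for a, b in ranked:
        if len(chosen) == maxcuts:
            break
        if all(parallelism(a, a2) <= maxparallel for a2, _ in chosen):
            chosen.append((a, b))
    return chosen
```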
Thread Safety In previous versions, SCIP provided the option PARASCIP in the Make and CMake build systems to make it thread-safe. This has been replaced by THREADSAFE, which is now true by default (PARASCIP still exists for backward compatibility).
Most parts of SCIP are in fact always thread-safe, but interfaces to external programs
are sometimes not. For instance, for the LP-solver Gurobi, the thread-safe mode opens
a new LP-environment for each thread. Other interfaces to external software may use
parallelization that has to be controlled in order not to mix data from different threads,
e.g., CppAD and FilterSQP. The change to thread-safe mode should not significantly
affect performance.
Revision of External Memory Estimation SCIP usually uses its own internal memory functions. This allows SCIP to keep track of the memory used. If it approaches the memory limit, SCIP can switch to a memory saving mode, which, for instance, uses depth-first search. However, memory used by external software, in particular NLP and LP solvers, cannot easily be determined in a portable way. Therefore, the estimation of used memory in SCIP has been improved for version 8 with data-fitting as follows. The memory
consumption by LP-solvers was measured using a stand-alone version on a testset of
LP-relaxations. Then a linear regression with the number of constraints, variables, and
nonzeros as features was computed. This current estimation uses the weights 8.5 × 10−4 ,
7.6 × 10−4 , 3.5 × 10−5 , respectively, and works quite well (for SoPlex we got R2 = 0.99).
If NLPs are solved, the estimation is doubled.
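The resulting estimate is a plain linear formula; in code (our wrapper around the regression weights given above; the report does not state the unit of the result, so treat absolute values as indicative only):

```python
def estimate_external_memory(nconss, nvars, nnonzeros, solving_nlps=False):
    """Regression-based estimate of the memory used by an external
    LP solver, doubled when NLPs are solved as well."""
    estimate = 8.5e-4 * nconss + 7.6e-4 * nvars + 3.5e-5 * nnonzeros
    return 2.0 * estimate if solving_nlps else estimate
```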
Option to Forbid Variable Aggregation Similar to multi-aggregation, one can now for-
bid aggregation of a variable by calling the function SCIPdoNotAggrVar(). This is
sometimes useful, for example, if certain constraint handlers cannot handle aggregated
variables. Note, however, that this can slow down the solving process since the relax-
ations tend to be larger.
Debugging of Variable Uses SCIP counts the number of uses of a variable and frees a
variable when its uses count reaches zero. It is therefore important to capture a variable
to prevent it from being freed too early and to release a variable when it is no longer
used. To assist in finding a missing or excessive capture or release of a variable, code has
been added to var.c to print the function call stack when a variable of a specified name,
optionally in a SCIP problem of a specified name, is captured or released. The code
requires GCC Gnulib (execinfo.h in particular) and will not work on every platform.
To activate this feature, define DEBUGUSES_VARNAME and DEBUGUSES_PROBNAME in
var.c. If the tool addr2line is available on the system, the printed call stacks pro-
vide more information, but its use causes a significant slowdown. Defining DEBUGUSES_
NOADDR2LINE disables the call of this tool.
Improving Numerical Properties of Linear Inequalities When a constraint handler or
separator computes a cutting plane, often its numerical properties need to be checked
and possibly improved before it is added to a relaxation. Further, changing coefficients
or sides of a SCIP_ROW may round numbers that are very close to integral values, which
may invalidate a previously valid cut. To assist in carefully improving the numerical properties of an inequality, the SCIP_ROWPREP datastructure has been made available, see pub_misc_rowprep.h. Routines are available to relax or scale linear inequalities to improve the range of coefficients and to avoid almost-integral numbers, see the paragraph “Cut cleanup” in Section 4.2.10 and Step 4 in Section 4.2.11 for more details. Note that ranged linear constraints (both left-hand side and right-hand side being finite) cannot be handled.
Reader for AMPL .nl Files The reader for .nl files has been rewritten and is now
included with SCIP’s default plugins. See Section 7.1 for more details.
With SCIP 7.0 and before, the following node types were supported in algebraic
expressions:
− constant, parameter,
− variable (specified by integer index),
− addition, subtraction, multiplication, division (with two arguments),
− square, square-root, power (rational exponent), power (integer exponent), signed power (x ↦ sign(x)|x|^p ),
− exponentiation, natural logarithm,
− minimum, maximum, absolute value,
− sum, product, affine-linear, quadratic, signomial (with arbitrarily many arguments),
− user-defined.
The operand type “user-defined”, which was introduced with SCIP 3.2 [33], brought some
of the extensibility typical for SCIP plugins. However, only the most essential callbacks
(evaluation, differentiation, linear under/overestimation) were defined for user-defined
expressions. Thus, other routines that worked on expressions, such as simplification,
had built-in treatment for operands integrated in SCIP, but defaulted to some simple
conservative behavior when a user-defined operand had to be dealt with.
Another problem with this design of expressions was the ambiguity and additional
complexity due to the presence of high-level operators such as affine-linear, quadratic,
and others. For example, code that did some operation on a sum had to implement the
same routine for any operand that represents some form of summation (plus, sum, affine-
linear, quadratic, signomial), each time dealing with a slightly different data structure.
With SCIP 8, the expression system has been completely rewritten. Proper SCIP
plugins, referred to as expression handlers, are now used to define all semantics of an
operand. These expression handlers support more callbacks than what was available
for the user-defined operator before. Furthermore, much ambiguity and complexity is
avoided by adding expression handlers for basic operations only. High-level structures
such as quadratic functions can still be recognized, but are no longer made explicit by
a change in the expression type. An expression (SCIP_EXPR) comprises various data,
in particular the arguments of the expression, further on denoted as children. It can
also hold the data and additional callbacks of an expression owner, if any. A prominent
example of an expression owner is the constraint handler for nonlinear constraints, see
Section 4.2, which stores data associated with the enforcement of nonlinear constraints
in expressions that are used to specify nonlinear constraints. Further, due to their many
use cases, a representation of the expression as a quadratic function can be stored, see
Section 4.3.1 for details.
Various methods are available in the SCIP core to manage expressions (create, modify,
copy, free, parse, print), to evaluate and compute derivative information at a point,
to evaluate over intervals, to simplify, to identify common subexpressions, to check
curvature and integrality, and to iterate over it. Many of these methods access callbacks
that can be implemented by expression handlers. Some additional callbacks are used by
the constraint handler for nonlinear constraints (Section 4.2). The expression handler
callbacks are:
− COPYHDLR: include expression handler in another SCIP instance;
− FREEHDLR: free expression handler data;
− COPYDATA: copy expression data, for example, the coefficients of a linear sum;
− FREEDATA: free expression data;
− PRINT: print expression;
− PARSE: parse expression from string;
− CURVATURE: detect convexity or concavity;
− MONOTONICITY: detect monotonicity;
− INTEGRALITY: detect integrality (is value of operation integral if arguments have
integral value?);
− HASH: hash expression using hash values of arguments;
− COMPARE: compare two expressions of same type;
− EVAL: evaluate expression (implementation of this callback is mandatory);
− BWDIFF: evaluate partial derivative of expression with respect to specified argument
(backward derivative evaluation);
− FWDIFF: evaluate directional derivative of expression (forward derivative evaluation);
− BWFWDIFF: evaluate directional derivative of partial derivative with respect to speci-
fied argument (backward over forward derivative);
− INTEVAL: evaluate expression over interval;
− ESTIMATE: compute linear under- or overestimator of expression with respect to given
bounds on arguments and a reference point;
− INITESTIMATES: compute one or several linear under- or overestimators of expression
with respect to given bounds on arguments;
− SIMPLIFY: simplify expression by applying algebraic transformations;
− REVERSEPROP: compute bounds on arguments of expression from given bounds on
expression.
The SCIP documentation provides more details on these callbacks.
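To make the division of labor concrete, here is a toy handler for a sum operator in Python, implementing three of the callbacks above (EVAL, INTEVAL, INTEGRALITY); the class layout is purely illustrative and does not mirror the C API:

```python
class SumHandler:
    """Toy expression handler for an affine-linear sum
    a_0 + a_1*y_1 + ... + a_k*y_k."""

    def __init__(self, constant, coefs):
        self.constant, self.coefs = constant, coefs   # a_0 and a_1..a_k

    def eval(self, childvals):
        """EVAL: value of the sum given the values of the children."""
        return self.constant + sum(c * v for c, v in zip(self.coefs, childvals))

    def inteval(self, childints):
        """INTEVAL: interval enclosure given child intervals (lo, hi)."""
        lo = hi = self.constant
        for c, (clo, chi) in zip(self.coefs, childints):
            lo += min(c * clo, c * chi)
            hi += max(c * clo, c * chi)
        return lo, hi

    def integrality(self, children_integral):
        """INTEGRALITY: integral if all children and coefficients are."""
        return all(children_integral) \
            and float(self.constant).is_integer() \
            and all(float(c).is_integer() for c in self.coefs)
```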
Finally, for the following operators, expression handlers are included in SCIP 8.0:
− val: scalar constant;
− var: a SCIP variable (SCIP_VAR);
− varidx: a variable represented by an index; this handler is only used for interfaces
to NLP solvers (NLPI);
− sum: an affine-linear function, y ↦ a0 + ∑_{j=1}^{k} aj yj for y ∈ Rk with constant coefficients a ∈ Rk+1 ;
− prod: a product, y ↦ c ∏_{j=1}^{k} yj for y ∈ Rk with constant factor c ∈ R;
− pow: a power with a constant exponent, y ↦ y^p for y ∈ R and exponent p ∈ R (if p ∉ Z, then y ≥ 0 is required);
− signpower: a signed power, y ↦ sign(y)|y|^p for y ∈ R and constant exponent p ∈ R, p > 1;
− exp: exponentiation, y ↦ exp(y) for y ∈ R;
− log: natural logarithm, y ↦ log(y) for y ∈ R>0 ;
− entropy: entropy, y ↦ −y log(y) if y > 0 and 0 if y = 0, for y ∈ R≥0 ;
− sin: sine, y ↦ sin(y) for y ∈ R;
− cos: cosine, y ↦ cos(y) for y ∈ R;
− abs: absolute value, y ↦ |y| for y ∈ R.
When comparing with the list for SCIP 7.0 above, one observes that support for parame-
ters (these behaved like constants but could not be simplified away and were modifiable)
and operators “min” and “max” has been removed. Further, support for sine, cosine,
and the entropy function has been added.
For SCIP 8, the constraint handler for general nonlinear constraints (cons_nonlinear)
has been rewritten and the specialized constraint handlers for quadratic, second-order
cone, absolute power, and bivariate constraints have been removed. Some of the unique functionalities of the removed constraint handlers have been reimplemented in other plugin types.
4.2.1 Motivation
An initial motivation for the rewrite of cons_nonlinear has been a numerical issue
which is caused by the explicit reformulation of constraints in SCIP 7.0 and earlier
versions. For an example, consider the problem
min z,
s.t. exp(ln(1000) + 1 + x y) ≤ z, (11)
x² + y² ≤ 2,
with optimal solution x = −1, y = 1, z = 1000. Previously, solving this problem with
SCIP could end with the following solution report:
SCIP Status : problem is solved [optimal solution found]
Solving Time (sec) : 0.08
Solving Nodes : 5
Primal Bound : +9.99999656552062e+02 (3 solutions)
Dual Bound : +9.99999656552062e+02
Gap : 0.00 %
[nonlinear] <e1>: exp((7.9077552789821368 + (<x>*<y>)))-<z>[C] <= 0;
violation: right-hand side is violated by 0.000673453314561812
best solution is not feasible in original problem
x -1.00057454873626 (obj:0)
y 0.999425451364613 (obj:0)
z 999.999656552061 (obj:1)
The reason that SCIP initially determined this solution to be feasible is that, in
presolve, the problem gets rewritten as
min z,
s.t. exp(w) ≤ z, (12)
ln(1000) + 1 + x y = w,
x² + y² ≤ 2.
The constraints in this transformed problem are violated by 0.4659 · 10−6 , 0.6731 · 10−6 ,
and 0.6602 · 10−6 , thus are feasible with respect to numerics/feastol= 10−6 , and
therefore the solution is accepted by SCIP. On the MINLPLib library, the problem that
a final solution is feasible for the presolved problem but violates nonlinear constraints
in the original problem occurred for 7% of all instances.
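The effect is easy to reproduce numerically. The script below evaluates the reported solution against the original constraint of (11) and against the presolved form (12); the choice w := ln(z) is ours for illustration (SCIP's actual value of w is not reported), but it shows that both presolved constraints can lie within the tolerance while the original constraint does not:

```python
import math

# reported (infeasible) solution and SCIP's default feasibility tolerance
x, y, z = -1.00057454873626, 0.999425451364613, 999.999656552061
feastol = 1e-6

# original constraint of (11): exp(ln(1000) + 1 + x*y) <= z
orig_viol = math.exp(math.log(1000) + 1 + x * y) - z   # about 6.7e-4

# presolved form (12): exp(w) <= z and ln(1000) + 1 + x*y = w,
# evaluated with the illustrative choice w = ln(z)
w = math.log(z)
exp_viol = math.exp(w) - z                             # essentially zero
eq_viol = abs(math.log(1000) + 1 + x * y - w)          # about 6.7e-7
```

The original constraint is violated by roughly 6.7 · 10⁻⁴, almost three orders of magnitude above feastol, while each presolved constraint stays within tolerance.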
Problem (11) gets rewritten as (12) for the purpose of constructing a linear relaxation.
In this process, nonlinear functions are approximated by linear under- and overestima-
tors. As the formulas that were used to compute these estimators are only available for
“simple” functions (for example, convex functions, concave functions, bilinear terms),
new variables and constraints were introduced to split more complex expressions into
adequate form [104, 113].
A trivial attempt to solve the issue of solutions not being feasible in the original problem would have been to add a feasibility check before accepting a solution. However, if a solution is not feasible, actions to resolve the violation of the original constraint need to be taken: a separating hyperplane must be added, a domain reduction found, or a branching operation performed. Since the connection from the original to the presolved problem was not preserved, it would not have been clear which operations on the presolved problem would help best to remedy the violation in the original problem.
Thus, the new constraint handler aims to preserve the original constraints by ap-
plying only transformations (simplifications) that, in most situations, do not relax the
feasible space when taking tolerances into account. The reformulations that were neces-
sary for the construction of a linear relaxation are not applied explicitly anymore, but
handled implicitly by annotating the expressions that define the nonlinear constraints
(here, the mysterious “data of an expression owner”, see Section 4.1, comes into play).
Another advantage of this approach is a clear distinction between the variables that were
present in the original problem and the variables added for the reformulation. With this
information, branching is avoided on variables of the latter type. Finally, it is now
possible to exploit overlapping structures in an expression simultaneously.
To explain the functionality of the new cons_nonlinear, consider MINLPs of the form
min c⊤ x,
s.t. g̲ ≤ g(x) ≤ ḡ,
b̲ ≤ Ax ≤ b̄, (MINLP)
x̲ ≤ x ≤ x̄,
xI ∈ ZI ,
with c ∈ Rn , g : Rn → R̄m , g̲, ḡ ∈ R̄m , A ∈ Rm̃×n , b̲, b̄ ∈ R̄m̃ , x̲, x̄ ∈ R̄n , I ⊆ {1, . . . , n}, R̄ := R ∪ {±∞}. Further, assume that gi (·) is nonlinear and specified by an expression (see Section 4.1), i = 1, . . . , m, g̲ ≤ ḡ, g̲i ∈ R or ḡi ∈ R for all i = 1, . . . , m, b̲ ≤ b̄, b̲i ∈ R or b̄i ∈ R for all i = 1, . . . , m̃, and x̲ ≤ x̄. All nonlinear constraints g̲ ≤ g(x) ≤ ḡ are handled by cons_nonlinear, while the linear constraints are handled
by cons_linear or its specializations. (Of course, in general, any kind of constraint that
SCIP supports is allowed, but for this section only linear and nonlinear constraints are
considered.) In comparison to SCIP 7.0, the specialized nonlinear constraint handlers
and the distinction into a linear and a nonlinear part of a nonlinear constraint have been
removed. As a consequence, all algorithms for nonlinear constraints (checking feasibility, domain propagation, separation, etc.) now work on expressions.
SCIP solves problems like (MINLP) to global optimality via a spatial branch-and-
bound algorithm that mixes branch-and-infer and branch-and-cut [14]. Important parts
of the solution algorithm are presolving, domain propagation (that is, tightening of vari-
able bounds), linear relaxation, and branching. For the domain propagation and linear
relaxation aspects, two extended formulations of (MINLP) that are obtained by intro-
ducing slack variables and replacing sub-trees of the expressions that define nonlinear
constraints by auxiliary variables are considered.
For domain propagation, the following extended formulation is considered:
min c⊤ x,
s.t. h^dp_i (x, w^dp_{i+1} , . . . , w^dp_{m^dp} ) = w^dp_i , i = 1, . . . , m^dp ,
b̲ ≤ Ax ≤ b̄, (MINLP^dp_ext)
x̲ ≤ x ≤ x̄,
w̲^dp ≤ w^dp ≤ w̄^dp ,
xI ∈ ZI .
Functions h^dp_i (·) are obtained from the expressions that define the functions gi (·) by recursively replacing subexpressions by auxiliary variables. Since the auxiliary variables that replace subexpressions of h^dp_i (x) always receive an index larger than max(m, i), the result is referred to by h^dp_i (x, w^dp_{i+1} , . . . , w^dp_{m^dp} ) for any i = 1, . . . , m^dp ; that is, to simplify notation, w^dp_{i+1} is used instead of w^dp_{max(i,m)+1} . If a subexpression that is replaced by an auxiliary variable appears in several places, then only one auxiliary variable and one constraint are added to the extended formulation. Reindexing may be necessary to have h^dp_i depend on x and w^dp_{i+1} , . . . only.
The details of how subexpressions are chosen to be replaced by auxiliary variables will be discussed in Section 4.2.5. For the moment it is sufficient to assume that algorithms are available to compute interval enclosures of

{h^dp_i (x, w^dp_{i+1} , . . . , w^dp_{m^dp} ) : x̲ ≤ x ≤ x̄, w̲^dp ≤ w^dp ≤ w̄^dp }, (13)
{xj : h^dp_i (x, w^dp_{i+1} , . . . , w^dp_{m^dp} ) = w^dp_i , x̲ ≤ x ≤ x̄, w̲^dp ≤ w^dp ≤ w̄^dp }, j = 1, . . . , n, (14)
{w^dp_j : h^dp_i (x, w^dp_{i+1} , . . . , w^dp_{m^dp} ) = w^dp_i , x̲ ≤ x ≤ x̄, w̲^dp ≤ w^dp ≤ w̄^dp }, j = i + 1, . . . , m^dp , (15)

for i = 1, . . . , m^dp . The variable bounds w̲^dp , w̄^dp ∈ R̄^{m^dp} are initially set to w̲^dp_i = g̲i , w̄^dp_i = ḡi , i = 1, . . . , m, and w̲^dp_i = −∞, w̄^dp_i = ∞, i = m + 1, . . . , m^dp .
dp
It is worth noting here that the variables w are not actually added as SCIP vari-
ables, although this has been suggested, but merely serve notational purposes. In the
context of domain propagation, only the bounds wdp and wdp are relevant and stored
in the expression.
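For a single expression like w = exp(x), the forward and reverse interval steps (corresponding to the INTEVAL and REVERSEPROP callbacks) look as follows in a plain-Python sketch; the function names are ours, and SCIP performs these computations with outward rounding, which is omitted here:

```python
import math

def inteval_exp(xlo, xhi):
    """Forward step: enclosure of {exp(x) : xlo <= x <= xhi}, cf. (13)."""
    return math.exp(xlo), math.exp(xhi)

def reverseprop_exp(wlo, whi, xlo, xhi):
    """Reverse step: tighten [xlo, xhi] given w = exp(x) and w in [wlo, whi],
    cf. (14).  Returns None if the intersection is empty."""
    if whi <= 0:
        return None                       # exp(x) is always positive
    lo = xlo if wlo <= 0 else max(xlo, math.log(wlo))
    hi = min(xhi, math.log(whi))
    return (lo, hi) if lo <= hi else None
```

With x ∈ [0, 2] and the upper bound w ≤ 5, the forward step yields w ∈ [1, e²], which intersected with the bound gives w ∈ [1, 5]; the reverse step then tightens x to [0, ln 5].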
For the construction of a linear relaxation, a similar extended formulation is consid-
ered:
min c⊤ x,
s.t. h^lp_i (x, w^lp_{i+1} , . . . , w^lp_{m^lp} ) ⋚i w^lp_i , i = 1, . . . , m^lp ,
b̲ ≤ Ax ≤ b̄, (MINLP^lp_ext)
x̲ ≤ x ≤ x̄,
w̲^lp ≤ w^lp ≤ w̄^lp ,
xI ∈ ZI .
Functions h^lp_i (·) are again obtained from the expressions that define the functions gi (·) by recursively replacing subexpressions by auxiliary variables w^lp_{i+1} , . . . , w^lp_{m^lp} . However, it is important to note that different subexpressions may be replaced when setting up h^lp (·) compared to setting up h^dp (·). In fact, in contrast to (MINLP^dp_ext), it is assumed that algorithms are available to compute a linear outer-approximation of the sets

{(x, w^lp ) : h^lp_i (x, w^lp_{i+1} , . . . , w^lp_{m^lp} ) ⋚i w^lp_i , x ∈ [x̲, x̄], w^lp ∈ [w̲^lp , w̄^lp ]}, i = 1, . . . , m^lp . (16)

Thus, the auxiliary variables w^lp_i , i = m + 1, . . . , m^lp , can be different from w^dp_i , i = m + 1, . . . , m^dp . However, the slack variables w^lp_i , i = 1, . . . , m, can be considered as identical to w^dp_i . Similarly to (MINLP^dp_ext), the variable bounds w̲^lp , w̄^lp ∈ R̄^{m^lp} are initially set to w̲^lp_i = g̲i , w̄^lp_i = ḡi , i = 1, . . . , m, and w̲^lp_i = −∞, w̄^lp_i = ∞, i = m + 1, . . . , m^lp .
Regarding the (in)equality sense ⋚i , a valid simplification would be to assume equality
everywhere. For performance reasons, though, it can be beneficial to relax certain
equalities to inequalities if that does not change the feasible space of (MINLPlp ext ) when
projected onto x. Therefore,
=, if g̲i > −∞, ḡi < ∞,
⋚i := ≤, if g̲i = −∞, ḡi < ∞, for i = 1, . . . , m.
≥, if g̲i > −∞, ḡi = ∞,
For i > m, monotonicity of expressions needs to be taken into account. This is discussed
in Section 4.2.3.
In contrast to (MINLP^dp_ext), the variables w^lp are added to SCIP as variables when the LP is initialized. They are marked as relaxation-only [35], that is, they are not copied when the SCIP problem is copied and are fixed or deleted when restarting (new auxiliary variables are added for the next SCIP round).
To decide for which constraints in (MINLP^lp_ext) it can make sense to try to improve their linear relaxation, the value of a subexpression needs to be compared with the value of h^lp_i (·). Thus, define ĥ^lp_i (x) to be the value of the subexpression that h^lp_i (·) represents if evaluated at x. Formally, for i = 1, . . . , m^lp ,

ĥ^lp_i (x) := h^lp_i (x, w^lp_{i+1} , . . . , w^lp_{m^lp} ) where w^lp_j := h^lp_j (x, w^lp_{j+1} , . . . , w^lp_{m^lp} ), j = i + 1, . . . , m^lp .
As an example, consider the constraint log(x)² + 2 log(x) y + y² ≤ 4. For (MINLP^dp_ext), SCIP may replace log(x) by an auxiliary variable w^dp_2 , which yields

h^dp_1 (x, y, w^dp_2 ) := (w^dp_2 )² + 2 w^dp_2 y + y² = w^dp_1 ,
h^dp_2 (x, y) := log(x) = w^dp_2 ,
w^dp_1 ≤ 4.
(MINLP^lp_ext) could be very similar,

h^lp_1 (x, y, w^lp_2 ) := (w^lp_2 )² + 2 w^lp_2 y + y² ≤ w^lp_1 ,
h^lp_2 (x, y) := log(x) = w^lp_2 ,
w^lp_1 ≤ 4,

where equality has been chosen for h^lp_2 (x, y) = w^lp_2 because (w^lp_2 )² + 2 w^lp_2 y + y² is neither monotonically increasing nor monotonically decreasing in w^lp_2 . If, however, y ≥ 0 and x ≥ 1, then one may relax to log(x) ≤ w^lp_2 .
Next, consider the following slight modification:
log(x)2 + 4 log(x) y + y 2 ≤ 4.
SCIP may again replace log(x) by an auxiliary variable w2 , since that results in a
bivariate quadratic form, but the expression is not convex anymore. SCIP may therefore
decide to introduce additional auxiliary variables to disaggregate the quadratic form for
the purpose of constructing a linear relaxation. Therefore, while (MINLPdp ext ) would be
the same as above (with coefficient 2 changed to 4), (MINLPlp ext ) would be the result of
associating an auxiliary variable with every node of the expression graph:
w^lp_2 + 4 w^lp_3 + w^lp_4 ≤ w^lp_1 ,
(w^lp_5 )² ≤ w^lp_2 ,
w^lp_5 y ≤ w^lp_3 ,
y² ≤ w^lp_4 ,
log(x) = w^lp_5 ,
w^lp_1 ≤ 4.
4.2.3 Variable and Expression Locks
For constraints that are checked for feasibility, SCIP asks the constraint handler to
add down- and uplocks to the variables in the constraint. A downlock (uplock) indi-
cates whether decreasing (increasing) the variable could render the constraint infeasible.
While it would be valid to add both down- and uplocks for each variable, more precise information can be useful, for example, for the effectiveness of primal heuristics or dual presolving routines.
For constraints as in (MINLP), the monotonicity of g(x) and Ax with respect to a
specific variable and the finiteness of left- and right-hand sides (g, g, b, b) decides which
locks should be added. While for Ax it is sufficient to check the sign of matrix entries,
the monotonicity of g(x) can sometimes be deduced by analyzing the expression that
defines g(x). Since monotonicity of g(x) may depend on variable values, variable bounds
should be taken into account when deriving monotonicity information and variable locks.
To derive locks for variables, the concept of down- and uplocks is generalized
to expressions. That is, each expression e stores a number of down- and uplocks
(referred to as negative and positive locks in the code), which indicate
the number of constraints that could become infeasible when the value of e is decreased
or increased. For variable-expressions, these down- and uplocks are then exactly the
required down- and uplocks of the corresponding variables.
To start, take a constraint g̲_j ≤ g_j(x) ≤ ḡ_j and assume that the expression that
defines g_j(x) is given as g̃(f_1(x), f_2(x), . . .) for some operand g̃ and (sub)expressions
f_1, f_2, . . .. If ḡ_j < ∞, then increasing the value of g̃ could render the constraint infeasible,
so an uplock is added to g̃. Analogously, if g̲_j > −∞, then decreasing the value of g̃
could render the constraint infeasible, so a downlock is added to g̃.
Next, these locks are “propagated” to the children f1 , f2 , . . .. First, the monotonicity
of g̃ with respect to a child fk is checked by use of the MONOTONICITY callback of the
expression handler for g̃. If g̃ is monotonically increasing in fk , then increasing fk
could render those constraints infeasible that could become infeasible if g̃ is increased
and decreasing fk could render those constraints infeasible that could become infeasible
when g̃ is decreased. Therefore, down- and uplocks stored for g̃ are added to the down-
and uplocks, respectively, of fk . If g̃ is monotonically decreasing in fk , then increasing
fk would decrease g̃ and decreasing fk would increase g̃. Therefore, the downlocks of g̃
are added to the uplocks of fk and the uplocks of g̃ are added to the downlocks of fk .
Finally, if no monotonicity of g̃ in fk could be concluded, then the sum of down- and
uplocks of g̃ are added to both the down- and uplocks of fk .
This procedure is applied to all expressions f_1, f_2, . . . and recursively to their
successors. When a variable expression is encountered, the down- and uplocks in the
variable expression are added to the down- and uplocks of the variable. Therefore, in
contrast to linear and many other types of constraints in SCIP, a variable in a single
constraint can receive several down- or uplocks if it appears several times.
When constraints need to be “unlocked”, the same procedure is run, but down-
and uplocks are subtracted instead of added. To avoid that different monotonicity
information is used when removing locks because variable bounds were tightened in the
meantime, the computed monotonicity information is stored in an expression when it is
locked for the first time and removed when it is unlocked for the last time.
For an example, consider again the expression from Figure 2 and the constraint
log(x)² + 2 log(x) y + y² ≤ 4. Assume further that x̲ = 0, x̄ = 1, and ȳ = 0. The locks
for x and y are deduced as follows, see also Figure 3:
1. One uplock and no downlock are assigned to the sum-node, because the constraint
has a finite right-hand side and no left-hand side.
2. Since every coefficient in the sum is nonnegative, every child of the sum is assigned
one uplock and no downlock.
3. log(x)2 is monotonically decreasing in log(x) because log(x) ≤ 0, so that the uplock
of log(x)2 is added to the downlocks of log(x).
4. 2 log(x) y is monotonically decreasing in log(x) because 2y ≤ 0, so the uplock of
2 log(x) y is added to the downlocks of log(x).
5. 2 log(x) y is monotonically decreasing in y because 2 log(x) ≤ 0, so the uplock of
2 log(x) y is added to the downlocks of y.
6. y 2 is monotonically decreasing in y because y ≤ 0, so the uplock of y 2 is added to
the downlocks of y.
7. log(x) is monotonically increasing in x, so the downlocks of log(x) are added to the
downlocks of x.
Thus, eventually both x and y receive 2 downlocks, one for each appearance of the
variables in the expression. Presolvers, primal heuristics, or other plugins of SCIP may
now use the information that increasing the value of these variables in any feasible
solution does not render this constraint infeasible.
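The recursive procedure above can be sketched compactly. The following Python sketch is our own illustration (SCIP itself implements this in C via the MONOTONICITY callbacks); it propagates locks through a shared expression graph and reproduces the lock counts of the example:

```python
# Illustrative sketch of lock propagation through a (shared) expression graph.
# Monotonicity of a node with respect to a child is +1 (increasing),
# -1 (decreasing), or 0 (unknown), mimicking the MONOTONICITY callback.

class Expr:
    def __init__(self, name, children=(), mono=None):
        self.name = name
        self.children = list(children)
        self.mono = mono or (lambda k: 0)  # child index -> +1 / -1 / 0
        self.up = self.down = 0

def add_locks(expr, nup, ndown, var_locks):
    """Add nup uplocks and ndown downlocks to expr and propagate to children."""
    expr.up += nup
    expr.down += ndown
    if not expr.children:                       # variable-expression
        u, d = var_locks.get(expr.name, (0, 0))
        var_locks[expr.name] = (u + nup, d + ndown)
        return
    for k, child in enumerate(expr.children):
        m = expr.mono(k)
        if m > 0:
            add_locks(child, nup, ndown, var_locks)  # increasing: keep locks
        elif m < 0:
            add_locks(child, ndown, nup, var_locks)  # decreasing: swap locks
        else:
            s = nup + ndown
            add_locks(child, s, s, var_locks)        # unknown: sum to both

# log(x)^2 + 2 log(x) y + y^2 <= 4 with bounds x in [0,1] and y <= 0,
# so log(x) <= 0 on the domain; log(x) and y are shared nodes.
x, y = Expr("x"), Expr("y")
logx = Expr("log", [x], lambda k: +1)         # log is increasing in x
sq_log = Expr("sqr", [logx], lambda k: -1)    # t^2 decreasing for t = log(x) <= 0
prod = Expr("prod", [logx, y], lambda k: -1)  # 2 log(x) y decreasing in both
sq_y = Expr("sqr", [y], lambda k: -1)         # y^2 decreasing for y <= 0
root = Expr("sum", [sq_log, prod, sq_y], lambda k: +1)

locks = {}
add_locks(root, 1, 0, locks)   # finite right-hand side: one uplock on the root
print(locks)                   # {'x': (0, 2), 'y': (0, 2)}
```

Because log(x) and y are shared nodes, each incoming lock addition is propagated separately, so x and y end up with the two downlocks described above.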
[Figure 3: lock counts in the expression graph of Figure 2: the sum node has up:1 down:0, the log node and the y node each have up:0 down:2, and the x node has up:0 down:2.]
The construction of the extended formulations requires algorithms that analyze an ex-
pression for specific structures, for instance, quadratic or convex subexpressions as in
the previous example. Following the spirit of the plugin-oriented design of SCIP, these
algorithms are not hardcoded into cons_nonlinear, but are added as separate plugins,
referred to as nonlinear handlers. Besides detecting structures in expressions, nonlinear
handlers can also provide domain propagation and linear relaxation algorithms that
act on these structures. These plugins have to interact tightly with cons_nonlinear
and nonlinear constraints. Therefore, in contrast to other plugins in SCIP, nonlinear
handlers are managed by cons_nonlinear and not by the SCIP core.
In fact, cons_nonlinear acts both as a handler for nonlinear constraints and as a
“core” for the management and enforcement of the extended formulations (MINLPdp ext )
and (MINLPlp ext ). As a constraint handler, it checks nonlinear constraints for feasibil-
ity, adds them to the NLP relaxation, applies various presolving operations (see Sec-
tion 4.2.7), handles variable locks, and more. When it comes to domain propagation,
separation, and enforcement of nonlinear constraints (see Sections 4.2.8–4.2.11), the
constraint handler decides for which constraints in the extended formulations domain
propagation or separation should be tried and calls corresponding routines in nonlin-
ear handlers. When separation fails in enforcement, the constraint handler also selects
a branching variable from a list of candidates that has been assembled by nonlinear
handlers.
Since domain propagation, separation, and enforcement are partially “outsourced” to
nonlinear handlers, a certain similarity of nonlinear handler callbacks to constraint
handler callbacks is not surprising. A nonlinear handler can provide the following callbacks:
− COPYHDLR: include nonlinear handler in another SCIP instance;
− FREEHDLRDATA: free nonlinear handler data;
− FREEEXPRDATA: free expression-specific data of nonlinear handler;
− INIT: initialization;
− EXIT: deinitialization;
− DETECT: analyze a given expression (h_i^dp(·) and/or h_i^lp(·)) for a specific structure
and decide whether to contribute to domain propagation for h_i^dp(·) = w_i^dp or to the
linear relaxation of h_i^lp(·) ⋚_i w_i^lp (implementation of this callback is mandatory);
− EVALAUX: evaluate the expression with respect to the auxiliary variables in its descendants, that
is, compute h_i^lp(x, w_{i+1}^lp, . . . , w_{m^lp}^lp);
− INTEVAL: evaluate expression with respect to current bounds on variables, that is,
compute an interval enclosure of (13);
− REVERSEPROP: tighten bounds on descendants, that is, compute interval enclosures
of (14) and (15) and update bounds xj , xj , wj , wj accordingly;
− INITSEPA: initialize separation data and add initial linearization of (16) to the LP
relaxation;
− EXITSEPA: deinitialize separation data;
− ENFO: given a point (x̂, ŵ), create a bound change or add a cutting plane that separates
this point from the feasible set; usually, this routine tries to improve the linear
relaxation of h_i^lp(x, w_{i+1}^lp, . . . , w_{m^lp}^lp) ⋚_i w_i^lp; if neither a bound change nor a cutting
plane was found, register variables for which reducing their domain might help to
make separation succeed;
− ESTIMATE: given a point (x̂, ŵ), compute a linear under- or overestimator of the function
h_i^lp(x, w_{i+1}^lp, . . . , w_{m^lp}^lp) that is as tight as possible in (x̂, ŵ) and valid with respect to
either the local or global bounds on x and w^lp; further, register variables for which
reducing their domain might help to produce a tighter estimator.
More details on the exact input and output of these callbacks are given in the SCIP
documentation.
they are by default) and SCIP is not in presolve, then the slack variable w_i^lp and the
constraint h_i^lp(x) ⋚_i w_i^lp with h_i^lp ≡ g_i are added to (MINLP^lp_ext).² Thereby, ⋚_i is decided
2w_2^dp y + y². Similarly, the same or another nonlinear handler may decide that it can
provide a linear relaxation for the inequality if log(x) were replaced by a variable. It will
introduce an auxiliary variable w_2^lp and a constraint w_2^lp = log(x). Then it will change
log(x)² + 2 log(x) y + y² ≥ w_1^lp to (w_2^lp)² + 2w_2^lp y + y² ≥ w_1^lp. In addition, the nonlinear
handler then indicates that tight bounds for w_2^lp and y are required to compute the
linear relaxation. This again triggers the introduction of an auxiliary variable w_2^dp and
a constraint w_2^dp = log(x), given that they do not exist already.
To ensure that there always exist routines that can provide domain propagation and
linear relaxation for an expression, the “fallback” nonlinear handler default is avail-
able. This nonlinear handler resorts to callbacks of expression handlers (see Section 4.1)
² To be exact, extended formulations are not created explicitly and slack variables are not created at
this stage, but the top of the expression g_i in the nonlinear constraints is marked for propagation and/or
separation. Variables w_i^lp are added in SCIP when the LP relaxation is initialized. For simplicity, these
technicalities are omitted here.
to provide the necessary functionalities. However, while nonlinear handlers are usually
meant to handle larger parts of an expression, the methods implemented by the expression
handlers are limited to the immediate children of an expression and thus have a
rather myopic view on the expression. Therefore, the DETECT callback of the default
nonlinear handler is called with a low priority. It then decides whether it contributes
domain propagation or linear relaxation depending on what other nonlinear handlers
have declared before. If the nonlinear handler decides to contribute, it will introduce
auxiliary variables w^dp and/or w^lp for all immediate children of the current expression.
The other callbacks of the default nonlinear handler, in particular EVALAUX, INTEVAL,
REVERSEPROP, and ESTIMATE, can then utilize the corresponding “myopic” callbacks of the
expression handlers.
4.2.7 Presolve
Simplify The simplify callbacks of expression handlers are called to bring the expres-
sions into a canonical form. For example, recursive sums and products are flattened
and fixed or aggregated variables are replaced by constants or sums of active variables.
See the documentation of function SCIPsimplifyExpr() for a more exhaustive list of
applied simplifications.
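One of the simplifications mentioned above, the flattening of recursive sums, can be sketched as follows. The representation of a sum as a constant plus coefficient–term pairs is our own simplification, not SCIP's internal data structure:

```python
# Hedged sketch of sum flattening: a sum is represented as
# ("sum", constant, [(coef, term), ...]); nested sums are merged into the parent
# so that, in canonical form, no sum has a sum as a child.

def flatten_sum(const, terms):
    """Flatten nested sums; returns (constant, list of (coef, term))."""
    flat_const, flat_terms = const, []
    for coef, term in terms:
        if isinstance(term, tuple) and term[0] == "sum":
            _, inner_const, inner_terms = term
            ic, inner_flat = flatten_sum(inner_const, inner_terms)
            flat_const += coef * ic
            flat_terms += [(coef * c, t) for c, t in inner_flat]
        else:
            flat_terms.append((coef, term))
    return flat_const, flat_terms

# 1 + 2*(3 + 4*x + y) + z  ->  7 + 8*x + 2*y + 1*z
inner = ("sum", 3, [(4, "x"), (1, "y")])
const, terms = flatten_sum(1, [(2, inner), (1, "z")])
print(const, terms)   # 7 [(8, 'x'), (2, 'y'), (1, 'z')]
```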
Common Subexpressions Subexpressions that appear several times are identified and
replaced by a single expression. This also ensures that every variable is represented by
only one variable-expression across all constraints and that for expressions that appear in
several nonlinear constraints at most one auxiliary variable is introduced in the extended
formulations. However, sums that are part of other sums are currently not identified,
since in the canonical form no sum can have a sum as a child. The same holds for
products. The HASH and COMPARE callbacks of the expression handlers are used to identify
common subexpressions.
Scaling For constraints for which the expression is a sum (which it always is if there
is a constant factor different from 1.0), it is ensured that the number of terms with
positive coefficients is at least the number of terms with negative coefficients by scaling
the constraint with −1. If there are as many positive as negative coefficients, then it is
ensured that the right-hand side is not +∞. This canonicalization step can be useful
for the next point.
Merge Constraints Nonlinear constraints that share the same expression are merged.
Constraint Upgrading Upgrades to other constraint types are checked. Most importantly,
nonlinear constraints that are linear after simplification are replaced by constraints
that are handled by cons_linear. Further, constraints that can be written as
(x − a_x) · (y − a_y) = 0 with x and y binary variables and a_x, a_y ∈ {0, 1} are replaced by
set-packing constraints.
Linearization of Binary Products Products of binary variables are linearized. This is
done in a way that is similar to previous SCIP versions [113], but the consideration of
cliques is new:
− In the simplest case, a product ∏_i x_i is replaced by a new variable z and a constraint
of type “and” is added that models z = ⋀_i x_i. The “and”-constraint handler will
then separate a linearization of this product [17].
− Optionally, for a product of only two binary variables, xy, the linearization can be
added directly as linear constraints (x ≥ z, y ≥ z, x + y ≤ 1 + z).
− For a product of two binary variables, xy, it is checked whether x (or its negation)
and y (or its negation) are contained in a common clique. Taking this information
into account allows for simpler linearizations of xy. For example, x and y being in a
common clique implies x + y ≤ 1 and thus xy = 0. Analogously, x + (1 − y) ≤ 1 gives
xy = x, (1 − x) + y ≤ 1 gives xy = y, and (1 − x) + (1 − y) ≤ 1 gives xy = x + y − 1.
− Replacing every product in a large quadratic term ∑_{i,j} Q_ij x_i x_j by a new variable
and constraint can increase the problem size enormously. SCIP therefore checks
whether there exist sums of the form x_i ∑_j Q_ij x_j (Q_ij ≠ 0) with at least 50 terms
and replaces them by a single variable z_i and the linearization
Q̲ x_i ≤ z_i,
z_i ≤ Q̄ x_i,
Q̲ ≤ ∑_j Q_ij x_j − z_i + Q̲ x_i,
Q̄ ≥ ∑_j Q_ij x_j − z_i + Q̄ x_i,
where Q̲ := ∑_j min(0, Q_ij) and Q̄ := ∑_j max(0, Q_ij). This usually gives a looser LP
relaxation than when each product x_i x_j is replaced individually, but has the
advantage that fewer variables and constraints need to be introduced. Variable z_i is
marked as implicit integer if all coefficients Q_ij are integer. Variables x_i that
appear in the highest number of bilinear terms are prioritized.
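As a sanity check, the four inequalities above can be verified to be a valid linearization of z_i = x_i ∑_j Q_ij x_j for binary x. In this sketch (all names are ours), Qlo and Qup play the roles of Q̲ and Q̄:

```python
# Hedged sketch: verify by enumeration that the four-inequality system above
# is satisfied by z = x_i * sum_j Q_ij x_j for every binary assignment.
from itertools import product

def linearization_holds(Qi, xs, xi, z, tol=1e-9):
    Qlo = sum(min(0.0, q) for q in Qi)   # underlined Q
    Qup = sum(max(0.0, q) for q in Qi)   # overlined Q
    s = sum(q * xj for q, xj in zip(Qi, xs))
    return (Qlo * xi <= z + tol and z <= Qup * xi + tol
            and Qlo <= s - z + Qlo * xi + tol
            and Qup >= s - z + Qup * xi - tol)

Qi = [2.0, -1.0, 3.0]
ok = all(
    linearization_holds(Qi, xs, xi, xi * sum(q * xj for q, xj in zip(Qi, xs)))
    for *xs, xi in product([0, 1], repeat=4)
)
print(ok)   # True
```

A point with z far away from x_i ∑_j Q_ij x_j violates the system, which is exactly what makes the linearization usable for separation.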
Identification of Integrality For constraints that can be written as ∑_i a_i f_i(x) + b y = c,
b ≠ 0, it is checked whether the variable type of y can be changed to implicitly integer.
Storing the information that a continuous variable can take only integer values in a
feasible solution can be useful in the solving process, for example, when branching on y.
To change the type of y, the following conditions need to be satisfied: y is of continuous
type, a_i/b ∈ ℤ, c/b ∈ ℤ, and f_i(x) ∈ ℤ for solutions that satisfy the integrality requirements of
(MINLP) (x_I ∈ ℤ^I). To determine the latter, the INTEGRALITY callback of the expression
handlers is used.
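The conditions on y can be sketched as a small predicate (the helper name is ours, and the boolean list stands in for the results of the INTEGRALITY callback):

```python
# Hedged sketch: y in sum_i a_i f_i(x) + b*y = c may be marked implicitly
# integer if a_i/b and c/b are integral and every f_i takes integral values
# on integral x (as reported by the INTEGRALITY callback).

def implies_integral_y(a, b, c, fis_integral, tol=1e-9):
    if b == 0 or not all(fis_integral):
        return False
    def is_int(v):
        return abs(v - round(v)) <= tol
    return all(is_int(ai / b) for ai in a) and is_int(c / b)

print(implies_integral_y([2.0, 4.0], 2.0, 6.0, [True, True]))   # True
print(implies_integral_y([1.0], 2.0, 6.0, [True]))              # False: 1/2 not integral
```

Dividing the equation by b gives y = c/b − ∑_i (a_i/b) f_i(x), which is integral under exactly these conditions.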
If x̲ = 0 and x̄ = 1, then x is transformed into a binary variable. Otherwise, a bound
disjunction constraint (x ≤ x̲) ∨ (x ≥ x̄) is added. This “upgrade” of continuous variables
to discrete ones has been shown to be particularly effective for box-QP instances.
Identification of Unlocked Linear Variables Since SCIP supports only linear objective
functions, problems with a nonlinear objective function are reformulated by the readers
of and interfaces to SCIP into ones with a linear objective function (min f(x) becomes
min z s.t. f(x) ≤ z). To ensure feasibility of such artificial constraints, nonlinear
constraints are checked for a variable x_i, i ∈ {1, . . . , n}, that appears linearly and whose
value could be increased or decreased in a solution without the risk of violating other
constraints (see also Section 4.2.3). When a solution candidate violates a nonlinear
constraint for which such a variable x_i has been identified, the constraint handler postprocesses
this solution by adjusting the value of x_i such that the constraint becomes feasible. This
modified solution is then passed on to the primal heuristic “trysol”, which will suggest it to
the SCIP core the next time this primal heuristic is run.
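The adjustment of x_i can be sketched as follows; the helper name and calling convention are ours, not SCIP's API:

```python
# Hedged sketch: if x_i appears linearly with coefficient a_i in a violated
# constraint g(x) <= rhs and its locks permit moving it freely, shift x_i by
# exactly enough to remove the excess violation.

def repair_with_linear_var(g_value, a_i, x_i, rhs):
    """Return the adjusted value of x_i so that the constraint value drops to rhs."""
    excess = g_value - rhs
    if excess <= 0:
        return x_i                # already feasible, nothing to do
    return x_i - excess / a_i     # changes the constraint value by -excess

# Objective reformulation f(x) <= z: g = f(x) - z, coefficient of z is -1,
# so z is raised until f(x) - z = 0.
print(repair_with_linear_var(0.25, -1.0, 3.0, 0.0))   # 3.25
```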
Bound Tightening Domain propagation is run (see Section 4.2.8) to tighten variable
bounds and to identify redundant or always-infeasible constraints (g([x̲, x̄]) ⊆ [g̲, ḡ] or
g([x̲, x̄]) ∩ [g̲, ḡ] = ∅). The extended formulation (MINLP^dp_ext) for domain propagation is
constructed to make use of the INTEVAL and REVERSEPROP callbacks of nonlinear handlers.
Further, bounds that are implied by the domain of expressions are enforced where
possible; for example, the lower bound for arguments of log(x) or x^p with p ∉ ℤ is set to a
small positive value.
{x ∈ [x̲, x̄] : g̲_j ≤ g_j(x) ≤ ḡ_j}
bounds), then h_i^dp is queued for backward propagation. The backward propagation
queue is then processed in breadth-first order. Each time the interval enclosure of
(15) provides a sufficient tightening of [w̲_j^dp, w̄_j^dp], h_j^dp is appended to the backward
propagation queue. If a bound tightening for some x_j is derived from (14), constraints
that contain x_j are marked for propagation again, so that another forward pass may
start after the current backward pass.
The following mentions a few more subtleties.
Auxiliary Variables in (MINLP^lp_ext) Recall that the DETECT callback of a nonlinear handler
can request bound updates for auxiliary variables in (MINLP^lp_ext), see Section 4.2.5.
Thus, if h_i^dp is not only associated with w_i^dp but also with an auxiliary variable w_{i'}^lp, then the
bounds on w_{i'}^lp are tightened, too.
Reducing Side Effects The bounds computed in a backward pass are stored separately
from those computed by the forward pass. That is, tightened bounds on w^dp are not
immediately used to compute the bounds on functions that use w^dp. Instead, a bound
tightening on an auxiliary variable in the backward pass first has to result in a bound
change on an original variable x, which should then result in tighter bounds computed
by the forward pass. A reason for this implementation detail is to reduce side effects
of the backward propagation in one node of the branch-and-bound tree on
the domain propagation in another node of the tree. With the current implementation,
the domain propagation in a node depends only on the bounds of the SCIP variables x and
w^lp, but not on bounds on w^dp that were computed by backward propagation in a different
part of the tree.
Handling Rounding Errors in Variable Bounds and Constraint Sides While the domain
propagation in the constraint handler and in the expression and nonlinear handlers is
implemented using interval arithmetic with outward rounding, this is not the case
for many other parts of SCIP. For this reason, variable bounds are relaxed by a small
amount when entering the forward pass. Since these small relaxations result in overestimates
for the intervals of all following computations, several cases deserve special
treatment:
− By default, a bound b is relaxed by 10^−9 max(1, |b|).
− If, however, the domain width is small but the bound itself is large, then relaxing
by 10^−9 |b| can have a large impact. Therefore, the bound relaxation is additionally
restricted to 10^−3 times the width of the domain.
− Bounds on integer variables (including implicit integers) are not relaxed.
− Since integral values, especially 0, often have a special meaning, bounds are not
relaxed beyond the next integer value.
Constraint sides are relaxed by a small amount, too. Here, an absolute relaxation of
10^−9 is applied.
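The relaxation rules for a lower bound can be sketched as a single helper (the function is our own condensation of the rules above, not SCIP's code):

```python
# Hedged sketch of the bound-relaxation rules: relax the lower bound lb of a
# variable with domain [lb, ub] before the forward propagation pass.
import math

def relax_lower_bound(lb, ub, is_integer):
    if is_integer:
        return lb                              # integer bounds: no relaxation
    amount = 1e-9 * max(1.0, abs(lb))          # default relaxation amount
    amount = min(amount, 1e-3 * (ub - lb))     # cap by 1e-3 times domain width
    relaxed = lb - amount
    return max(relaxed, math.floor(lb))        # not beyond the next integer

print(relax_lower_bound(0.5, 1.5, False))   # slightly below 0.5
print(relax_lower_bound(3, 8, True))        # 3: integer bounds untouched
```

Note how a bound like lb = 10⁻¹⁰ is relaxed only down to 0, respecting the rule that bounds are not moved past the next integer value.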
Finally, when updating existing bounds on original or auxiliary variables with
newly computed ones, the latter are slightly relaxed if the new interval has a nonzero
distance of less than numerics/epsilon = 10^−9 to the existing domain. That is, instead
of concluding infeasibility for the current subproblem, the variable is fixed to the bound
that is closest to the new interval.
Special Case: Redundancy Check When checking whether a constraint can be deleted
because it is redundant, it needs to be ensured that the constraint is also satisfied by a
solution that violates the variable bounds by a small amount. Otherwise, a feasibility check
for the solution in the original problem can fail. Hence, when doing a forward pass
for the redundancy check, the bounds of all unfixed variables are relaxed by the feasibility
tolerance of SCIP, independent of the variable type. Further, the constraint sides are relaxed
by the feasibility tolerance as well.
Stopping Criterion In the backward pass, usually only tightenings that make sufficient progress
on the bounds of the variables w^dp and x are applied. This avoids many rounds
of bound tightening that make only little progress. New bounds are considered sufficiently
better than previous ones if the variable gets fixed, the relative improvement of a bound
is at least numerics/boundstreps = 5%, or a bound changes sign, that is, is moved to or
beyond zero.
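The acceptance test for a tightened lower bound can be sketched as follows. The helper name and the exact normalization of the relative improvement are our assumptions; only the three acceptance conditions come from the text:

```python
# Hedged sketch of the "sufficiently better bound" test: accept a new lower
# bound if the variable becomes fixed, the relative improvement is at least
# numerics/boundstreps = 5%, or the bound moves to or beyond zero.

def sufficient_progress(oldlb, newlb, ub, boundstreps=0.05):
    if newlb >= ub:                  # variable gets fixed
        return True
    if oldlb < 0.0 <= newlb:         # bound changes sign
        return True
    # one plausible normalization of "relative improvement"
    improvement = (newlb - oldlb) / max(1.0, abs(oldlb))
    return improvement >= boundstreps

print(sufficient_progress(0.0, 0.001, 10.0))   # False: tiny progress
print(sufficient_progress(0.0, 0.5, 10.0))     # True: >= 5% improvement
print(sufficient_progress(-0.3, 0.0, 10.0))    # True: moved to zero
```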
After presolve, SCIP calls the constraint handlers to initialize their data structures for
the branch-and-bound process and to initialize the linear and nonlinear relaxations of
the problem. For cons_nonlinear, the following operations are performed.
Nonlinear Relaxation For each constraint of (MINLP), a simple check for convexity
and concavity of the function g_i(x) on [x̲, x̄] is done. This uses the CURVATURE callback of
the expression handlers. The constraint is added to the NLP relaxation of SCIP and
the row in the NLP is marked as convex or concave, if possible. This information is
picked up by other plugins that work on a convex nonlinear relaxation of the problem,
for example, sepa_convexproj and prop_nlobbt.
Collect Square and Bilinear Terms All expressions in (MINLP^lp_ext) that are of the form
xy or x² (where x and y can be either original or auxiliary variables) are collected in
a data structure that is easy to traverse and search. This is used by some plugins that
work on bilinear terms (Sections 4.5, 4.9).
4.2.10 Separation
After SCIP has solved the LP relaxation for a node of the branch-and-bound tree, it calls the
separation callbacks of the constraint handlers and separators to check whether a cutting
plane that separates the current LP solution (x̂, ŵ) is available. For a constraint g̲_i ≤
g_i(x) ≤ ḡ_i of (MINLP) that is violated by x̂, the corresponding extended formulation is
checked for separating cutting planes. During separation, only “strong” cuts are desired, by
which cutting planes are meant that are more than just barely violated by (x̂, ŵ). The
quantification of “more than just barely” is left to the separation algorithm (discussed
below and in the following sections).
First, if h_i^lp(x, w_{i+1}^lp, . . . , w_{m^lp}^lp) ⋚_i w_i^lp is violated by (x̂, ŵ), then the nonlinear handlers
that registered to contribute to the linear relaxation of this constraint are called. For a
nonlinear handler that implements the ENFO callback, it is left completely to the nonlinear
handler to decide how to separate (x̂, ŵ) from (16). The callback is also informed that
only “strong” cuts are desired and that candidates for branching are not to be collected. If the ENFO
callback is not implemented, then the ESTIMATE callback must be implemented. In that case,
a linear under- or overestimator of h_i^lp(·) is requested from the nonlinear handler and
completed to a cutting plane. The cutting plane is deemed “strong” if the estimator
is sufficiently close to the value of h_i^lp(·) in (x̂, ŵ).
Formally, assume that h_i^lp(x̂, ŵ_{i+1}^lp, . . . , ŵ_{m^lp}^lp) > ŵ_i^lp and that a nonlinear handler
provides a linear underestimator ℓ(x, w_{i+1}^lp, . . . , w_{m^lp}^lp) of h_i^lp(·) with respect to the current
local variable bounds. If ℓ(x̂, ŵ_{i+1}^lp, . . . , ŵ_{m^lp}^lp) > ŵ_i^lp, then ℓ(x, w_{i+1}^lp, . . . , w_{m^lp}^lp) ≤ w_i^lp is a
cutting plane that separates (x̂, ŵ) and is valid for the current branch-and-bound node
(and it is valid globally if ℓ(·) does not depend on local variable bounds). Further, the
cut is regarded as strong if
ℓ(x̂, ŵ_{i+1}^lp, . . . , ŵ_{m^lp}^lp) ≥ ŵ_i^lp + α(h_i^lp(x̂, ŵ_{i+1}^lp, . . . , ŵ_{m^lp}^lp) − ŵ_i^lp),  (17)
inspected and the nonlinear handlers associated with h_{i'}^lp are called for separation or linear
under-/overestimation.
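Criterion (17) is easy to state in code. In the following sketch, the helper name and the default value of α are ours for illustration only:

```python
# Hedged sketch of criterion (17): the underestimator's value at the reference
# point must close at least a fraction alpha of the violation
# h_i^lp(...) - w_i^lp for the resulting cut to count as "strong".

def is_strong_cut(ell_at_ref, w_hat, h_at_ref, alpha=0.5):
    """True iff ell(...) <= w_i^lp satisfies (17) at the reference point."""
    return ell_at_ref >= w_hat + alpha * (h_at_ref - w_hat)

# h value 4.0 vs auxiliary variable value 1.0: violation is 3.0.
print(is_strong_cut(3.0, 1.0, 4.0))   # True: estimator closes 2/3 of the gap
print(is_strong_cut(1.5, 1.0, 4.0))   # False: cut only barely violated
```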
Again, some more subtleties are discussed next.
Constraints to Separate Separation is not called for every violated nonlinear constraint
of (MINLP^lp_ext). For a subexpression h_{i'}^lp(·) of h_i^lp(·) (including h_i^lp(·) itself), i ∈ {1, . . . , m},
separation is only called if the absolute violation of
h_{i'}^lp(x, w_{i+1}^lp, . . . , w_{m^lp}^lp) ⋚_i w_{i'}^lp
constraints. In terms of the first example from Section 4.2.2 (log(x)² + 2 log(x)y + y² ≤ 4),
this means that if the violation of (w_2^lp)² + 2w_2^lp y + y² = w_1^lp is very small in comparison
to the violation of log(x)² + 2 log(x)y + y² ≤ 4, then separation for the quadratic
equation is suspended until the violation of log(x) = w_2^lp has been sufficiently reduced.
Further, separation is skipped for nonlinear constraints of (MINLP^lp_ext) if their absolute
violation is below the feasibility tolerance of SCIP, as no strong cuts are expected
in this case.
Linearization in Incumbents In the last decades, solvers for convex MINLPs have
demonstrated that the choice of the reference point in which to linearize convex nonlinear
constraints is essential. While using the solution of the LP relaxation still leads to
a convergent algorithm [54], better performance is achieved by using a reference point
that is close to or at the boundary of the feasible region [26, 97]. Therefore, the
new implementation of cons_nonlinear also includes a feature where feasible solutions are
used as reference points to generate cutting planes.
That is, whenever a primal heuristic finds a new feasible solution x*, SCIP iterates
through the nonlinear constraints of (MINLP^lp_ext) in reverse order, sets (w_i^lp)* :=
h_i^lp(x*, (w_{i+1}^lp)*, . . . , (w_{m^lp}^lp)*) and calls the ESTIMATE callback of the registered nonlinear
handler (if it implements this callback) with (x*, w*) as reference point. If a globally
valid underestimator ℓ(x, w_{i+1}^lp, . . . , w_{m^lp}^lp) is returned with ℓ(x*, (w_{i+1}^lp)*, . . . , (w_{m^lp}^lp)*) =
(w_i^lp)* (that is, it supports h_i^lp(·) at (x*, w*)), then the cut ℓ(x, w_{i+1}^lp, . . . , w_{m^lp}^lp) ≤ w_i^lp
is added to the cut pool of SCIP. Overestimators are handled analogously. However, since
this feature gave mixed computational results when it was added, it is currently disabled
by default (parameter constraints/nonlinear/linearizeheursol).
4.2.11 Enforcement
The enforcement callbacks of constraint handlers are the ones where resolving infeasibility
of solutions has to be taken most seriously. While domain propagation and
separation callbacks are allowed to return empty-handed, the enforcement for nonlinear
constraints needs to find some action to enforce violated nonlinear constraints in a
given solution point. Especially when points are almost feasible, that is, when violations
in (MINLP^lp_ext) are small (reconsider also the motivating example from Section 4.2.1),
enforcing constraints can be difficult and some measures taken may appear desperate.
In summary, the constraint handler attempts to enforce constraints of (MINLP)
by separation on (MINLP^lp_ext), domain propagation on (MINLP^dp_ext), or branching on a
variable x_i, i ∈ {1, . . . , n}.
In the unlikely case that no relaxation has been solved, the constraint handler is
asked to enforce the pseudo-solution (ENFOPS callback), that is, a vertex of the variables'
domains with best objective function value. In this case, domain propagation is called (see
Section 4.2.8). If no bound change is found and infeasibility of the node is not concluded,
then all variables in violated nonlinear constraints whose domain width is larger than
ε (numerics/epsilon) are registered as branching candidates. The branching rules of
SCIP for external branching candidates will then take care of selecting a variable for
branching. If no branching candidate could be found, then it is not clear whether a
feasible solution remains in the current node (though the relevant domains are tiny). In
this case, the constraint handler instructs SCIP to solve the LP relaxation.
When the constraint handler has to enforce a solution (x̂, ŵ) of the LP relaxation
(ENFOLP callback), then the following steps are taken:
1. The violation of the solution in (MINLP) and (MINLP^lp_ext) is analyzed. Let v^g, v^h,
and v^b be the maximal absolute violation of the nonlinear constraints in (MINLP),
the nonlinear constraints in (MINLP^lp_ext), and the bounds of variables in nonlinear
constraints (x̲, x̄, w̲, w̄), respectively. Further, let tol^feas be the feasibility tolerance of
SCIP (numerics/feastol), tol^lp be the current primal feasibility tolerance of the LP
solver, and ε be the value of numerics/epsilon. By default, tol^feas = tol^lp = 10^−6
and ε = 10^−9. Thus, if v^g ≤ tol^feas, then all nonlinear constraints are satisfied with
respect to SCIP's feasibility tolerance and no enforcement is necessary. Further, note
that SCIP itself already ensures v^b ≤ tol^lp and tol^lp ≤ tol^feas.
2. If v^b > v^h, that is, the violation of variable bounds is larger than the violation of
the nonlinear constraints in (MINLP^lp_ext), then the chances to derive cutting planes from
(MINLP^lp_ext) that separate (x̂, ŵ) are low. This is because methods that work on
nonconvex constraints often take variable bounds into account and do not work well
when the reference point is outside these bounds. Hence, if v^b > v^h and tol^lp > ε,
then tol^lp is reduced to max(ε, v^b/2) and a resolve of the LP is triggered.
3. If v^h < tol^lp, that is, the violations of the nonlinear constraints in (MINLP^lp_ext) are below
the feasibility tolerance of the LP solver, then deriving a valid cut that is violated in
the current LP solution by more than tol^lp can be very difficult. Therefore, if also
tol^lp > ε, then tol^lp is reduced to max(ε, v^h/2) and a resolve of the LP is triggered.
4. The separation algorithm from Section 4.2.10 is called with some additional flags that
indicate that it is called from the enforcement callback. These additional flags extend
the separation algorithm as follows.
− When the ENFO or ESTIMATE callbacks of a nonlinear handler are called, they
are instructed to register variables x_j or w_i^lp for branching, if useful. A variable
should be registered as a branching candidate if branching on that variable could
result in finding tighter cutting planes in the resulting subproblems. Usually, this
is the case when a convexification gap was introduced due to the convexification of a
nonconvex function with respect to the current variable domain. Thus, nonlinear
handlers that underestimate convex expressions usually do not register branching
candidates.
− A forward pass of domain propagation in (MINLP^dp_ext) (see Section 4.2.8) is run
to ensure that recent bound tightenings are taken into account.
− Recall that for a violated constraint g̲_i ≤ g_i(x) ≤ ḡ_i with i ∈ {1, . . . , m}, the constraint
h_i^lp(x, w_{i+1}^lp, . . . , w_{m^lp}^lp) ⋚_i w_i^lp and subexpressions of h_i^lp are tried for separation. If
for none of them a “strong” cut could be found, no branching candidate was registered,
and the violation of the constraint g̲_i ≤ g_i(x) ≤ ḡ_i is at least 0.5 v^g (parameter
constraints/nonlinear/weakcutminviolfactor), then separation is repeated
without the requirement that cutting planes need to be “strong”.
− Dropping the requirement for “strong” cuts has various consequences for the separation
algorithm described in Section 4.2.10: The requirement that the absolute
violation of constraints of (MINLP^lp_ext) is at least tol^feas is dropped (recall again
the motivating example from Section 4.2.1 where a solution was feasible with
respect to tol^feas for (MINLP^lp_ext) but not feasible for (MINLP)).
Instead of (17), it is now sufficient that the violation of the cutting plane in (x̂, ŵ)
is at least tol^lp.
The cleanup of the cut is modified to take the minimal violation tol^lp into account.
That is, if the violation is in [10ε, tol^lp], then the cut is scaled up to reach a violation
of 10^−4 (parameter separating/minefficacy(root)), if possible³, or at least
tol^lp. Step 2 in the original cut cleanup (scale to get coefficients into [10^−4, 10^4])
is replaced by scaling down the cut to achieve |a|₁ < 10/tol^feas, if this is possible
without the violation dropping below tol^lp.
Since cuts with violations that are just barely above the feasibility tolerances are
allowed, an attempt is made to ensure that floating-point round-off errors do not falsify
the magnitude of the calculated violation. For that, the violation of the cut
∑_j a_j x_j ≤ b is required to be sufficiently large when compared to the terms of the cut,
that is, at least 2^−50 max(|b|, max_j |a_j x̂_j|). The value 50 has been chosen
because the mantissa of a floating-point number in double precision has 52 bits.
The cut cleanup procedure is instructed to record for which variables it has modified
coefficients in order to achieve the desired coefficient range or to avoid coefficients
within ε of an integral value. If the cleanup failed to produce a violated cut, then
these variables are registered as branching candidates (auxiliary variables may be
mapped onto original variables, though; see Section 4.2.12). The motivation is that
since a bound of these variables was used to relax the cut, a smaller domain may
result in less relaxation and thus a higher chance of finding a violated cut.
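The numerically-safe violation test above can be sketched as follows. This is an illustrative Python rendering of the $2^{-50}$ rule, not SCIP's implementation (which lives in C, in SCIPcleanupRowprep and related routines); the function name is hypothetical.

```python
# Sketch of the numerically-safe violation test described above
# (illustrative only, not SCIP's actual C implementation).

def violation_is_reliable(a, xhat, b, mantissa_bits=50):
    """Check that the violation of sum_j a_j*x_j <= b at xhat is large
    enough relative to the cut's terms to trust it despite round-off."""
    activity = sum(aj * xj for aj, xj in zip(a, xhat))
    violation = activity - b  # > 0 means the cut is violated
    # require violation >= 2^-50 * max(|b|, max_j |a_j * xhat_j|)
    scale = max(abs(b), max(abs(aj * xj) for aj, xj in zip(a, xhat)))
    return violation >= 2.0 ** (-mantissa_bits) * scale

# a violation of 1e-3 is not trusted when cut terms are of size 1e16,
# since it could be pure cancellation error; ok is False here
ok = violation_is_reliable([1e16, -1e16], [1.0, 1.0], -1e-3)
```

The threshold scales with the largest term of the cut, so the same absolute violation is accepted for a well-scaled cut but rejected for a badly scaled one.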
Case Study Most of the “cut cleanup” routines have been added to improve numer-
ical stability on test instances. One of the more peculiar cases is detailed in the
following. On instance ex1252 from MINLPLib, constraint e4 is originally given as
$-6.52\,(0.00034\,x_6)^3 - 0.102\,(0.00034\,x_6)^2 x_{12} + 7.86\cdot10^{-8}\, x_{12}^2 x_6 + x_3 = 0$
(coefficients have been rounded). After simplification of expressions, this is represented in SCIP as
$x_3 - 2.54\cdot10^{-10} x_6^3 - 1.17\cdot10^{-8} x_6^2 x_{12} + 7.86\cdot10^{-8} x_6 x_{12}^2 = 0$. When (MINLP^lp_ext) is constructed,
none of the specialized nonlinear handlers detects a structure, so the nonlinear handler “default”
introduces an auxiliary variable for each nonlinear term. The resulting constraint⁴
in (MINLP^lp_ext) is $x_3 - 2.54\cdot10^{-10} w_{13} - 1.17\cdot10^{-8} w_{14} + 7.86\cdot10^{-8} w_{16} = w_{12}$. Assume
that in a solution the value of the left-hand side is below the one of the right-hand side.
Though the constraint is actually linear, enforcing it uses the separation procedures of the
nonlinear handler. The cut that is generated with the help of the expression handler
“sum” is therefore, not surprisingly, $x_3 - 2.54\cdot10^{-10} w_{13} - 1.17\cdot10^{-8} w_{14} + 7.86\cdot10^{-8} w_{16} \ge w_{12}$.
³ See the implementation of SCIPcleanupRowprep() for more details.
This cut is marked as globally valid. Next, the cut cleanup procedure is run and recognizes
that the coefficient range is $\approx 10^{10} > 10^7$. It then uses the variable bounds
at the current node to eliminate variables from the cut until the coefficient range is
sufficiently reduced. Apparently, the least relaxation is necessary if the terms for $x_3$,
$w_{12}$, and $w_{13}$ are removed. The resulting cut, now only valid for the current node, is
$7.86\cdot10^{-8} w_{16} - 1.17\cdot10^{-8} w_{14} \ge -13.94$, which turns out to be no longer violated by
the solution to be separated. Since the relaxation of the cut used the bounds of $x_3$,
$w_{12}$, and $w_{13}$, the only choice left to resolve the violation is to tighten these bounds.
Therefore, variables $x_3$ and $x_6$ (due to $w_{13} = x_6^3$) are registered as branching candidates
(in the current implementation, only the left-hand sides of constraints in (MINLP^lp_ext) are
considered). In a later node, the whole procedure repeats, but since the variable bounds are
tighter, cut cleanup results in the cut $7.86\cdot10^{-8} w_{16} - 1.17\cdot10^{-8} w_{14} \ge -12.68$, which has
a higher chance to be violated. Eventually the instance can be solved to a gap below 1%,
but the challenging numerical properties and the costly way they are currently handled
take their toll on the performance.
5. If the separation algorithms were not successful, but branching candidates have been
collected, then these candidates are either passed on to the SCIP core as external
branching candidates or the branching rules of the nonlinear constraint handler are
employed. The latter is currently the default (parameter constraints/nonlinear/branching/external)
and is described in more detail in Section 4.2.12 below.
In most situations, it is either possible to separate an infeasible solution or to find
a variable in a nonconvex term such that branching on that variable should reduce
the convexification gap, which would allow for a tighter linear relaxation. However,
enforcement also needs to handle the less likely situations where neither separation nor
branching was successful. This leads to the following (less strategic) attempts.
6. If $v^b > \varepsilon$, then tol_lp is reduced to $\max(\varepsilon, v^b/2)$ and a resolve of the LP is triggered.
As in Step 2, the hope is that separation methods will work better if the LP solution
violates the variable bounds less.
7. If $v^h > \varepsilon$ and tol_lp $> \varepsilon$, then tol_lp is reduced to $\max(\varepsilon, v^b/2)$ and a resolve of the LP
is triggered. The hope here is that i) less tolerance on the feasibility of previously
generated cuts may lead to a feasible solution, and ii) more cuts can be added if the
minimal required violation is reduced.
8. Domain propagation (Section 4.2.8) is run in the hope that some bound change is
discovered that had not been found earlier in the separation-and-propagation loop for
the current node. This bound change may separate the current LP solution or
influence the next separation attempts.
9. Any unfixed variable in violated nonlinear constraints is registered as an external
branching candidate. SCIP then branches on one of these variables, the hope being that
infeasibilities in the child nodes will be easier to resolve. Note that when the domain
width of a variable is reduced to less than ε, then the variable is treated as if fixed
to a single value.
10. If all variables in violated constraints are fixed, then it may be the overestimation
of variable bounds that prevented domain propagation from concluding that the current
node is infeasible. The node is cut off and a message is issued to the log.
⁴ The attentive reader observes that the constraint handler is partially responsible for its own misery
here by naively replacing each monomial by an auxiliary variable. Adding an automated scaling for
newly introduced variables or being more considerate in the simplification step may help here.
The constraint handler collects statistics on how often it added “weak” cuts, tightened
the LP feasibility tolerance (tol_lp is reset to tol_feas whenever processing of a new node
starts), branched on unfixed variables, etc. The occurrence of such behavior is
an indication that SCIP has numerical difficulties with the instance. To see these
statistics, enable parameter table/cons_nonlinear/active.
Finally, if the constraint handler has to enforce a solution of a relaxation other than
the LP (ENFORELAX), then almost the same algorithm is run as for enforcing LP solutions.
The only differences are that i) tol_feas is used instead of tol_lp as the minimal required cut
violation and ii) the reduction of tol_lp is omitted. Note that the enforcement of relaxation
solutions has not been tested and would probably require some patching up to work
reliably.
4.2.12 Branching
The handler for nonlinear constraints now includes its own branching rule to select a
variable for branching among a number of candidates. The candidates are variables
that usually appear in nonconvex expressions of violated nonlinear constraints and are
collected while trying to find a cutting plane that separates a given relaxation solution
(Step 4 in the previous section). Branching on such a variable should reduce the gap
that is introduced by convexifying the nonconvex expression in both children because
this gap is typically proportional to the domain width.
Mapping Constraint Violation onto Variables Within the ESTIMATE and ENFO callbacks
of a nonlinear handler, the handler should register with the constraint handler those
variables of (MINLP^lp_ext) where branching could potentially help to produce tighter
estimators or cutting planes. With the branching candidates, a “violation score” is enclosed,
which typically is the relative violation of the nonlinear constraint in (MINLP^lp_ext) that
is currently handled,
\[
  s^v := \frac{|h_i^{lp}(\hat x, \hat w_{i+1}^{lp}, \dots, \hat w_{m^{lp}}^{lp}) - \hat w_i^{lp}|}{\max(1, |\hat w_i^{lp}|)}. \tag{18}
\]
This value serves as a proxy for the convexification gap associated with $h_i^{lp}(\cdot)$.
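The violation score (18) is easy to compute from the values of the expression and its auxiliary variable; a minimal sketch (illustrative function name, not SCIP's API):

```python
def violation_score(h_value, w_value):
    """Relative violation of h_i^lp(...) = w_i^lp at a relaxation
    solution, as in (18): |h - w| / max(1, |w|)."""
    return abs(h_value - w_value) / max(1.0, abs(w_value))

# the score equals the absolute violation when |w| <= 1 and becomes
# a relative violation otherwise
s = violation_score(3.5, 2.0)  # |3.5 - 2.0| / max(1, 2) = 0.75
```

The division by $\max(1, |\hat w_i^{lp}|)$ keeps the score meaningful for constraints of very different magnitudes.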
For each branching candidate, the number of violation scores that have been added,
the maximal score, and the sum of scores are stored. If a nonlinear handler registers only
one branching candidate for an expression, then the value $s^v$ can be added to the score
of that variable immediately. For a multivariate function $h_i^{lp}(\cdot)$, several candidates may
be registered, which requires distributing $s^v$ onto several variables. Let $x_{i_1}, \dots, x_{i_k}$,
$\{i_1, \dots, i_k\} \subseteq N$, be such a set of variables, and let $k_u$ be the number of unbounded
variables in this set. If $k_u > 0$, then an equal part of the violation score is assigned to each
unbounded variable. That is, variable $x_{i_j}$ is assigned the score
\[
  \begin{cases} s^v / k_u, & \text{if } \underline{x}_{i_j} = -\infty \text{ or } \overline{x}_{i_j} = \infty, \\ 0, & \text{otherwise.} \end{cases} \tag{19}
\]
Hence, only unbounded variables are considered for branching. This is because the
computation of a linear outer-approximation of (16) often depends on the presence
of variable bounds. If all variables are bounded, the following variable weights are
considered instead:
\[
  \lambda_j := \begin{cases} \max\Big(0.05,\ \frac{\min(\hat x_{i_j} - \underline x_{i_j},\, \overline x_{i_j} - \hat x_{i_j})}{\overline x_{i_j} - \underline x_{i_j}}\Big), & \text{if } \underline x_{i_j} \neq \overline x_{i_j}, \\ 0, & \text{otherwise,} \end{cases} \qquad j = 1, \dots, k. \tag{20}
\]
The value $\lambda_j \in [0.05, 0.5]$ measures the “midness” of the current solution point with respect to
the variable's domain. Larger shares of the violation score are then assigned to variables
that are closer to the middle of their domain:
\[
  \frac{\lambda_j}{\sum_{j'=1}^k \lambda_{j'}}\, s^v. \tag{21}
\]
This choice is inspired by the observation that the convexification gap is typically smallest
at the boundary of the domain. Further, since a value close to $\hat x_{i_j}$ is typically selected
as branching point, this choice prefers variables that lead to children in the branch-and-bound
tree with similar domain sizes. The following alternatives to the weights (20)
for a bounded unfixed variable $x_{i_j}$ can be chosen (parameter constraints/nonlinear/branching/violsplit):
− uniform: $1.0$
− domain width: $\overline x_{i_j} - \underline x_{i_j}$
− logarithmic scale of domain width, with $d := \overline x_{i_j} - \underline x_{i_j}$:
\[
  \begin{cases} 10 \log_{10}(d), & \text{if } d \ge 10, \\ \frac{1}{-10 \log_{10}(d)}, & \text{if } d \le 0.1, \\ d, & \text{otherwise.} \end{cases}
\]
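The midness-based distribution (20)–(21) can be sketched as follows; the Python names are illustrative, not SCIP's API:

```python
# Sketch of distributing a violation score s_v onto bounded branching
# candidates via the "midness" weights (20)-(21); illustrative only.

def midness_weight(xhat, lb, ub):
    """lambda_j from (20): in [0.05, 0.5], largest when xhat is at the
    domain midpoint, 0 for fixed variables."""
    if lb == ub:
        return 0.0
    return max(0.05, min(xhat - lb, ub - xhat) / (ub - lb))

def distribute_score(s_v, candidates):
    """Split s_v among (xhat, lb, ub) candidates proportionally to
    their midness weights, as in (21)."""
    lams = [midness_weight(*c) for c in candidates]
    total = sum(lams)
    return [s_v * lam / total for lam in lams]

# a variable at its domain midpoint gets a larger share than one
# close to a bound
shares = distribute_score(1.0, [(0.5, 0.0, 1.0), (0.9, 0.0, 1.0)])
```

With the solution values above, the first variable (at the midpoint, $\lambda = 0.5$) receives five times the share of the second ($\lambda = 0.1$).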
Auxiliary Variables While the choice of notation in the previous section implied that
violation scores would only be distributed onto original variables $x_i$, $i \in N$, it is clear
that the same formulas can be used if some or all variables are auxiliary variables of
the extended formulation (MINLP^lp_ext). However, recall that an auxiliary variable $w_i^{lp}$
is essentially just a proxy for a subexpression that is defined with respect to original
variables and other auxiliary variables $w_{i'}^{lp}$, $i' > i$. Due to this construction, branching
on original variables $x_i$ is usually preferred, as this tightens not only the bounds on
$x_i$ directly but also the bounds on one or several auxiliary variables implicitly (see
Section 4.2.8). On the other hand, there may be situations where branching on auxiliary
variables could be preferable (after all, such branching could tighten bounds on original
variables via domain propagation, too), as it has a more direct effect on the bounds on
auxiliary variables. As we have not come up with an intuitive criterion for when to allow
branching on auxiliary variables, currently only the minimal depth required for nodes
in the branch-and-bound tree to allow branching on auxiliary variables can be specified
(parameter constraints/nonlinear/branching/aux). The default is to never branch
on auxiliary variables, though. Therefore, when a nonlinear handler registers a set of
variables and a violation score for branching, each auxiliary variable $w_i^{lp}$ in this set is
replaced by the variables that appear in $h_i^{lp}(x, w_{i+1}^{lp}, \dots, w_{m^{lp}}^{lp})$. This is repeated
until only original variables are left. The violation score is then distributed among this
set of original variables.
If, during enforcement, separation failed to find a cut (recall Step 4 in Section 4.2.11),
all variables with an assigned violation score are collected by default. Optionally, only
candidates from constraints whose violation is at least a certain factor of $v^g$ are considered for
branching. However, this factor is 0 by default (parameter constraints/nonlinear/branching/highviolfactor).
Branching Candidate Scores Let $x_{i_1}, \dots, x_{i_k}$, $\{i_1, \dots, i_k\} \subseteq N$, be the set of branching
candidates. With each candidate, up to five different scores are associated.
The violation score was already introduced in the previous paragraph. When several
violation scores have been added for a variable, then currently the sum of these values
is used.
A value $\psi^+_{i_j}/\psi^-_{i_j}$ is deemed reliable if it has been updated at least twice (constraints/
nonlinear/branching/pscostreliable). The pseudo-cost score is not computed for
problems with a constant objective function ($c = 0$ in (MINLP)).
The domain score aims at giving preference to variables with a domain that is neither
very large nor very small. The motivation is that relatively large domains may require
many branching operations until the domain is small enough to allow for a useful linear
relaxation, and branching on relatively small domains may not reduce the convexification
gap considerably anymore. The domain score is therefore largest for domains of width 1
and slowly decreases for larger and smaller domains:
\[
  s^b_j := \begin{cases} \log_{10}\big(2 \cdot 10^{20} / (\overline x_{i_j} - \underline x_{i_j})\big), & \text{if } \overline x_{i_j} - \underline x_{i_j} \ge 1, \\ \log_{10}\big(2 \cdot 10^{20} \max(\varepsilon,\, \overline x_{i_j} - \underline x_{i_j})\big), & \text{otherwise.} \end{cases}
\]
The appearance of $10^{20}$ in this formula is due to the implicit bound of $10^{20}$ (numerics/infinity)
that SCIP applies to unbounded variables. Thus, in this formula, $\underline x_{i_j}$ and
$\overline x_{i_j}$ should be understood as $-10^{20}$ and $10^{20}$, respectively, if at infinity.
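A sketch of the domain score, with infinite bounds clamped to $\pm 10^{20}$ as described above (the epsilon value below is a stand-in, not SCIP's actual setting):

```python
import math

# Sketch of the domain score s^b (illustrative): largest for domain
# width 1, decaying for both wider and narrower domains.  Infinite
# bounds are replaced by +/- 1e20, mirroring SCIP's numerics/infinity.
INF = 1e20
EPS = 1e-9  # stand-in for SCIP's epsilon; the actual value may differ

def domain_score(lb, ub):
    lb = max(lb, -INF)
    ub = min(ub, INF)
    width = ub - lb
    if width >= 1.0:
        return math.log10(2e20 / width)
    return math.log10(2e20 * max(EPS, width))

# width 1 yields the maximal score log10(2e20); both a huge and a
# tiny domain score lower
```

Both branches agree at width 1, so the score is continuous there and peaks at that width.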
The integrality score aims at giving preference to variables of integral type, because
the domains of integer branching variables do not overlap in the child nodes.
Further, binary variables are preferred over integer variables, as branching on a binary
variable fixes it in both children. The score is defined as
\[
  s^i_j := \begin{cases} 1.0, & \text{if } i_j \in I,\ \underline x_{i_j} = 0,\ \overline x_{i_j} = 1, \\ 0.1, & \text{if } i_j \in I,\ \underline x_{i_j} \neq 0 \text{ or } \overline x_{i_j} \neq 1, \\ 0.01, & \text{if } i_j \in N \setminus I,\ x_{i_j} \text{ has been marked as implicitly integer}, \\ 0.0, & \text{otherwise.} \end{cases}
\]
Finally, the dual score is a coarse idea that tries to evaluate the importance of
violation scores (18) from the perspective of the dual bound that the LP relaxation
provides. Assume that for a constraint $h_i^{lp}(x, w_{i+1}^{lp}, \dots, w_{m^{lp}}^{lp}) \le w_i^{lp}$ of (MINLP^lp_ext),
a cut $\ell(x, w_{i+1}^{lp}, \dots, w_{m^{lp}}^{lp}) \le w_i^{lp}$, where $\ell(\cdot)$ is a linear underestimator of $h_i^{lp}(\cdot)$, was
added to the LP. If $\mu$ denotes the dual variable associated with this cut in the LP, then
this cut contributes $\mu(\ell(x, w_{i+1}^{lp}, \dots, w_{m^{lp}}^{lp}) - w_i^{lp})$ to the Lagrangian function of the LP
relaxation. If, instead of the cut, the function $h_i^{lp}(\cdot)$ could have been used in the LP, then
this would change the value of the Lagrangian function by
$\mu(h_i^{lp}(x, w_{i+1}^{lp}, \dots, w_{m^{lp}}^{lp}) - \ell(x, w_{i+1}^{lp}, \dots, w_{m^{lp}}^{lp}))$. Therefore, this product of dual variable and convexification gap
is used to evaluate the influence that the linear relaxation of this nonlinear constraint
has on the dual bound provided by the LP.
In the current experimental implementation, the convexification gap
\[
  |h_i^{lp}(\hat{\hat x}, \hat{\hat w}_{i+1}^{lp}, \dots, \hat{\hat w}_{m^{lp}}^{lp}) - \ell(\hat{\hat x}, \hat{\hat w}_{i+1}^{lp}, \dots, \hat{\hat w}_{m^{lp}}^{lp})|
\]
in the LP solution $(\hat{\hat x}, \hat{\hat w})$ at the time the cut is generated is stored together with the cut.
(The absolute value is used to accommodate overestimators from the case where $\lessgtr_i$
is $\ge$.) To compute the dual score $s^d_j$ of a variable $x_{i_j}$, for all rows in the LP that contain
$x_{i_j}$ and that were generated from a nonlinear constraint of (MINLP^lp_ext), the quantities
$|\hat\mu(h_i^{lp}(\hat{\hat x}, \hat{\hat w}_{i+1}^{lp}, \dots, \hat{\hat w}_{m^{lp}}^{lp}) - \ell(\hat{\hat x}, \hat{\hat w}_{i+1}^{lp}, \dots, \hat{\hat w}_{m^{lp}}^{lp}))|$ are added. Here, $\hat\mu$ refers to the dual
value of the cut in the current LP solution. The current implementation has a number
of disadvantages that will need to be addressed before the dual score could be usable
by default. For example, it would obviously be better to use the convexification gap in
the current LP solution instead of at $\hat{\hat x}$. Further, cuts may be defined in terms of auxiliary
variables, but branching is done on original variables only. Thus, the replacement of
auxiliary variables by original variables (see paragraph “Auxiliary Variables” above)
would need to be considered here as well.
In a final step, the scores $s^v_j, s^p_j, s^b_j, s^i_j, s^d_j$ are aggregated into a single score for each
variable. For that, weights $\gamma^v, \gamma^p, \gamma^b, \gamma^i, \gamma^d$ are used, which can be set by parameters
constraints/nonlinear/branching/*weight and default to $\gamma^v = 1.0$, $\gamma^p = 1.0$, $\gamma^b = 0$, $\gamma^i = 0.5$, $\gamma^d = 0$.
Since the scores can be of different magnitudes, they are scaled
by the maximal score in each category. Thus, let $s^v_{\max} := \max_{j=1,\dots,k} s^v_j$ and similarly for
$s^p_{\max}, s^b_{\max}, s^i_{\max}, s^d_{\max}$. Further, the case that pseudo-cost scores may not be available
for every variable needs to be considered. Therefore, for a variable where pseudo-cost
scores are available, the final score is computed as
\[
  s^f_j := \frac{\gamma^v \frac{s^v_j}{s^v_{\max}} + \gamma^p \frac{s^p_j}{s^p_{\max}} + \gamma^b \frac{s^b_j}{s^b_{\max}} + \gamma^i \frac{s^i_j}{s^i_{\max}} + \gamma^d \frac{s^d_j}{s^d_{\max}}}{\gamma^v + \gamma^p + \gamma^b + \gamma^i + \gamma^d}.
\]
If a pseudo-cost score is not available, then the other scores are magnified:
\[
  s^f_j := \frac{\gamma^v \frac{s^v_j}{s^v_{\max}} + \gamma^b \frac{s^b_j}{s^b_{\max}} + \gamma^i \frac{s^i_j}{s^i_{\max}} + \gamma^d \frac{s^d_j}{s^d_{\max}}}{\gamma^v + \gamma^b + \gamma^i + \gamma^d}.
\]
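The aggregation can be sketched as follows (illustrative Python, not SCIP's API). Dropping an unavailable category from both numerator and denominator reproduces the "magnified" variant of the formula:

```python
# Sketch of aggregating per-category scores into a final score.  Each
# category is normalized by its maximum over all candidates; categories
# without data (e.g. pseudo-costs) drop out of numerator and denominator.

def final_scores(scores, weights):
    """scores: dict category -> list of per-candidate scores, or None
    if unavailable; weights: dict category -> gamma weight."""
    k = len(next(s for s in scores.values() if s is not None))
    maxima = {c: max(s) for c, s in scores.items() if s is not None}
    result = []
    for j in range(k):
        num, den = 0.0, 0.0
        for c, gamma in weights.items():
            if scores[c] is None or maxima[c] == 0.0:
                continue  # category unavailable, magnify the others
            num += gamma * scores[c][j] / maxima[c]
            den += gamma
        result.append(num / den if den > 0 else 0.0)
    return result

# default weights: violation and pseudo-cost 1.0, integrality 0.5
weights = {"v": 1.0, "p": 1.0, "b": 0.0, "i": 0.5, "d": 0.0}
scores = {"v": [2.0, 1.0], "p": None, "b": [5.0, 5.0],
          "i": [1.0, 0.1], "d": None}
fs = final_scores(scores, weights)
```

With pseudo-costs unavailable, the first candidate (maximal violation and integrality scores) receives the maximal final score of 1.0.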
Branching Variable and Coordinate Since the variable scores are more a heuristic
guideline than a clear indication which variable is “best”, the code chooses uniformly at
random from all variables with final score at least $0.9 \max_{j=1,\dots,k} s^f_j$ (constraints/nonlinear/branching/highscorefactor).
This allows exploiting performance variability
due to branching decisions by changing the seed for the random number generator
(randomization/randomseedshift).
The branching point selection rule has not been changed since the last SCIP release.
For a bounded variable $x_j$, a value $\tilde x_j$ between $\hat x_j$ and $\frac{1}{2}(\underline x_j + \overline x_j)$ is chosen; see also
Section 4.4.5 of the SCIP Optimization Suite 7.0 release report [35]. Two child nodes
are created, one with $x_j \le \tilde x_j$ and another with $x_j \ge \tilde x_j$, if $j \notin I$. For $j \in I$, the domains
are ensured to be disjoint ($x_j \le \lfloor \tilde x_j \rfloor$, $x_j \ge \lfloor \tilde x_j \rfloor + 1$).
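The selection among high-scoring candidates and the creation of child bounds can be sketched as follows (hypothetical helper names, illustrative only):

```python
import math
import random

# Sketch of the final selection step: pick uniformly at random among
# candidates whose final score is within the factor 0.9 of the best,
# then derive the child-node bounds from the branching point.

def select_candidate(final_scores, rng, highscorefactor=0.9):
    threshold = highscorefactor * max(final_scores)
    best = [j for j, s in enumerate(final_scores) if s >= threshold]
    return rng.choice(best)

def child_bounds(xtilde, is_integer):
    """Bounds added in the two children for branching point xtilde."""
    if is_integer:
        # disjoint domains for integer variables
        return ("x <= %d" % math.floor(xtilde),
                "x >= %d" % (math.floor(xtilde) + 1))
    return ("x <= %g" % xtilde, "x >= %g" % xtilde)

rng = random.Random(42)  # analogue of randomization/randomseedshift
j = select_candidate([0.95, 1.0, 0.5], rng)  # j is 0 or 1, never 2
```

Randomizing the choice among near-best candidates is what exposes the performance variability mentioned above when the seed changes.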
4.3 Nonlinear Handler for Quadratics
The quadratic nonlinear handler detects quadratic expressions, provides specialized domain
propagation, and generates intersection cuts. A detected quadratic expression is stored in the form
\[
  q(y) = \sum_{i=1}^k q_i(y) \quad \text{with} \quad q_i(y) = a_i y_i^2 + c_i y_i + \sum_{j \in P_i} b_{i,j} y_i y_j. \tag{22}
\]
For domain propagation ((MINLP^dp_ext)), the nonlinear handler requests an auxiliary variable
for any $y_i$ that is an expression and not yet a variable, but with two notable exceptions.
If a variable $y_i$ appears only in a square term of (22) ($a_i \neq 0$, $c_i = 0$, $i \notin P_{i'}$ for all
$i' = 1, \dots, k$), then an auxiliary variable is introduced for $y_i^2$ instead of $y_i$. Similarly,
if two variables $y_i$ and $y_j$ appear only in one bilinear term $y_i y_j$ ($a_i = 0$, $a_j = 0$, $c_i = 0$, $c_j = 0$,
$P_i = \{j\}$ or $P_j = \{i\}$), then an auxiliary variable is introduced for $y_i y_j$
instead of $y_i$ and $y_j$. That is, a non-propagable part of (22) is split off and treated as if
linear, since this part does not suffer from the dependency problem, and sometimes better
domain propagation routines are available for the single terms $y_i^2$ or $y_i y_j$ (for an example,
see the bilinear nonlinear handler described in Section 4.5). As an example, consider
$xy + z^2 + z$, which is propagable because $z$ appears twice. However, for (MINLP^dp_ext),
the reformulation $w + z^2 + z$, $w = xy$, is applied. The quadratic nonlinear handler
then handles domain propagation for $w + z^2 + z$, while either the default or the bilinear
nonlinear handler handles domain propagation for $w = xy$. An additional advantage of
this division of work is that for other expressions in which $xy$ appears, the variable $w$ and its
domain information can be reused.
For separation ((MINLP^lp_ext)), the nonlinear handler registers itself for participation
if intersection cuts are enabled (nlhdlr/quadratic/useintersectioncuts, currently
disabled by default), no other nonlinear handler (for example, the SOC nonlinear handler)
handles separation yet, and the corresponding constraint in (MINLP^lp_ext) is nonconvex.
To decide the latter, the eigenvalues and eigenvectors of the quadratic coefficient matrix
(defined by the $a_i$ and $b_{i,j}$ of (22)) are calculated via LAPACK and stored for later use. To
construct (MINLP^lp_ext), the nonlinear handler requests an auxiliary variable ($w^{lp}$) for any
$y_i$ that is an expression and not yet a variable. Thus, even when the quadratic nonlinear
handler participates in both domain propagation and separation, the created extended
formulations may differ if parts of (22) are not propagable. This flexibility is a feature
of the current design.
A nonlinear handler can choose whether it wants to be solely responsible for domain
propagation or separation, or only wants to participate in addition to other routines. The
separation by the quadratic nonlinear handler is such a case, i.e., the nonlinear handler
informs the constraint handler that other possible nonlinear handlers should also be
requested for separation. Currently, this means that the default and bilinear nonlinear
handlers will become active, auxiliary variables will be introduced for each square and
bilinear term, and corresponding under- and overestimators will be computed by these
routines if an intersection cut was not generated. These nonlinear handlers are also the
only ones that register branching candidates. For intersection cuts, bound information
is not used explicitly by default, and the quadratic nonlinear handler does not register
variables for branching.
The goal of domain propagation is to use existing bounds on y and q(y) in (22) to derive
possibly tighter bounds on q(y) and y, respectively. The implementation is similar to
the one of cons_quadratic in SCIP 7 and before [113], but backward propagation has
been extended. For simplicity, the special treatment for some square or bilinear terms
as mentioned in the previous section is disregarded here.
In backward propagation, an interval for each $y_i$ is computed by reduction to and solving of a univariate quadratic interval equation [24]:
\[
  a_i y_i^2 + \Big(c_i + \sum_{j \in P_i} b_{i,j} [\underline y_j, \overline y_j]\Big) y_i \in [\underline q, \overline q] - \sum_{i'=1,\, i' \neq i}^k [\underline q_{i'}, \overline q_{i'}]. \tag{23}
\]
A downside of this approach is that bounds for variables that appear less often may
not be deduced. For example, consider $y_1^2 + y_1 y_2 + y_1 y_3 + y_2 y_3 + y_3$. As $y_2$ has fewer
appearances than $y_1$ and $y_3$, this quadratic gets partitioned into $q_1(y) = y_1^2 + y_1 y_2 + y_1 y_3$,
$q_2(y) = 0$, and $q_3(y) = y_3 + y_3 y_2$. Therefore, no bounds are computed for $y_2$ in
backward propagation. The quadratic constraint handler of SCIP 7 handled the case
of $q_i \equiv 0$ in certain situations where $y_i$ appeared in only one bilinear term. For SCIP 8,
this has been generalized. In the example, a bound on $y_2$ is obtained by rewriting
as $y_2 + y_3 \in ([\underline q, \overline q] - [\underline q_3, \overline q_3])/y_1 - y_1$, finding the min/max of the function on the right-hand
side, and using this interval for backward propagation on $y_2 + y_3$. In general, after
solving (23), the quadratic equation $q(y) \in [\underline q, \overline q]$ is interpreted as
\[
  c_i + \sum_{j \in P_i} b_{i,j} y_j \in \frac{1}{y_i}\Big([\underline q, \overline q] - \sum_{i'=1,\, i' \neq i}^k [\underline q_{i'}, \overline q_{i'}]\Big) - a_i y_i,
\]
the min/max of the univariate interval function on the right-hand side are calculated,
and the resulting interval is used for backward propagation on $c_i + \sum_{j \in P_i} b_{i,j} y_j$.
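The interval computation behind the example can be sketched with plain interval arithmetic (illustrative only; the helper names are not SCIP's API, and the division assumes the divisor interval excludes zero):

```python
# Sketch: bound y2 + y3 by evaluating ([q] - [q3]) / y1 - y1 over the
# domain of y1, using elementary interval arithmetic.

def i_sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def i_div(a, b):
    # assumes 0 is not contained in the divisor interval b
    q = [a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1]]
    return (min(q), max(q))

def rhs_interval(q, q3, y1):
    """Interval enclosure of ([q] - [q3]) / y1 - y1 for y1 in [y1]."""
    return i_sub(i_div(i_sub(q, q3), y1), y1)

# e.g. q(y) in [0, 10], q3 in [0, 4], y1 in [1, 2]
lo, hi = rhs_interval((0.0, 10.0), (0.0, 4.0), (1.0, 2.0))
```

The resulting interval for $y_2 + y_3$ can then be intersected with its current bounds, exactly as described for backward propagation above.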
For separation, assume the constraint of (MINLP^lp_ext) is $q(y) \le w$ with $q(y)$ as in (22)
and $w$ an auxiliary variable of (MINLP^lp_ext). Further, assume that $q(y)$ is nonconvex ($q(y)$
being convex is handled by the nonlinear handler for convex expressions; see Section 4.6).
The quadratic nonlinear handler implements the separation of intersection cuts [109, 11, 40]
for the set $S := \{(y, w) \in \mathbb{R}^{k+1} : q(y) \le w\}$ that is defined by this constraint.
Let $(\hat y, \hat w)$ be a basic feasible LP solution violating $q(y) \le w$. First, a convex inequality
$g(y, w) < 0$ is built that is satisfied by $(\hat y, \hat w)$, but by no point of $S$. This defines
a so-called $S$-free set $C = \{(y, w) \in \mathbb{R}^{k+1} : g(y, w) \le 0\}$, that is, a convex set with
$(\hat y, \hat w) \in \operatorname{int}(C)$ containing no point of $S$ in its interior. The quality of the resulting cut
depends highly on which $S$-free set is used. The tightest possible intersection cuts are
obtained by using maximal $S$-free sets, as proposed by Muñoz and Serrano [79].
By using the conic relaxation $K$ of the LP-feasible region defined by the nonbasic
variables at $(\hat y, \hat w)$, the intersection points between the extreme rays of $K$ and the
boundary of $C$ are computed. The intersection cut is then defined by the hyperplane going
through these points and separates $(\hat y, \hat w)$ from $S$. Adding this cut to the
LP relaxation excludes the violating point $(\hat y, \hat w)$ from the LP-feasible region and thus
enforces the quadratic constraint $q(y) \le w$. To obtain even better cuts, a strengthening
procedure is also implemented that uses the idea of the negative edge extension of
the cone $K$ [41]. A detailed description of how the (strengthened) intersection cuts are
implemented can be found in the paper by Chmiela et al. [22].
4.4 Nonlinear Handler for Second-Order Cones
The nonlinear handler for second-order cone (SOC) structures replaces and extends the
previous constraint handler for second-order cone constraints. It detects second-order
cone constraints in the original or extended formulation and provides separation by
means of a disaggregated cone reformulation.
4.4.1 Detection
Euclidean Norm If $i > m$, it is checked whether $h_i^{lp}(x)$ has the form
\[
  \sqrt{\sum_{j=1}^k (a_j y_j^2 + b_j y_j) + c} \tag{24}
\]
for some coefficients $a_j, b_j, c \in \mathbb{R}$, $a_j > 0$, and where $y_j$ is either an original variable ($x$)
or some subexpression of $h_i^{lp}(\cdot)$, $j = 1, \dots, k$, for some $k \ge 2$. Rewriting (24) reveals the
constraint
\[
  \sqrt{\sum_{j=1}^k \Big(\sqrt{a_j}\, y_j + \frac{b_j}{2\sqrt{a_j}}\Big)^2 + \Big(c - \sum_{j=1}^k \frac{b_j^2}{4 a_j}\Big)} \;\le\; w_i^{lp}. \tag{25}
\]
If $c - \sum_{j=1}^k \frac{b_j^2}{4 a_j} \ge 0$, then (25) has SOC-structure. Thus, the nonlinear handler requests
auxiliary variables for each $y_j$, $j = 1, \dots, k$, and declares that it will provide separation.
In a future version, any positive-semidefinite quadratic expression should be allowed as
the argument of $\sqrt{\cdot}$ in (24).
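The completion-of-squares check behind (24)–(25) can be sketched as follows (illustrative function name, not SCIP's detection code):

```python
import math

# Sketch of the Euclidean-norm detection check: complete the square in
# sum_j (a_j y_j^2 + b_j y_j) + c and test whether the shifted constant
# stays nonnegative, i.e. whether (24) can be rewritten as the SOC (25).

def soc_detect(a, b, c):
    if len(a) < 2 or any(aj <= 0 for aj in a):
        return None
    const = c - sum(bj * bj / (4 * aj) for aj, bj in zip(a, b))
    if const < 0:
        return None  # no SOC structure
    # coefficients of the terms (sqrt(a_j) y_j + b_j / (2 sqrt(a_j)))^2
    terms = [(math.sqrt(aj), bj / (2 * math.sqrt(aj)))
             for aj, bj in zip(a, b)]
    return terms, const

# sqrt(y1^2 + 2 y1 + y2^2 + 2): shifting gives (y1 + 1)^2 + y2^2 + 1
result = soc_detect([1.0, 1.0], [2.0, 0.0], 2.0)
```

With a larger linear coefficient, e.g. $b_1 = 4$, the shifted constant becomes negative and the detection correctly reports no SOC structure.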
If $i \le m$, then $w_i^{lp}$ is just a slack variable and the constraint is equivalent to
$\sqrt{\sum_{j=1}^k (a_j y_j^2 + b_j y_j) + c} \le \overline g_i$. In that case, the nonlinear handler does not become
active. Assuming $a_j > 0$ again, this will result in the extended formulation $\sqrt{w_0} \le \overline g_i$,
$\sum_{j=1}^k (a_j w_j + b_j y_j) + c \le w_0$, $y_j^2 \le w_j$, $j = 1, \dots, k$, where $w_0, \dots, w_k$ are new auxiliary
variables. We believe that separation for this formulation will be more efficient than
for (25) (the constraint $\sqrt{w_0} \le \overline g_i$ is easily enforced by domain propagation).
Further, it is checked whether $h_i^{lp}(x)$ has the form
\[
  \sum_{j=1}^k a_j y_j^2 - a_{k+1} y_{k+1}^2 + c
\]
with $a_j > 0$, $j = 1, \dots, k+1$. The constraint
\[
  \sum_{j=1}^k a_j y_j^2 - a_{k+1} y_{k+1}^2 + c \;\le\; w_i^{lp} \tag{26}
\]
has SOC-structure if $c - \underline w_i^{lp} \ge 0$. Thus, in this case the nonlinear handler requests
auxiliary variables for each $y_j$, $j = 1, \dots, k+1$, and declares that it will provide separation.
If $i \le m$, then the replacement of the slack variable $w_i^{lp}$ by $\underline w_i^{lp}$ will not be problematic,
since it is sufficient to enforce the original constraint $g_i(x) \le \overline g_i$ (recall $h_i^{lp} = g_i$, $\underline w_i^{lp} = \underline g_i$
initially). However, if $i > m$, then relaxing $w_i^{lp}$ to $\underline w_i^{lp}$ could mean that infeasibility
in (MINLP) cannot be resolved by enforcing (26). Therefore, if $i > m$, then the nonlinear
handler indicates to the constraint handler that separation should be requested from
other nonlinear handlers as well. In the current configuration, this introduces auxiliary
variables for each square term in (26) via the default nonlinear handler. The same
distinction between $i \le m$ and $i > m$ applies to the following two structures.
Similarly, if $h_i^{lp}(x)$ has the form
\[
  \sum_{j=1}^k a_j y_j^2 - a_{k+1}\, y_{k+1} y_{k+2} + c,
\]
then rewriting the bilinear term yields that the constraint
\[
  \sum_{j=1}^k a_j y_j^2 + \frac{a_{k+1}}{4} (y_{k+1} - y_{k+2})^2 - \frac{a_{k+1}}{4} (y_{k+1} + y_{k+2})^2 + c \;\le\; w_i^{lp}
\]
has SOC-structure if $c - \underline w_i^{lp} \ge 0$. Thus, in this case the nonlinear handler requests
auxiliary variables for each $y_j$, $j = 1, \dots, k+2$, and declares that it will provide separation.
Finally, via an eigenvalue decomposition of its coefficient matrix, it is checked whether $h_i^{lp}(x)$ can be written as
\[
  \sum_{j=1}^{k+1} \lambda_j (v_j^\top y + \beta_j)^2 + c.
\]
If exactly one eigenvalue $\lambda_j$ is negative and $c - \underline w_i^{lp} \ge 0$, then an SOC-structure has been detected; the nonlinear handler requests
auxiliary variables for each $y_j$, $j = 1, \dots, k$, and declares that it will provide separation.
4.4.2 Separation
The SOC constraint that has been detected before is stored in the form
\[
  \sqrt{\sum_{j=1}^k (v_j^\top y + \beta_j)^2} \;\le\; v_{k+1}^\top y + \beta_{k+1}. \tag{27}
\]
However, if there are many terms on the left-hand side of (27) ($k$ being large), then
many cuts can be required to provide a tight linear relaxation of (27). Thus, as suggested
by Vielma et al. [112], a disaggregation of the cone is used if $k \ge 3$:
\[
  (v_j^\top y + \beta_j)^2 \le z_j (v_{k+1}^\top y + \beta_{k+1}), \quad j = 1, \dots, k, \tag{28}
\]
\[
  \sum_{j=1}^k z_j \le v_{k+1}^\top y + \beta_{k+1}, \tag{29}
\]
where $z_1, \dots, z_k$ are new variables that are added to SCIP and marked as
“relaxation-only”. A solution $(\hat y, \hat z)$ that violates (27) also needs to violate (28) for some
$j \in \{1, \dots, k\}$ or (29). The latter is already linear and can be added as a cut. If a rotated
second-order cone constraint (28) is violated for some $j$, then it is transformed into
the standard form
\[
  \sqrt{4 (v_j^\top y + \beta_j)^2 + (v_{k+1}^\top y + \beta_{k+1} - z_j)^2} \;\le\; v_{k+1}^\top y + \beta_{k+1} + z_j.
\]
4.5 Bilinear Nonlinear Handler
The bilinear nonlinear handler identifies expressions of the form $y_1 y_2$, where $y_1$ and $y_2$ are
either non-binary variables of (MINLP^lp_ext) or other expressions. For a product $y_1 y_2$, the
expression handler for products already provides linear under- and overestimators and
domain propagation that is best possible when considering only the bounds $[\underline y_1, \overline y_1] \times [\underline y_2, \overline y_2]$.
The nonlinear handler, however, can exploit linear inequalities over $y_1$ and $y_2$ to
provide possibly tighter linear estimates and variable bounds. These inequalities are
found by projection of the LP relaxation onto the variables $(y_1, y_2)$. For more details, see
Müller et al. [76].
4.6 Nonlinear Handlers for Convex and Concave Expressions
Two nonlinear handlers are available that try to detect convexity or concavity of a
given expression $h_i^{lp}(x)$ and provide appropriate linear under- and overestimators. The
naming of the nonlinear handlers may be slightly confusing, as the convex nonlinear
handler checks for concavity of $h_i^{lp}(x)$ if overestimators are desired, and the concave
nonlinear handler checks for convexity if underestimators are desired. The detection
algorithms of both nonlinear handlers are similar, though, so they are discussed
together here. The linear estimators are computed differently, however.
In the following, only the underestimating case ($\lessgtr_i$ being either $\le$ or $=$) is considered.
The overestimating case is handled analogously. The nonlinear handlers do not
contribute to domain propagation so far.
4.6.1 Detection
Assume the constraint handler requests that underestimators of $h_i^{lp}(x)$ need to be found.
The convex nonlinear handler then seeks to find subexpressions of $h_i^{lp}(x)$ that need to be
replaced by auxiliary variables $w_{i+1}^{lp}, \dots$ such that the remaining expression $h_i^{lp}(x, w_{i+1}^{lp}, \dots)$
is convex. Similarly, the concave nonlinear handler seeks for $h_i^{lp}(x, w_{i+1}^{lp}, \dots)$ to be
concave. In both cases, the detection algorithm can aim for the remaining expression to be
as large as possible. This point will be revisited later.
To construct a maximal convex subexpression of $h_i^{lp}(x)$, the usual convexity and
concavity detection rules are inverted and applied to $h_i^{lp}(x)$ in reverse order. To do so,
the expression is traversed in depth-first-search order, starting from the root of $h_i^{lp}(x)$.
With each subexpression, the requirement of it being convex and/or concave is associated.
For the root, this will be convexity. When a subexpression is considered, it is checked
whether the subexpression can have the required curvature. This is done by formulating
requirements on the convexity/concavity of the children of the subexpression. If there are
no conditions under which a subexpression can have the required curvature, then it is
marked to be replaced by an auxiliary variable.
As an example, consider the function −√(exp(x))·√y + exp(x) with lower bound y̲ = 0. First, it will be checked under which conditions on its arguments the sum will be convex. This will create the requirements "√(exp(x))·√y must be concave" and "exp(x) needs to be convex". Checking the former, the special structure √· · √· (a product of two power expressions, both exponents being 0.5) may be detected and the requirements "exp(x) must be concave" and "y must be concave" are created. The check for "exp(x) must be concave" fails, i.e., there are no conditions on x (other than x̲ = x̄) such that exp(x) is concave. Therefore, this appearance of exp(x) is marked for replacement by an auxiliary variable. The check for "y must be concave" succeeds, since the function y ↦ y is both convex and concave. The remaining check for "exp(x) needs to be convex" succeeds under the new condition "x needs to be convex", which is satisfied. Thus, the resulting maximal convex subexpression is −√w·√y + exp(x), where w is a new auxiliary variable and w ≤ exp(x) is added to the extended formulation. As this example has shown, it is possible that several appearances of the same subexpression (exp(x)) are treated differently, depending on what requirements are imposed on the subexpression by its parents.
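The recursive requirement propagation in this example can be sketched in a few lines. The following is a minimal Python illustration, not SCIP's actual implementation: the node types, the two curvature rules, and all names are simplifications chosen to reproduce the worked example above.

```python
# Minimal sketch of curvature-requirement propagation (illustrative only).
CONVEX, CONCAVE = "convex", "concave"

class Expr:
    def __init__(self, op, children=(), coef=1.0, name=None):
        self.op = op                # "var", "sum", "exp", "sqrtprod"
        self.children = list(children)
        self.coef = coef            # coefficient of this term inside a sum
        self.name = name

def flip(req):
    return CONCAVE if req == CONVEX else CONVEX

def detect(expr, required, to_replace):
    """Propagate the curvature requirement to expr; if no rule can satisfy
    it, mark expr for replacement by an auxiliary variable (after which it
    counts as a variable and is hence both convex and concave)."""
    if expr.op == "var":
        return                      # y -> y is convex and concave
    if expr.op == "sum":
        for child in expr.children:
            # a term with negative coefficient needs the opposite curvature
            detect(child, required if child.coef > 0 else flip(required),
                   to_replace)
    elif expr.op == "exp" and required == CONVEX:
        detect(expr.children[0], CONVEX, to_replace)   # exp(convex) is convex
    elif expr.op == "sqrtprod" and required == CONCAVE:
        for child in expr.children:  # sqrt(u)*sqrt(v) concave if u, v concave
            detect(child, CONCAVE, to_replace)
    else:
        to_replace.add(expr)        # no rule applies: introduce an auxiliary

# Reproduce the example: -sqrt(exp(x))*sqrt(y) + exp(x), root required convex.
exp_inner = Expr("exp", [Expr("var", name="x")])       # exp(x) under the sqrt
root = Expr("sum", [Expr("sqrtprod", [exp_inner, Expr("var", name="y")],
                         coef=-1.0),
                    Expr("exp", [Expr("var", name="x")])])
marked = set()
detect(root, CONVEX, marked)
assert marked == {exp_inner}   # only this occurrence of exp(x) is replaced
```

As in the text, only the occurrence of exp(x) below the square-root product is marked, matching the extended formulation −√w·√y + exp(x) with w ≤ exp(x).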
Four checks of whether a subexpression can have a required curvature are currently implemented. These are called in the given order.
an original variable or a subexpression. With the same methods as in the nonlinear handler for quadratics, the signs of the eigenvalues of the matrix of quadratic coefficients of q(y) can be checked to decide whether q(y) is convex or concave. If q(y) has the desired curvature, then it is required that every y_j is linear.
The check on quadratics is currently disabled for the concave nonlinear handler. It
is not clear yet under which conditions it is beneficial to compute underestimators on a
multivariate concave q(y) via the methods of the concave nonlinear handler instead of
handling each square and bilinear term of q(y) separately.
Expression Handler For an expression f(g_1(x), ..., g_k(x)), call the CURVATURE callback of the expression handler for f(·). If implemented and successful, then it provides convexity or concavity requirements for each g_j(x).
As has been pointed out by Tawarmalani and Sahinidis [108], a tighter linear relaxation of a convex set (in the sense that fewer cuts are required to achieve the same outer-approximation) can usually be obtained when an extended formulation is used for function compositions. For instance, for f(g(x)) with both f(·) and g(·) being convex, f(·) being monotonically increasing, and g(·) being nonlinear, it is beneficial to consider the extended formulation f(w), w ≥ g(x). This is easily achieved in the detection algorithm by changing the requirement on a subexpression from convex or concave to linear (parameter nlhdlr/convex/extendedform). Furthermore, the nonlinear handlers ignore expressions h_i^lp(·) that are a sum with more than one nonconstant term (parameters nlhdlr/{convex,concave}/detectsum), unless the sum is a quadratic expression with at least one bilinear term, for example, x^2 + 2xy + y^2.
For the concave nonlinear handler, however, the observation by Tawarmalani and Sahinidis [108] does not apply. Instead, the number of variables in the expression for which estimators need to be computed can be an issue. Therefore, auxiliary variables are requested here for multivariate linear subexpressions. That is, even though the concavity of log(x + y + z) can be recognized, the extended formulation log(w), w ⋚ x + y + z, is used. This way, only one-dimensional instead of three-dimensional underestimators need to be calculated.
Finally, if h_i^lp(x) were transformed by the nonlinear handler into h_i^lp(x, w_{i+1}^lp, ..., w_{m^lp}^lp) such that the corresponding expression has only original or auxiliary variables as children, then the detection of the nonlinear handler is reported as failed (parameter nlhdlr/{convex,concave}/handletrivial). Instead, the default nonlinear handler will provide linear estimates via the ESTIMATE callback of the expression handlers. We assume that these are more efficient than the generic implementation in the convex and concave nonlinear handlers.
4.6.3 Underestimators for Concave Expressions
β = β̃ − Σ_{j=1}^k α_j y̲_j. Since the CGLP typically has more rows than columns, the dual of the CGLP is formulated and solved. To increase the chance that αy + β is a facet of the convex envelope of f(y), the reference point is perturbed and moved into the interior of [y̲, ȳ].
At the moment, underestimators for concave functions in more than 14 variables
are not computed due to the size of the CGLP being exponential in k. In fact, the
detection algorithm in the concave nonlinear handler already returns unsuccessful if the
recognized concave expression has more than 14 variables. Dynamic row or column
generation methods could be added to overcome this limit [12].
Since the underestimator may not be tight at (ŷ, f (ŷ)), all variables are registered
as branching candidates by this nonlinear handler.
Note that the available expression handlers (see Section 4.1) do not include a handler for quotients, since they can equivalently be written using a product and a power expression. However, the default extended formulation for an expression y_1 y_2^{-1} is given by replacing y_2^{-1} by a new auxiliary variable w. The linear outer-approximation is then obtained by estimating y_1 w and y_2^{-1} separately. The quotient nonlinear handler can provide tighter estimates by checking whether a given function h_i^lp(x) can be cast as

    f(y) = (a y_1 + b) / (c y_2 + d) + e    (30)
4.7.1 Univariate Quotients (y_1 = y_2)
If −d/c ∉ [y̲_2, ȳ_2], then f(y) is either convex or concave on [y̲, ȳ]. Thus, under- and overestimators are computed via a tangent or a secant on the graph of f(y). If the singularity is in the domain of y, then no estimator can be computed.
For forward domain propagation, observe that the minimum and maximum of f(y) are attained at y̲ or ȳ if −d/c ∉ [y̲_2, ȳ_2]. It is therefore sufficient to evaluate f(y) at y̲ and ȳ to obtain f([y̲, ȳ]). If the singularity is in the domain of y, then no finite bounds on f(y) can be computed.
For backward domain propagation, let [f̲, f̄] be the bounds given for f(y). Inverting (30) yields

    y = (b − d[f̲, f̄]) / (c[f̲, f̄] − a).

This interval expression can be evaluated as in forward propagation.
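These interval computations can be sketched as follows. The helper names are illustrative, the shift by the constant e in the inversion is made explicit here (my reading of (30)), and the singularity −d/c is assumed to lie outside the variable domain.

```python
# Interval propagation for f(y) = (a*y + b)/(c*y + d) + e on [ylo, yhi],
# assuming -d/c lies outside [ylo, yhi] (names illustrative).

def forward(a, b, c, d, e, ylo, yhi):
    """f is monotone without a singularity in the domain, so evaluating at
    the two endpoints yields the image f([ylo, yhi])."""
    f = lambda y: (a * y + b) / (c * y + d) + e
    lo, hi = f(ylo), f(yhi)
    return min(lo, hi), max(lo, hi)

def backward(a, b, c, d, e, flo, fhi):
    """Invert f: with t = f - e, we get y = (b - d*t)/(c*t - a)."""
    g = lambda t: (b - d * t) / (c * t - a)
    lo, hi = g(flo - e), g(fhi - e)
    return min(lo, hi), max(lo, hi)

# Example: f(y) = y/(y + 1) on [1, 2] has image [1/2, 2/3]; inverting the
# image recovers the original domain (up to floating-point roundoff).
assert forward(1, 0, 1, 1, 0, 1, 2) == (0.5, 2 / 3)
lo, hi = backward(1, 0, 1, 1, 0, 0.5, 2 / 3)
assert abs(lo - 1.0) < 1e-9 and abs(hi - 2.0) < 1e-9
```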
This nonlinear handler creates strengthened cutting planes for constraints that depend on semi-continuous variables. A variable x_j, j ∈ N, is semi-continuous with respect to the binary indicator variable x_{j'}, j' ∈ I, if it is restricted to the domain [x̲_j^1, x̄_j^1] when x_{j'} = 1 and has a fixed value x_j^0 when x_{j'} = 0. In the rest of this subsection, the superscript 0 denotes the value of a semi-continuous variable at x_{j'} = 0.
Consider the constraint

    h_i^lp(x, w_{i+1}^lp, ...) ⋚ w_i^lp    (31)

and write h_i^lp(·) as a sum of its nonlinear and linear parts:

    h_i^lp(x, w_{i+1}^lp, ...) = h_i^nl(x_nl, w_nl^lp) + h_i^l(x_l, w_l^lp),

where h_i^nl(·) is a nonlinear function, h_i^l(·) is a linear function, x_nl and w_nl^lp are the vectors of variables x and w^lp, respectively, that appear only in the nonlinear part of h_i^lp, and x_l and w_l^lp are the vectors of variables x and w^lp, respectively, that appear only in the linear part of h_i^lp(·).
The perspective handler works on Constraint (31) if x_nl and w_nl^lp are semi-continuous with respect to the same indicator variable x_{j'}, and at least one other nonlinear handler provides estimation (ESTIMATE callback) for h_i^lp(·). Thus, a nonlinear handler that implements only the ENFO callback, such as, for example, the SOC handler, is not suitable.
Semi-continuity of a variable x_j is detected from bounds of the form

    x_j ≤ α^(u) x_{j'} + β^(u),
    x_j ≥ α^(ℓ) x_{j'} + β^(ℓ).

If β^(u) = β^(ℓ), then x_j is a semi-continuous variable with x_j^0 = β^(u), x̲_j^1 = α^(ℓ) + β^(ℓ), and x̄_j^1 = α^(u) + β^(u).
This information can be obtained either from linear constraints in x_j and x_{j'} or by finding implicit relations between x_j and x_{j'}. Such relations can be detected by probing, which fixes x_{j'} to its possible values and propagates all constraints in the problem, thus detecting implications of x_{j'} = 0 and x_{j'} = 1. SCIP stores the implied bounds in a globally available data structure.
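A minimal sketch of turning two such implied bounds into semi-continuity data (function and variable names are illustrative, not SCIP's API):

```python
# Recover semi-continuity data of x_j from the implied bounds
#   x_j <= alpha_u * z + beta_u   and   x_j >= alpha_l * z + beta_l,
# where z is the binary indicator (illustrative sketch).

def semicontinuous_data(alpha_u, beta_u, alpha_l, beta_l, tol=1e-9):
    """Return (x0, lb1, ub1) if x_j is semi-continuous, else None."""
    if abs(beta_u - beta_l) > tol:
        return None                  # value at z = 0 is not fixed
    x0 = beta_u                      # fixed value when z = 0
    lb1 = alpha_l + beta_l           # lower bound when z = 1
    ub1 = alpha_u + beta_u           # upper bound when z = 1
    return x0, lb1, ub1

# x <= 5z and x >= 1z give a semi-continuous x with x0 = 0 and domain [1, 5]
# for z = 1; without matching offsets, no semi-continuity is deduced.
assert semicontinuous_data(5, 0, 1, 0) == (0, 1, 5)
assert semicontinuous_data(5, 1, 1, 0) is None
```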
The perspective nonlinear handler also detects semi-continuous auxiliary variables. Given h_i^lp(x, w_{i+1}^lp, ...) ⋚_i w_i^lp, where x, w_{i+1}^lp, ... are semi-continuous variables depending on the same indicator x_{j'}, the auxiliary variable w_i^lp can also be assumed to be semi-continuous, since it is valid to replace ⋚_i by =.
4.8.2 Separation
Suppose that the current relaxation solution violates Constraint (31). If the non-perspective nonlinear handlers claimed that estimators of h_i^lp(·) depend on variable bounds, typically because the function is nonconvex, then probing is first performed for x_{j'} = 1 in order to tighten the implied bounds on the variables x, w_{i+1}^lp, .... Linear underestimators (for "≤" constraints) or overestimators (for "≥" constraints) that are valid when x_{j'} = 1 are then obtained for the tightened bounds. This estimator ℓ(·) can be separated into parts corresponding to the nonlinear and linear variables of h_i^lp(·), respectively:

    ℓ(x, w_{i+1}^lp, ...) = ℓ^nl(x_nl, w_nl^lp) + ℓ^l(x_l, w_l^lp).
An extension procedure is applied to the nonlinear part to ensure it is valid and tight for x_{j'} = 0, while the linear part can remain unchanged since it shares none of the variables with the nonlinear part:
    ℓ^nl(x_nl, w_nl^lp) + (h_i^nl(x_nl^0, w_nl^lp,0) − ℓ^nl(x_nl^0, w_nl^lp,0)) (1 − x_{j'}) + ℓ^l(x_l, w_l^lp).
This extension ensures that the estimator is equal to h_i^lp(x, w_{i+1}^lp, ...) for x_{j'} = 0, x_nl = x_nl^0, and w_nl^lp = w_nl^lp,0, and equal to ℓ(x, w_{i+1}^lp, ...) for x_{j'} = 1. In the convex case, cuts
thus obtained are equivalent to the classic perspective cuts [28]. More details on the
implementation in SCIP can be found in the paper by Bestuzheva et al. [18].
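The extension formula can be illustrated with a one-dimensional sketch. The names and the concrete functions below are illustrative; the actual handler works on the vectors x_nl, w_nl^lp.

```python
# Perspective-style extension of a linear estimator l(v), valid for z = 1,
# of a function h(v) whose variable v is fixed to v0 when z = 0:
#   est(v, z) = l(v) + (h(v0) - l(v0)) * (1 - z)

def extended_estimator(l, h, v0):
    shift = h(v0) - l(v0)
    return lambda v, z: l(v) + shift * (1 - z)

# Example: h(v) = v^2 with v0 = 0 and the tangent l(v) = 2v - 1 at v = 1.
h = lambda v: v * v
l = lambda v: 2 * v - 1
est = extended_estimator(l, h, 0.0)
assert est(0.0, 0) == h(0.0)   # tight at the "off" point (z = 0, v = v0)
assert est(0.7, 1) == l(0.7)   # coincides with l when z = 1
```

This reproduces the two properties stated above: the extended estimator agrees with the function at the "off" point and with the original estimator when the indicator is one.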
A nonlinearity that appears frequently is a product of two variables and/or functions. The separator for Reformulation-Linearization Technique (RLT) cuts [7, 8, 9] for bilinear product relations in (MINLP^lp_ext) and the separators discussed in the following two sections focus on enforcing the relationship between a product of two variables (original or auxiliary) and a corresponding auxiliary variable. The RLT separator can additionally reveal linearized products between binary and continuous variables.
There exist variations of RLT that can be applied to arbitrary (not necessarily quadratic) polynomials [98]. This separator, however, deals with bilinear products only.
In the following, x refers to any variable of (MINLP^lp_ext) and X_{i,j} refers to the auxiliary variable (w^lp) that is associated with a constraint x_i x_j ⋚ X_{i,j} in (MINLP^lp_ext). Note that X_{i,j} may not exist in (MINLP^lp_ext) for every pair of x_i and x_j, even when x_i x_j appears in some constraint of (MINLP) (for example, auxiliary variables are not created for terms in convex quadratic constraints). Both X_{i,j} and X_{j,i} refer to the same variable.
Given a product relation X_{ij} = x_i x_j, where x_i ∈ [x̲_i, x̄_i] and x_j ∈ [x̲_j, x̄_j], and a linear constraint a^⊤x ≤ b, RLT cuts are derived by first multiplying the constraint by the nonnegative bound factors (x̄_i − x_i), (x_i − x̲_i), (x̄_j − x_j), and (x_j − x̲_j). For instance, consider multiplication by the factor (x̄_i − x_i), which yields a valid nonlinear inequality:
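As a sketch (the data layout and names are illustrative), the multiplication and the subsequent linearization via the X variables can be carried out as follows:

```python
# Multiply a^T x <= b by the bound factor (ub_i - x_i) and replace each
# product x_i * x_k by the variable X[i,k] (with X[i,i] standing for x_i^2).
# The result is the linear RLT cut
#   sum_k cx[k]*x_k + sum_k cX[k]*X[i,k] >= rhs.

def rlt_cut_upper_factor(a, b, i, ub_i):
    n = len(a)
    cx = [0.0] * n                   # coefficients of x_k
    cX = [0.0] * n                   # coefficients of X[i,k]
    # (ub_i - x_i)(b - a^T x) >= 0 expands to
    #   ub_i*b - ub_i*a^T x - b*x_i + sum_k a_k * (x_i x_k) >= 0
    for k in range(n):
        cx[k] -= ub_i * a[k]
        cX[k] += a[k]                # linearize x_i x_k -> X[i,k]
    cx[i] -= b
    return cx, cX, -ub_i * b         # cut: cx.x + cX.X >= -ub_i*b

# Example: x_0 + 2*x_1 <= 3 multiplied by (2 - x_0):
cx, cX, rhs = rlt_cut_upper_factor([1.0, 2.0], 3.0, 0, 2.0)
assert (cx, cX, rhs) == ([-5.0, -4.0], [1.0, 2.0], -6.0)
```

At a feasible point such as x = (1, 1) with exact products X[0,0] = X[0,1] = 1, the cut holds with equality here, since both the bound factor and the constraint slack vanish.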
Bilinear product relations in which one of the multipliers is binary can equivalently be
written via mixed-integer linear constraints. Likewise, MILP constraints representing
such relations can be identified in order to derive these implicit bilinear products.
Consider two linear constraints depending on the same three variables x_i, x_j, and x_k, where x_i is binary:

    a_1 x_i + b_1 x_k + c_1 x_j ≤ d_1,    (33a)
    a_2 x_i + b_2 x_k + c_2 x_j ≤ d_2.    (33b)
where the coefficients A, B, C, and D and the inequality sign are obtained by:
− setting xi to 1 in (33a) and (34), and requiring that the coefficients are similar for
each variable, and the constants are equal;
− setting xi to 0 in (33b) and (34), and similarly requiring equivalence;
− solving the linear system resulting from the first two steps.
SCIP analyses the linear constraints in the problem and stores all detected implicit products. RLT cuts that use these products may strengthen the default continuous relaxation {(x_i, x_j, x_k) : x_i ∈ [0, 1], (33)}.
4.9.2 Separation
Let (x̂, X̂) be the solution to be separated. In order to reduce the computational cost of RLT cut separation, SCIP takes into account the signs of the coefficients of linear constraints and the signs of product relation violations. In particular, when multiplying a constraint a^⊤x ≤ b by a bound factor, the resulting RLT cut can only be violated if a_k x̂_k x̂_j < a_k X̂_{kj}, that is, when replacing the product with the corresponding variable increases the violation of the inequality. This fact is used to ignore combinations of linear constraints and bound factors that cannot produce a violated cut, thus reducing the computational effort.
This is implemented via a row-marking algorithm which, for every variable x_i that participates in bilinear products, iterates over all variables x_j that appear in products together with x_i. When it encounters a violated product, the algorithm iterates over all linear rows in which x_j has a nonzero coefficient and stores them in a sparse sorted array together with marks indicating which bound factors of x_i they should be multiplied with. The cut generation algorithm then iterates over the array of marked rows and constructs RLT cuts from the products of each row with the suitable bound factors.
More details on the algorithms and implementation will be included in the upcoming
paper [6].
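The marking pass might be sketched as follows. This is a simplified illustration with plain dictionaries; the actual implementation uses sparse sorted arrays and a finer selection of bound factors.

```python
# For each violated product x_i * x_j, mark the linear rows containing x_j
# together with the bound-factor kind of x_i that should be tried.

def mark_rows(products, xhat, Xhat, rows_of_var):
    """products: iterable of index pairs (i, j); rows_of_var maps a variable
    index to the rows where it has a nonzero coefficient.
    Returns {row: set of (i, factor_kind)} describing candidate products."""
    marked = {}
    for i, j in products:
        viol = xhat[i] * xhat[j] - Xhat[i, j]
        if viol == 0:
            continue                 # relation holds; no cut possible here
        # the sign of the violation suggests which bound factor of x_i to try
        kind = "upper" if viol > 0 else "lower"
        for row in rows_of_var.get(j, ()):
            marked.setdefault(row, set()).add((i, kind))
    return marked

# One violated product x_0 * x_1, where x_1 appears in row 5 only:
marked = mark_rows([(0, 1)], {0: 1.0, 1: 1.0}, {(0, 1): 0.0}, {1: [5]})
assert marked == {5: {(0, "upper")}}
```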
Another new separator that enforces bilinear product relations in (MINLP^lp_ext) is sepa_minor. The notation introduced in the previous section is used.
A convex relaxation of the condition X = xx^⊤ is given by requiring X − xx^⊤ to be positive semidefinite. Separation for the set {(x, X) : X − xx^⊤ ⪰ 0} itself is possible, but cuts are typically dense and may include variables X_ij for products that do not exist in the problem [85]. Therefore, sepa_minor considers only principal 2×2 minors of X − xx^⊤, which also need to be positive semidefinite. By the Schur complement, this means that the condition

    A_ij(x, X) := ( 1     x_i   x_j
                    x_i   X_ii  X_ij
                    x_j   X_ij  X_jj )  ⪰ 0    (35)

needs to hold. The separator detects principal minors for which X_ii, X_jj, and X_ij exist and enforces A_ij(x, X) ⪰ 0.
To identify which entries of the matrix X exist, the separator iterates over the available nonlinear constraints. For each constraint, its expressions are explored and all expressions of the form x_i^2 and x_i x_j are collected. Then, the separator iterates through the found bilinear terms x_i x_j, and if the corresponding expressions x_i^2 and x_j^2 exist, a minor is detected.
Let (x̂, X̂) be a solution that violates (35), i.e., there exists an eigenvector v ∈ R^3 of A_ij(x̂, X̂) with v^⊤ A_ij(x̂, X̂) v < 0. To separate (x̂, X̂), sepa_minor adds the globally valid linear inequality v^⊤ A_ij(x, X) v ≥ 0 to the separation storage of SCIP.
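A sketch of this step in pure Python: power iteration on s·I − A (generically) yields an eigenvector for the smallest eigenvalue of the symmetric 3×3 matrix, and its quadratic form gives a violated direction. The routine and start vector are illustrative; in practice a numerical eigensolver would be used.

```python
# Find a direction v with v^T A v < 0 for a symmetric 3x3 matrix A; the cut
# v^T A_ij(x, X) v >= 0 then separates the current point (illustrative only).

def negative_direction(A, iters=500):
    s = 1 + max(sum(abs(e) for e in row) for row in A)  # >= max eigenvalue
    v = [1.0, 0.2, 0.5]                                 # generic start vector
    for _ in range(iters):          # power iteration on s*I - A
        w = [s * v[r] - sum(A[r][k] * v[k] for k in range(3))
             for r in range(3)]
        nrm = max(abs(e) for e in w) or 1.0
        v = [e / nrm for e in w]
    quad = sum(v[r] * A[r][k] * v[k] for r in range(3) for k in range(3))
    return (v, quad) if quad < 0 else (None, quad)

# A_ij for the point x_i = x_j = 1, X_ii = X_jj = 1, X_ij = -1; this matrix
# has eigenvalues 2, 2, and -1, so a violated direction exists.
v, quad = negative_direction([[1.0, 1.0, 1.0],
                              [1.0, 1.0, -1.0],
                              [1.0, -1.0, 1.0]])
assert quad < 0 and abs(quad + 3.0) < 1e-6  # v ~ (1, -1, -1), v^T A v = -3
```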
For circle packing instances, the minor cuts are not really helpful [55]. Since experiments showed that SCIP's overall performance was negatively affected, circle packing constraints are identified and their bilinear terms are ignored by sepa_minor (parameter separating/minor/ignorepackingconss).
Another new separator that enforces bilinear product relations in (MINLP^lp_ext) is sepa_interminor. The notation introduced in Section 4.9 is used.
Since X = xx^⊤ has rank 1 in any feasible solution, any 2×2 minor

    ( X_{i1,j1}  X_{i1,j2}
      X_{i2,j1}  X_{i2,j2} )

of X needs to have determinant 0. That is, for any set of variable indices i1, i2, j1, j2 with i1 ≠ i2 and j1 ≠ j2, the condition

    X_{i1,j1} X_{i2,j2} = X_{i1,j2} X_{i2,j1}    (36)
needs to hold. If all variables in this condition exist in the problem and the solution (x̂, X̂) that is to be separated violates (36), the separation strategy described in Section 4.3.3 is used to add (strengthened) intersection cuts that separate (x̂, X̂). Additionally, it is also possible (parameter separating/interminor/usebounds) to use the bounds on x_{i1}, x_{i2}, x_{j1}, x_{j2} to improve the cut by enlarging the corresponding S-free set [22].
The separator is currently disabled by default.
The primal heuristic subnlp targets problems like (MINLP), but runs on any CIP where the NLP relaxation is enabled. Given a point x̃ that satisfies the integrality requirements (x̃_i ∈ Z for all i ∈ I), the heuristic fixes all integer variables to the values given by x̃ in a copy of the CIP, presolves this copy, and triggers a solution of the NLP relaxation by an NLP solver using x̃ as starting point. If the NLP solver, such as Ipopt, finds a solution that is feasible (and often also locally optimal) for the NLP relaxation, it is checked whether this solution is also feasible for the CIP. If the CIP is a MINLP, then this should usually be the case. The starting point x̃ can be the current solution of the LP relaxation if it is integer-feasible, can be a point computed by a primal heuristic that searches for feasible solutions of the MILP relaxation, or can have been passed on by other primal heuristics that look for MINLP solutions, such as undercover or mpec.
The subnlp primal heuristic, which is implemented in virtually any global MINLP
solver, had been added to SCIP together with the support for quadratic constraints
(SCIP 1.2.0). The rewrite of the algebraic expression system (Section 4.1) and the han-
dling of nonlinear constraints (Section 4.2) and the updates to the NLP solver interfaces
and NLP relaxation (Section 4.13) were a good opportunity for a thorough revision of
the heuristic.
Starting Condition and Iteration Limit By default, the heuristic is called in every node
of the branch-and-bound tree, but invoking an NLP solver whenever a starting point x̃
is available would be too costly. After the heuristic has been run, it therefore waits until
a certain number of nodes have been processed. How many nodes these are depends on
the success of the heuristic in previous calls, the number of iterations the NLP solver
55
used in previous calls, and the iteration limit that would be imposed for the following
NLP solve. Previously, the iteration limit was essentially static, which could mean that
on problems with difficult NLPs a lot of effort was wasted on NLP solves that were
interrupted by a too small iteration limit.
With SCIP 8, the heuristic tries to adapt the iteration limit to the NLPs to be solved. For that, the heuristic counts how often an NLP solve stopped due to an iteration limit (n_iterlim) and how often it finished successfully, that is, stopped because convergence criteria were fulfilled (n_okay). Let i_iterlim be the highest iteration limit used among all NLP solves that stopped due to an iteration limit, and let i_okay be the total number of iterations used in all NLP solves that finished successfully. Further, let i_min be a minimal number of iterations that should be granted to every NLP solve (parameter heuristics/subnlp/itermin = 20). Finally, let n_init be the number of initial NLP solves that should be granted i_init many iterations (parameters heuristics/subnlp/ninitsolves = 2 and heuristics/subnlp/iterinit = 300). The iteration limit i_next for the next NLP solve is then decided as follows:
1. If n_iterlim > n_okay, then i_next := max(i_min, 2 i_iterlim). That is, double the iteration limit if more solves ran into an iteration limit than were successful.
2. Otherwise, if n_okay > n_init, then i_next := max(i_min, 2 i_okay/n_okay). That is, if there were a few successful solves so far, then use twice the average number of iterations spent in these solves as iteration limit.
3. Otherwise, if n_okay > 0, then i_next := max(i_min, i_init, 2 i_okay/n_okay). That is, consider also i_init if there have not been enough successful solves so far.
4. Otherwise, i_next := max(i_min, i_init).
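Rules 1–4 can be condensed into a few lines. The following is a direct transcription of the description above with the stated parameter defaults, not the exact SCIP source:

```python
# Iteration-limit update for the next NLP solve (rules 1-4 above), using
# the defaults itermin = 20, iterinit = 300, ninitsolves = 2.

def next_iterlimit(n_iterlim, n_okay, i_iterlim, i_okay,
                   i_min=20, i_init=300, n_init=2):
    if n_iterlim > n_okay:                # 1: limits hit more often than not
        return max(i_min, 2 * i_iterlim)
    if n_okay > n_init:                   # 2: enough successful solves
        return max(i_min, 2 * i_okay // n_okay)
    if n_okay > 0:                        # 3: few successes, keep i_init
        return max(i_min, i_init, 2 * i_okay // n_okay)
    return max(i_min, i_init)             # 4: no information yet

assert next_iterlimit(0, 0, 0, 0) == 300        # first solve: i_init
assert next_iterlimit(3, 1, 500, 100) == 1000   # doubling after limit hits
assert next_iterlimit(0, 5, 0, 1000) == 400     # twice the average of 200
```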
To decide whether to execute the heuristic, an iteration contingent i_cont is calculated and checked against i_next. Compared to SCIP 7, this has received only minor updates:
1. Initialize i_cont := 0.3 (number of nodes processed + 1600) (parameters heuristics/subnlp/{nodesfactor,nodesoffset}).
2. Weigh by the previous success of the heuristic: Let n_tot be the total number of times the heuristic has run and n_sol the number of solutions found by the heuristic. If the heuristic ran a few times and is no longer in a phase where it tries to find a suitable iteration limit, that is, if n_tot − n_iterlim > n_init, then weigh i_cont by the success of the heuristic: i_cont := (n_sol + 1)/(n_tot + 1) i_cont. Parameter β := heuristics/subnlp/successrateexp allows to replace (n_sol + 1)/(n_tot + 1) by ((n_sol + 1)/(n_tot + 1))^β.
3. Let i_tot be the total number of iterations used in all NLP solves (successful or not) so far. Then i_cont := i_cont − i_tot.
4. If i_cont ≥ i_next, then the heuristic is run with i_next as iteration limit for the NLP solver.
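The contingent check, transcribed from the steps above (parameter values taken from the text; a sketch, not SCIP's source):

```python
# Decide whether to run the heuristic: compute the iteration contingent
# i_cont (steps 1-3) and compare it with the next iteration limit (step 4).

def should_run(nodes_processed, n_tot, n_sol, n_iterlim, i_tot, i_next,
               n_init=2, beta=1.0):
    i_cont = 0.3 * (nodes_processed + 1600)        # step 1
    if n_tot - n_iterlim > n_init:                 # step 2: weigh by success
        i_cont *= ((n_sol + 1) / (n_tot + 1)) ** beta
    i_cont -= i_tot                                # step 3: spent iterations
    return i_cont >= i_next                        # step 4

assert should_run(0, 0, 0, 0, 0, 300)        # 0.3 * 1600 = 480 >= 300
assert not should_run(0, 10, 0, 0, 0, 300)   # weighed down to about 44
```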
Presolve The heuristic triggers a solve of the NLP relaxation of SCIP in a copy of the
CIP. When the heuristic is run for a starting point x̃, integer variables are fixed to the
values given in x̃, the current primal bound is set as cutoff, and SCIP’s presolve is run
with presolve emphasis set to “fast”. The aim of the presolve is to propagate the fixing
of the integer variables in the problem since many NLP solvers, in particular those that
are interfaced by SCIP, only implement a very limited presolve. After presolve, if the
problem is not empty or infeasible, SCIP is put into a state where its NLP relaxation
can be solved. If the original CIP is a MINLP, then solutions that are feasible to this
NLP relaxation should also be feasible in the original problem. Further, also solutions
that are found during presolve are passed on to the original problem.
This process of fixing integer variables, setting a cutoff, and presolving the CIP
repeats every time the heuristic is run. If, however, there are no binary or integer
variables, then setting a cutoff and presolve is skipped and the copied problem is kept
in a state where its NLP relaxation can be solved.
NLP Solve The NLP relaxation in the presolved copied CIP instance is solved by an NLP solver that is interfaced by SCIP. The solver is given x̃ as starting point, and the iteration limit is set to i_next. If the NLP solver is Ipopt, then also the "expect infeasible problem" heuristic of Ipopt is enabled.
If the solver claims to have found a feasible solution, then an attempt is made to add this solution to the original problem. This can fail for three reasons: the objective function value is not good enough, the NLP relaxation is missing some constraints of the original CIP, or the solution is only slightly infeasible due to presolve reductions. For example, due to tolerances, bounds of aggregated variables might be slightly violated. To work around this last case, if a solution is not accepted although its objective value is not worse than the current primal bound, its maximal constraint violation is close to the feasibility tolerance, and the copied problem has been presolved (I ≠ ∅), then the NLP is resolved with a tightened feasibility tolerance (parameter heuristics/subnlp/feastolfactor). For this resolve, warmstarting from the previous solution is enabled and the iteration count of the previous NLP solve is used as iteration limit. If the NLP resolve succeeds and produces a solution that is accepted in the original problem, then the tightened feasibility tolerance is used for all following NLP solves by the heuristic.
4.13 NLP Relaxation and Interfaces to NLP Solvers and Automatic Differentiation
The updated expressions framework (Section 4.1) triggered a revision of the NLP relaxation and the interfaces to NLP solvers (NLPI) and automatic differentiation (EXPRINT).
The rows of the NLP relaxation (SCIP_NLROW) no longer distinguish a quadratic part.
Therefore, rows now have the form
where the nonlinear term is given as a SCIP expression. When the nonlinear constraint
handler (see Section 4.2) creates an NLP row for a constraint g ≤ g(x) ≤ g of (MINLP),
it separates linear terms from g(x). The constraint handlers and, bounddisjunction,
knapsack, linear, linking, logicor, setppc, and varbound now add themselves to
the NLP relaxation. Previously, and, bounddisjunction, and linking constraints were
not added. For bounddisjunction, only univariate constraints are added.
Further, note that the NLP relaxation of SCIP is no longer based on the extended formulation (MINLP^lp_ext), but is now closer to the continuous relaxation of the original problem (MINLP).
Since expression handlers are now proper SCIP plugins that require a SCIP pointer for
many operations and since expressions are used to specify NLPs, also the NLP solver
interfaces (NLPI) are now proper SCIP plugins that require a SCIP pointer. However,
as before, the NLPs that are specified via an NLPI can be independent of the problem
that is solved by SCIP. For the expressions in the objective and constraints of such
an NLP this means that the “var” expression handler, which refers to a SCIP variable
(SCIP_VAR*), cannot be used. Instead, the handler for “varidx” expressions, which refer
to a variable index, needs to be used. As a consequence, the evaluation and differentiation methods of expressions, which work with a SCIP solution (SCIP_SOL), are not
available (the EVAL callback of the “varidx” expression handler raises an error). Instead,
the NLP solver interfaces either implement their own evaluation and differentiation or
resort to the helper functions implemented in nlpioracle.{h,c}.
In addition to the adjustments to the new expressions framework, further updates and removals of NLPI callbacks were implemented. For a detailed list, see the CHANGELOG.
A notable change, though, is that parameter settings that specify the working limits and
tolerances of an NLP solve are now passed directly to the NLPISOLVE callback and, thus,
are used for the corresponding solve only. The same applies to the NLP relaxation
of SCIP and SCIPsolveNLP() (now a macro). The default values for the NLP solve
parameters are now uniform among all NLP solvers and some parameters were added,
removed, or renamed. The solve statistics now include information on the violation of
constraints and variable bounds of the solution, if available.
The problem and optimization statistics that SCIP collects and prints on request (display statistics) now include a table for each used NLP solver, which reports the number of times the solver was used, the time spent, and how often each termination and solution status occurred. Additionally, the time spent for evaluation and differentiation can be shown (parameter timing/nlpieval).
As before, SCIP includes interfaces to the NLP solvers FilterSQP, Ipopt, and
WORHP. In particular the interface to Ipopt has been improved. Only some points
are mentioned here:
− Warmstarts from a primal/dual solution pair, either set via NLPISETINITIALGUESS or by using the solution from the previous solve, are now available. Further, Ipopt is instructed to reinitialize fewer data structures if the structure of the NLP did not change since the last solve.
− When Ipopt requests an evaluation of the Jacobian or Hessian, function reevaluation
is now skipped if possible.
− When Ipopt stops at a point that it claims to be locally infeasible, it is now checked
whether the solution proves infeasibility, see Berthold and Witzig [16, Theorem 1].
If that is not the case, the solution status is changed to “unknown”.
− A few Ipopt parameters can now be set directly via SCIP parameters (nlpi/ipopt/*).
− Due to changes in how the Ipopt output is redirected into the SCIP log, the Ipopt banner was no longer printed reliably for the first run of Ipopt. Therefore, the banner has now been disabled completely.
For the computation of first and second derivatives, SCIP traditionally relied on a third-
party automatic differentiation (AD) library. With the new expressions framework
(Section 4.1), first derivatives and Hessian-vector products are available in SCIP itself.
Their implementation relies on the BWDIFF, FWDIFF, and BWFWDIFF callbacks of the expression handlers. The latter two are not implemented for every expression handler so
far. However, some NLP solvers make use of full Hessians and their sparsity pattern,
something that is not available in the expressions framework itself yet. Further, the
current data structure for expressions with its many pointer redirections does not perform well when a fixed expression needs to be evaluated repeatedly at many points.
Therefore, a separate AD library is still used in the interfaces to NLP solvers.
Currently, the only library that is interfaced is CppAD5 . In the CppAD interface,
5 https://github.com/coin-or/CppAD
a given expression is compiled into the serial datastructure (the “tape”) that is used by
CppAD. Here, expression types (i.e., which handler is used) are checked and translated
into a form that is native to CppAD when possible. Since the CppAD interface is used by NLPIs only, it supports only the "varidx" expression and not the "var" expression (see the beginning of the previous section). With SCIP 8, CppAD's feature to optimize the tape has
been enabled.
Mapping of expression handlers to CppAD's operator types is available for all expression handlers that are included in SCIP. For some expression types, such as signpower, this translation has been improved to avoid repeated recompilation of an expression.
For expression handlers that are not known to the CppAD interface, the backward-
and forward-differentiation callbacks of the expression handler are used to provide first
derivatives. However, second derivatives (Hessians) are not yet available. In the Ipopt
interface, the Hessian approximation will be activated in this case.
With SCIP 7, quadratic functions, including their derivatives, were treated differently from other nonlinear functions. Further, the NLPs to be solved were built from the extended, thus sparse, formulation (MINLP^lp_ext). Therefore, nonlinear functions typically depended on only a few variables and, thus, it was usually sufficient to work with dense Hessians. With SCIP 8, though, also the derivatives of quadratics are computed by the AD library, and the NLPs to be solved are closer to the original form (MINLP). For these reasons, CppAD's routines to compute sparse Hessians are now used unless more than half of the Hessian entries are nonzero.
While Section 2.3 compared the performance of SCIP 7.0 and SCIP 8.0 on a set of MINLP instances, this section takes a closer look at the effect of replacing only the handling of nonlinear constraints in SCIP. That is, the following two versions of SCIP are compared here:
classic: the main development branch of SCIP as of August 23, 2021; in this version, nonlinear constraints are handled as they have been in SCIP 7.0, with just a few bugfixes added;
new: as classic, but with the handling of nonlinear constraints replaced as detailed in
this section and symmetry detection extended to handle nonlinear constraints (see
Section 3.2.2).
For this comparison, SCIP has been built with GCC 7.5.0 and uses PaPILO 1.0.2 for MILP presolving, bliss 0.73 to find graph automorphisms, CPLEX 20.1.0.1 as LP solver, Ipopt 3.14.4 as NLP solver, CppAD 20180000.0 for automatic differentiation, and Intel MKL 2020.4.304 for linear algebra (LAPACK). Ipopt uses the same LAPACK and HSL MA27 as linear solver. All runs are carried out on identical machines with Intel Xeon CPUs E5-2660 v3 @ 2.60GHz and 128GB RAM in single-threaded mode. As working limits, a time limit of one hour, a memory limit of 100000MB, an absolute gap tolerance of 10^−6, and a relative gap tolerance of 10^−4 are set. All 1678 instances of MINLPLib (version 66559cbc from 2021-03-11) that can be handled by both the classic and the new version are used. It is noted, though, that MINLPLib is not designed to be a benchmark set, since, for example, some models are overrepresented with a large number of instances. For each instance, two additional runs in which the order of variables and constraints was permuted by SCIP were conducted. Thus, in total 5034 jobs were run for each version of SCIP.
Table 4 summarizes the results. A run is considered as failed if the reported primal
or dual bound conflicts with best known bounds for the instance, the solver aborted
prematurely due to a fatal error (for example, failure in solving the LP relaxation of
a node), or the solver did not terminate at the time limit.

Table 4: Comparison of the classic and the new version on MINLPLib instances.

Subset        instances   metric                classic   new      both
all           5034        solution infeasible   481       49       20
                          failed                143       70       18
                          solved                2929      3131     2742
                          time limit            1962      1833     1598
                          memory limit          0         0        0
clean         4839        fastest               3733      3637     2531
                          mean time             75.9s     70.3s
                          mean nodes            2543      2601
[0, 3600)     2742        fastest               1990      1697     945
                          mean time             4.7s      5.4s
                          mean nodes            415       455
[10, 3600)    985         fastest               618       554      187
                          mean time             55.6s     66.0s
                          mean nodes            3960      4502
[100, 3600)   484         fastest               292       262      70
                          mean time             185.3s    231.9s
                          mean nodes            12620     17150
[1000, 3600)  141         fastest               72        81       12
                          mean time             803.5s    623.5s
                          mean nodes            43345     39014

For this comparison, runs
where the final solution is not feasible are accounted for separately. One can observe
that with the new version, the final incumbent is infeasible for the original problem
on far fewer instances; that is, the issue discussed in Section 4.2.1 has been resolved
for nonlinear constraints. For the remaining 49 instances, typically small violations of
linear constraints or variable bounds occur. Further, the reduction of “failed” instances
by half shows that the new version is also more robust regarding the computation of
correct primal and dual bounds. Finally, we see that the new version solves about
400 more instances than the classic one, but also no longer solves about 200
instances within the time limit.
Subset “clean” refers to all instances where both versions did not fail, i.e., either
solved to optimality or stopped due to the time limit. We count a version to be “fastest”
on an instance if it is not more than 25% slower than the other version. Mean times
were computed as explained in the beginning of Section 2. Due to the increase in the
number of solved instances, a reduction in the mean time with the new version on subset
“clean” can be observed, even though the new version is fastest on fewer instances than
the classic one.
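The two aggregate statistics used here can be sketched in a few lines of Python (a minimal illustration; the shift value of 1 second and the sample numbers below are assumptions, not necessarily the exact setup of Section 2):

```python
import math

def shifted_geomean(values, shift=1.0):
    """Shifted geometric mean: exp(mean(log(v + shift))) - shift."""
    logs = [math.log(v + shift) for v in values]
    return math.exp(sum(logs) / len(logs)) - shift

def fastest_counts(times_a, times_b, tolerance=0.25):
    """Count the instances on which each version is 'fastest', i.e. not
    more than 25% slower than the other version; ties count for both."""
    a = b = both = 0
    for ta, tb in zip(times_a, times_b):
        a_fast = ta <= (1.0 + tolerance) * tb
        b_fast = tb <= (1.0 + tolerance) * ta
        a += a_fast
        b += b_fast
        both += a_fast and b_fast
    return a, b, both
```

The shifted geometric mean damps the influence of very small runtimes, which is why it is the customary mean for solver timings.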
For the remaining subsets, [t1 , t2 ) refers to all instances where at least one version
ran for t1 or more seconds and both versions terminated in less than t2 seconds. That is,
only instances that could be solved to optimality by both versions are considered. For
most of these subsets, the new version is still slower more often and on average than the
classic version. Further, on a third of the instances that can be solved, both versions
perform similarly. Only on the (rather small) subset [1000, 3600) of difficult-but-solvable
instances does the new version improve.
Figure 4 shows performance profiles that compare both versions w.r.t. the time to
solve an instance and the gap at termination. The time comparison visualizes what has
already been observed in Table 4: the new version solves more instances, but can be
slower. The gap comparison shows that, on instances that are not solved, the new
version often terminates with a smaller optimality gap than the classic version.

Figure 4: Performance profiles comparing both versions with respect to the time to
solve (left) and the gap at termination (right).
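The curves of such a performance profile are obtained by counting, for each budget on the horizontal axis, how many instances stay within that budget; a minimal sketch (the sample numbers are made up):

```python
def profile_points(values, budgets):
    """For each budget b (e.g. time in seconds, or final gap), count the
    instances whose value is at most b; unsolved instances carry
    value = float('inf') and never contribute."""
    return [sum(1 for v in values if v <= b) for b in budgets]
```

Plotting these counts over increasing budgets for both versions yields curves like those in Figure 4; the higher curve dominates for the corresponding budget range.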
Appendix A provides detailed results on the performance of both SCIP versions on
the considered MINLPLib instances. Further, information on the usage of the nonlinear
handlers and separators that were described in this section is given.
5 SoPlex
Several other smaller changes and improvements have been made in SoPlex 6.0. First,
SoPlex was extended by a C interface, as explained in Section 7. Second, a rework of
the internal data structures was necessary to fix warnings that were issued by current
compiler versions. Third, the dependency on the Boost program options library has
been removed and the command line interface has been restored to its classic version.
Furthermore, it is now possible to use SoPlex's rational solving mode without linking
a GMP library by using Boost's internal implementation of rational numbers. Finally, an
ongoing LP solve of SoPlex can now be interrupted from a different thread by calling
the setInterrupt function of SoPlex. On the SCIP side, this is handled by calling
SCIPinterruptLP.
6 PaPILO
PaPILO, a C++ library, provides presolving routines for MILP and LP problems and
was introduced with SCIP Optimization Suite 7.0 [35]. PaPILO’s transaction-based
design generally allows presolvers to run in parallel without requiring expensive copies of
the problem and without special synchronization in the presolvers themselves. Instead
of applying the results immediately, presolvers return their reductions to the core, where
they are applied in a deterministic, sequential order. Modifications in the data structure
are tracked to avoid applying conflicting reductions. These conflicting reductions are
discarded.
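The transaction-based round described above can be sketched as follows (a simplified Python illustration, not PaPILO's actual C++ API; each "reduction" names the data it touches plus a function that applies it):

```python
from concurrent.futures import ThreadPoolExecutor

def run_presolve_round(problem, presolvers):
    """Run all presolvers on the same problem snapshot in parallel, then
    apply their reductions sequentially in a fixed (deterministic) order,
    discarding any reduction that touches data already modified this round."""
    with ThreadPoolExecutor() as pool:
        # every presolver sees the unmodified snapshot and returns a list
        # of (touched_indices, apply_function) transactions
        results = list(pool.map(lambda p: p(problem), presolvers))

    modified, applied, discarded = set(), [], []
    for reductions in results:           # deterministic: presolver order
        for touched, apply_fn in reductions:
            if modified & touched:       # conflicts with an earlier reduction
                discarded.append(apply_fn)
            else:
                modified |= touched
                applied.append(apply_fn)
    for fn in applied:
        fn(problem)
    return len(applied), len(discarded)
```

Because the application order depends only on the presolver order and not on thread timing, the reduced problem is the same for any number of threads.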
The main new feature in PaPILO 2.0 is support for postsolving dual and basis
information, which is described in Section 6.1. This feature allows PaPILO to be used as
an integrated presolving library in SoPlex, see Section 5.1. Furthermore, PaPILO 2.0
comes with several improvements to the existing code base and presolving routines,
described in Section 6.2. These changes result in a five percent improvement in
runtime compared to the previous release.
6.1 Dual Postsolving

After removing, substituting, and aggregating variables from the original problem during
presolving, the reduced problem (and its solution) does not contain any information
on the removed variables. To restore the solution values of these variables and obtain a
feasible original solution, corresponding data needs to be stored during the presolving
process. The process of recalculating the original solution from the reduced one is called
postsolving or post-processing [5].
Until version 1.0.2, PaPILO supported only postsolving of primal solutions. In the
latest version, PaPILO also supports postsolving of dual solutions, reduced costs,
the slack variables of the constraints, and the basis status of the variables and con-
straints for the presolvers: DominatedColumns, Dualfix, ParallelCols, ParallelRows,
Propagation, FixContinuous, ColSingleton, and SingletonStuffing. These form
the majority of the LP presolvers. The remaining presolvers are either only active in the
presence of integer variables6 or need to be disabled by the user7 .
Furthermore, in dual postsolve mode PaPILO only applies variable bound tight-
enings when they fix a variable. Otherwise, the solution to the reduced problem may
correspond to a non-vertex solution in the original space and simple postsolving without
an expensive crossover may not be possible. If the basic information is irrelevant for the
user, the variable tightening without fixing can be turned on by setting the parameter
calculate_basis_for_dual to false. An exception here is if a variable is unbounded.
In this case, the bound of this variable is set to a finite value, which is slightly worse than
6 Presolvers only active for MILP: CoefficientStrengthening, ImpliedInt, Probing, SimpleProbing,
SimplifyInequalities
7 LP presolvers not supporting dual postsolving: DualInfer, SimpleSubstitution,
Substitution, Sparsify, ComponentDetection, LinearDependency; see also the settings file
lp_presolvers_with_basis.set in the PaPILO repository
the best possible bound so that the bound cannot be tight in the reduced problem. This
applies only to instances with no integer variables. Variable tightening is still performed
for mixed-integer programs.
For primal postsolving only information about removed, substituted and aggregated
variables needs to be tracked. By contrast, dual postsolving needs to be informed about
every modification found during presolving. PaPILO 2.0 keeps track of these changes
and saves them in the postsolve stack analogously to primal postsolving. For example,
a row-bound change can lead to changes in the dual solution due to complementary
slackness.
After postsolving, PaPILO checks if the original solution passes the primal and dual
feasibility checks and fulfills the Karush-Kuhn-Tucker conditions [58] for LP. The result
of the checks is logged to the console. Since also infeasible solutions can be postsolved,
PaPILO does not abort if the checks fail and instead returns the result to the calling
method.
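A check of this kind can be sketched for the textbook LP form min c·x s.t. Ax ≥ b, x ≥ 0; this is an illustration only, and PaPILO's internal problem form and tolerances differ:

```python
def check_kkt(c, A, b, x, y, tol=1e-9):
    """Verify primal/dual feasibility and complementary slackness for
    the LP  min c.x  s.t.  A x >= b, x >= 0  with dual multipliers y."""
    m, n = len(A), len(c)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
    # reduced costs: z = c - A^T y
    z = [c[j] - sum(A[i][j] * y[i] for i in range(m)) for j in range(n)]
    primal = all(Ax[i] >= b[i] - tol for i in range(m)) and \
             all(v >= -tol for v in x)
    dual = all(v >= -tol for v in y) and all(v >= -tol for v in z)
    slack = all(abs(y[i] * (Ax[i] - b[i])) <= tol for i in range(m)) and \
            all(abs(z[j] * x[j]) <= tol for j in range(n))
    return primal and dual and slack
```

Primal feasibility, dual feasibility, and complementary slackness together are exactly the Karush-Kuhn-Tucker conditions for an LP, so a solution passing all three checks is optimal.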
For debugging purposes, this check can be performed after every step in the postsolve
process. To activate this debugging feature, PaPILO needs to be built in debug mode
and the parameter validation_after_every_postsolving_step has to be turned on.
This may be expensive because the problem at the current stage needs to be recomputed
from the original problem by applying all reductions up to this point.
The introduction of dual postsolving allows using PaPILO as a presolving library in
SoPlex. Section 5.1 contains a brief description of the integration.
6.2 Improvements to the Code Base and Presolving Routines

In this section, we describe several smaller improvements in PaPILO 2.0. These changes
mostly affect only the performance of PaPILO and rarely change the resulting reduced
problem. All in all, these changes improve the performance of PaPILO since the last release
by about five percent (using 16 threads) in terms of runtime and the number
of presolving rounds; see Table 5 for details.
− When PaPILO 1.0 is run with only one thread, the presolvers are executed in sequen-
tial order, but the reductions of every presolver are only applied at the end of the
presolving round. This is part of the parallel design of PaPILO and helps to guar-
antee deterministic results independently of the number of threads used. However,
in sequential mode this does not guarantee the best performance.
Instead, when PaPILO 2.0 is run with only one thread, the reductions are applied
before the next presolver starts, so that the next presolver can work on the modified
problem. This feature can be turned off by setting the parameter
presolve.apply_results_immediately_if_run_sequentially to false.
− DualFix handles an additional case with two conditions: first, the objective coefficient
of the variable is zero; second, the variable has only up-locks and a lower bound of
negative infinity (or, symmetrically, only down-locks and an upper bound of positive
infinity). In this case, the variable can be fixed at the corresponding infinite bound
and deleted from the model. PaPILO removes the variable and marks all constraints
containing it as redundant. In postsolving, the variable is set to the max-
imum/minimum value such that the variable bounds and the constraints in which
it appeared in the original problem are not violated and hence, the solution stays
feasible.
− PaPILO uses a transaction-based design to allow parallelization within the pre-
solvers. This may generate conflicts when applying the reductions of the presolvers
to the core. Conflicting reductions need to be discarded since it can not be ensured
that the reduction is still valid. Conflicts make additional runs necessary to check
if the discarded or a reformulated reduction can still be applied. Therefore, we per-
formed a detailed analysis of the most prominent conflict relationships, introduced a
Table 5: Performance comparison for PaPILO on the MIPLIB 2017 benchmark.

new reduction type in PaPILO 2.0, and rearranged the order of the presolving
reductions. In more detail, the improvements are as follows:
· ParallelRowDetection could generate unnecessary conflicts, mainly with Parallel-
ColDetection. To avoid discarded reductions and additional runs, two new reduction
types RHS_LESS_RESTRICTIVE and LHS_LESS_RESTRICTIVE were introduced. In
contrast to RHS and LHS, the columns of a row are not marked as modified if the
initial bound was (negative) infinity.
· ParallelRowDetection, ParallelColDetection, and DominatedCol could gen-
erate internal conflicts if multiple rows/columns were parallel to or dominating each
other. To avoid these conflicts, groups of parallel and dominating columns/rows
are handled separately.
· The order in which the reductions are applied to the core impacts the number
of conflicts between the presolvers. We analyzed the conflicts between the pre-
solvers and implemented a new default order that minimizes these conflicts.
The positive impact of these changes can be observed in the reduced number of
rounds reported in Table 5.
− SimpleSubstitution handles an additional case to detect infeasibility faster.
− The loops in which the presolvers ConstraintPropagation, DualFix, Simplify-
Inequality, CoefficientStrengthening, SimpleSubstitution, SimpleProbing,
and ImpliedInteger scan the rows or columns of the problem were parallelized.
Hence, these presolvers can distribute their workload over different threads and exploit
multiple threads internally.
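The additional DualFix case described above can be sketched as follows (a simplified Python illustration with hypothetical helper names; PaPILO's actual implementation works on its internal matrix data structures):

```python
import math

def count_locks(rows):
    """Count up-/down-locks of a variable x_j from its matrix column.
    rows: list of (a_ij, lhs_i, rhs_i) with each row lhs_i <= a_i.x <= rhs_i."""
    up = down = 0
    for a, lhs, rhs in rows:
        if a > 0:
            up += rhs < math.inf      # increasing x_j may violate rhs
            down += lhs > -math.inf   # decreasing x_j may violate lhs
        else:
            up += lhs > -math.inf
            down += rhs < math.inf
    return up, down

def dual_fix_direction(obj_coef, lb, ub, rows):
    """Return -inf/+inf if the described DualFix case applies, else None."""
    if obj_coef != 0.0:
        return None
    up, down = count_locks(rows)
    if down == 0 and lb == -math.inf:
        return -math.inf   # only up-locks: pushing x_j down is always safe
    if up == 0 and ub == math.inf:
        return math.inf    # only down-locks: pushing x_j up is always safe
    return None
```

In postsolving, the infinite value is then replaced by the extreme finite value that keeps all original constraints and bounds satisfied.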
Finally, two further features were introduced, improving transparency as well as the
ability to debug:
− For analysis and debugging purposes, PaPILO can now log every transaction in the
order in which they were applied to the problem if the verbosity level kDetailed is
specified.
− PaPILO provides an additional way to validate its correctness. A feasible debug
solution can be passed via the command-line parameter -b. After presolving the
corresponding instance, PaPILO checks if the debug solution is still contained in
the reduced problem and if the reduced solution can be postsolved to the same
solution passed via the command line. It is recommended to turn off presolvers that use
duality reasoning, since they may (correctly) cut off optimal solutions.
7 Interfaces
SCIP is available via interfaces to several programming languages. These interfaces
allow users to programmatically call SCIP with an API close to the C one or leverage a
higher-level syntax. The following interfaces are available:
− The Python interface PySCIPOpt, which can now also be installed as a Conda pack-
age;
− The AMPL interface that comes as part of the main SCIP library and executable;
− The Julia package SCIP.jl;
− A C wrapper for SoPlex;
− A Matlab interface.
We highlight below the main changes and developments of the interfaces to SCIP.
7.1 AMPL
The AMPL interface of SCIP has been rewritten and moved from being a separate
project (interfaces/ampl) to being a part of the main SCIP library and executable
(src/scip). The interface consists of a reader for .nl files as they are generated by
AMPL and a specific AMPL-mode for the SCIP executable.
The .nl reader now relies on ampl/mp8 instead of the AMPL solver library (ASL) to
read .nl files. The required source files of ampl/mp are redistributed with SCIP. Therefore,
building the .nl reader and the AMPL interface is enabled by default. The .nl reader
supports linear and nonlinear objective functions and constraints; continuous, binary,
and integer variables; and special-ordered sets. More than one objective function is
not supported by the interface. A nonlinear objective function is reformulated into
a constraint. In nonlinear functions, besides addition, subtraction, multiplication, and
division, the operators for power, logarithm, exponentiation, sine, cosine, and absolute value
are supported. Variable and constraint flags (initial, separate, propagate, and others)
can be set via AMPL suffixes.
If the SCIP executable is called with -AMPL as its second argument, it expects the
name of a .nl file (with the .nl extension excluded) as its first argument. In this mode,
a SCIP instance is created, a settings file scip.set is read if present, the .nl file
is read, the problem is solved, an AMPL solution file (.sol) is written, and SCIP
exits. Two additional parameters are available in AMPL mode: the boolean parameter
display/statistics enables printing the SCIP statistics after the solve; the string
parameter display/logfile specifies the name of a file to write the SCIP log to.
If the problem is an LP, SCIP presolving has not run, and the LP was solved, then a dual
solution is written to the solution file, too.
7.2 Julia
The Julia package SCIP.jl has been in development since SCIP 3, with several improve-
ments since SCIP 7. It contains a lower-level interface matching the SCIP public C
API and a higher-level interface based on MathOptInterface.jl (MOI) [59]. The lower-
level interface is automatically generated with Clang.jl to match the public SCIP
C API, allowing for the direct conversion of C programs using SCIP into Julia ones.
MathOptInterface.jl is a uniform interface for constrained structured optimization
in Julia. Solvers specify the types of constraints they support and implement only
those. Users can use the common interface for multiple solvers across different problem
classes, including LP, MILP, and (mixed-integer) conic optimization problems. Higher-
level modeling languages such as JuMP.jl are implemented on top of MOI, allowing
practitioners to define their optimization model in a syntax close to the mathematical
specification and to solve it through SCIP or swap solvers in a single line.
8 https://github.com/ampl/mp
The SCIP.jl package can also automatically download the appropriate compiled
binaries for SCIP and some of its dependencies on some platforms. This removes the
need for users to download and compile SCIP separately. Custom SCIP binaries can still
be passed to the Julia package when building it. This integration was made possible by
cross-compiling SCIP through the BinaryBuilder.jl infrastructure, creating binaries
for multiple combinations of OS, architecture, C runtime, and compiler. The binaries
are available through GitHub and versioned for other platforms to use outside of Julia.
7.3 C Wrapper for SoPlex

With SCIP 8, there also comes a C wrapper for SoPlex. Since in some environments it is
much easier to interface with C code than with C++ (the language SoPlex
is written in), this wrapper paves the way for other projects to use SoPlex as a standalone
LP solver and not only through SCIP. By providing a simple, pure C shared library and
header file, it is now possible to easily call SoPlex through the foreign function interface
of many other languages.
7.4 Matlab
In the past, two interfaces from Matlab to SCIP existed. SCIP came with a rudimentary
Matlab interface and there was the OPTI Toolbox by Jonathan Currie, available at
https://github.com/jonathancurrie/OPTI. However, the development of the OPTI
Toolbox has stopped. In order to retain the advantages of this interface, a new interface
was developed based on it. This new interface is available through the Git repository

https://github.com/scipopt/MatlabSCIPInterface

and can be installed from within Matlab or Octave. The interface will then be built, and
you may be asked where to find the SCIP or SCIP-SDP installation (you can also supply
this information through the environment variables SCIPDIR/SCIPOPTDIR or SCIPSDPDIR).
To highlight the advantages of this interface, we briefly show an example. To solve
the NLP
    min_{x ∈ R^2} { (x_1 − 1)^2 + (x_2 − 1)^2 : 0 ≤ x_1, x_2 ≤ 2 },

one can use the following code:

obj = @(x) (x(1) - 1)^2 + (x(2) - 1)^2;
lb = [0.0; 0.0];
ub = [2.0; 2.0];
x0 = [0.0, 0.0];
Opt = opti('obj',obj,'lb',lb,'ub',ub)
[x,fval,exitflag,info] = solve(Opt,x0)
More examples are available in the repository.
8 ZIMPL
Zimpl 3.5.0 is released together with SCIP Optimization Suite 8.0. Zimpl now
also allows nonlinear objective functions. There has been quite some work to further
increase the code quality by augmenting the code with compiler attributes. Also,
work has started to completely switch to C99 declarations, i.e., move variable
declarations from the start of the functions further inside, use local loop variables, use
const as much as possible, and in general try to initialize variables as soon as they are
created. Getting the maximum out of gcc, clang, clang-analyzer, and pclint is an
interesting experiment, which however clearly shows that C was never meant to be
validated.
A major additional feature in Zimpl is the ability to write out suitable instances
as a Quadratic Unconstrained Binary Optimization (QUBO) problem. Unfortunately,
there is no standard format for QUBO files yet, so we support several small varieties of a
sparse format for the moment. Furthermore, we have started to implement the ability to
automatically convert constraints into quadratic binary objective functions [42]. With
this release, this is a first experimental and limited ability, but we plan to extend it
continuously in the next releases.
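As an illustration of the kind of conversion described in [42], an equality constraint over binary variables can be absorbed into the objective as a quadratic penalty; a minimal sketch (not Zimpl's actual code or output format):

```python
def equality_to_qubo(coeffs, rhs, penalty):
    """Expand  penalty * (sum_j a_j x_j - rhs)^2  over binary x into a
    QUBO: dict of upper-triangular (j, k) coefficients plus a constant.
    Uses x_j^2 = x_j to fold linear terms onto the diagonal."""
    Q = {}
    const = penalty * rhs * rhs
    for j, aj in enumerate(coeffs):
        Q[(j, j)] = Q.get((j, j), 0.0) + penalty * (aj * aj - 2.0 * rhs * aj)
        for k in range(j + 1, len(coeffs)):
            Q[(j, k)] = Q.get((j, k), 0.0) + 2.0 * penalty * aj * coeffs[k]
    return Q, const

def qubo_value(Q, const, x):
    """Evaluate the QUBO objective for a 0/1 assignment x."""
    return const + sum(q * x[j] * x[k] for (j, k), q in Q.items())
```

For x1 + x2 = 1 with penalty 10, every feasible 0/1 point evaluates to 0 and every violation incurs a positive penalty, so minimizing the QUBO reproduces the constrained optimum if the penalty is large enough.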
9 The UG Framework
UG is a generic framework for parallelizing branch-and-bound based solvers in a dis-
tributed or shared memory computing environment. It was designed to parallelize
powerful state-of-the-art branch-and-bound based solvers externally in order to exploit
their performance; we call these "base solvers". Originally, a base solver was a
branch-and-bound based solver, but in this release the term is redefined as any solver
that is parallelized by UG. UG has been developed over 10 years as beta versions
to have general interfaces for the base solvers. Internally, we have developed parallel
solvers for SCIP [100, 103, 101], CPLEX (not developed anymore), FICO Xpress [102],
PIPS-SBB [77, 78], Concorde9 , and QapNB [29]. In addition to the parallelization of
these branch-and-bound base solvers, UG was used to develop MAP-SVP [105], which
is a solver for the Shortest Vector Problem (SVP), and whose algorithm does not rely
on branch-and-bound. Developers of several solvers parallelized by UG needed to
modify the UG framework itself internally, since UG could not handle their base solvers
directly. Especially the success of MAP-SVP, which updated several records of the SVP
challenge10, motivated us to develop a generalized UG, in which all solvers developed so
far can be handled by a single unified framework. The generalized UG is included in
this version of the SCIP Optimization Suite as UG version 1.0.
UG version 1.0 is completely different from the previous versions internally, though
its interfaces for branch-and-bound base solvers remain the same as far as possible.
Figure 5 shows the class hierarchy of UG version 1.0. The original UG base classes are
separated into branch-and-bound related codes and the others, so that non-branch-and-
bound solvers can be parallelized naturally. In the original UG, the ParaSolver class,
9 https://www.math.uwaterloo.ca/tsp/concorde.html
10 http://latticechallenge.org/svp-challenge
Figure 5: Class hierarchy and source code directory organization of the UG
version 1.0
which wraps the “base solver”, and the ParaComm class, which wraps communication
codes or parallelization libraries, are abstracted. On top of these abstractions, in UG
version 1.0, the ParaLoadCoordinator class, which is a controller of the parallel solver,
and the ParaParamSet class, which defines the parameter set, are also abstracted so
that a “base solver” specific parallel algorithm can be implemented flexibly with the
“base solver” specific parameters. The flexibility of UG version 1.0 can be observed in
the paper of CMAP-LAP (Configurable Massively Parallel solver framework for LAttice
Problems) [106], which is another parallel solver framework for lattice problems. On top
of CMAP-LAP, CMAP-DeepBKZ [107] has been developed, which is the successor of
MAP-SVP and the first application of the generalized UG.
For UG version 1.0, proper documentation of the software has been started, and Doxygen-
style documentation is introduced. Moreover, a CMake build system is included. On this
occasion, we made the following modifications to the FiberSCIP architecture and
added a selfsplit ramp-up feature to FiberSCIP and ParaSCIP.
The ramp-up is a process which runs until all CPU cores become busy. For a gen-
eral discussion of the ramp-up process for parallel branch-and-bound, see Ralphs et
al. [86] and for the ramp-up process of FiberSCIP see Shinano et al. [103]. One of the
distinguishing features of FiberSCIP is racing ramp-up. FiberSCIP is composed of a
ParaLoadCoordinator thread and several ParaSolver threads. During the ramp-up
phase, all ParaSolver threads solve the same root node with different parameter settings
until certain termination criteria are met; that is, FiberSCIP generates multiple search
trees in parallel and selects the winner among the ParaSolver threads, and afterwards
the winner's search tree is solved in parallel.
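The racing ramp-up can be sketched with threads as follows (a toy Python illustration, not UG's C++ implementation; the cooperative stop flag stands in for SCIP's interrupt mechanism):

```python
import concurrent.futures as cf

def racing_rampup(root_solve, settings_list):
    """Run the same root solve under different parameter settings in
    parallel; return the winning setting and its result as soon as the
    first racer finishes, asking the losers to stop."""
    stop = {"flag": False}
    with cf.ThreadPoolExecutor(max_workers=len(settings_list)) as pool:
        futures = {pool.submit(root_solve, s, stop): s for s in settings_list}
        done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        stop["flag"] = True          # interrupt the losing racers
        winner = next(iter(done))
        return futures[winner], winner.result()
```

In FiberSCIP, the "result" is the winner's search tree, which is then solved in parallel by all ParaSolver threads.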
In previous versions, the ParaSolver threads were detached. The reason was to enable
terminating FiberSCIP as soon as possible when one of the racing ParaSolver threads
had solved the instance. When we developed FiberSCIP for the first time, it was very
hard to interrupt SCIP while an LP solve was being executed. Therefore, all ParaSolver
threads were detached, and the main thread exited when one of the ParaSolver threads
had solved the instance. However, this mechanism led to some instability in FiberSCIP.
The latest version of SCIP can interrupt solving appropriately while the LP is running.
In this version of FiberSCIP, all ParaSolver threads are joined and FiberSCIP is
terminated cleanly.
In the previous versions of FiberSCIP, when a time limit is specified in the parameters,
FiberSCIP created a ParaTimeLimitMonitor thread to generate the time-limit
notification message for the ParaLoadCoordinator. The thread sleeps until the time
limit, wakes up when the time limit is reached, and sends the notification message to the
ParaLoadCoordinator. The ParaLoadCoordinator then tries to interrupt all ParaSolver
threads. However, these interruptions could fail to happen within a reasonable time
while an LP was running within SCIP, since the ParaSolver did not have a chance to
receive the message. From a performance point of view, creating the
ParaTimeLimitMonitor thread is not ideal, but the ParaLoadCoordinator works as a
kind of event-driven controller, and thus an event to notify the time limit was needed.
With UG version 1.0, this mechanism was changed to set a time limit whenever each
SCIP solves a sub-MIP, so that the time limit is detected on the ParaSolver side.
How well this works depends on how accurately SCIP can terminate at the specified time
limit. Unfortunately, there are several irregular timings, and FiberSCIP currently needs
to handle such cases. However, from a performance point of view, this has the benefit of
running FiberSCIP without the ParaTimeLimitMonitor thread.
To make FiberSCIP stable, one of the needed features is handling of the memory limit, since
memory overuse makes FiberSCIP abort. However, estimating memory usage is very
hard for FiberSCIP. The FiberSCIP memory usage estimation feature is implemented
using SCIP functions for memory usage estimation (plus the memory usage reported by
the operating system in case FiberSCIP runs on Linux). When the estimate exceeds
the system memory, the latest version of FiberSCIP terminates with "memory limit
reached".
partial decompositions, many functions of Seeedpool were moved. Formerly, Seeedpool
provided the functionality to classify constraints and variables. With version 3.5, vari-
able and constraint classifiers are implemented as plugins as well. All classifiers are called
at the beginning of the detection process. Each classifier can produce partitions of con-
straints or variables, which are represented by objects of the classes ConsPartition
(formerly ConsClassifier) and VarPartition (formerly VarClassifier), respectively.
Both classes implement the abstract class IndexPartition (formerly IndexClassifier).
This modular design allows users to easily add classifiers.
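The class structure can be sketched as follows (a Python analogue of the C++ design; the toy classifier is a hypothetical example, not one of GCG's actual classifiers):

```python
from abc import ABC, abstractmethod

class IndexPartition(ABC):
    """Partition of problem indices into classes (formerly IndexClassifier)."""
    def __init__(self, name):
        self.name = name
        self.classes = {}            # index -> class label

    def assign(self, index, label):
        self.classes[index] = label

    @abstractmethod
    def kind(self):
        ...

class ConsPartition(IndexPartition):     # formerly ConsClassifier
    def kind(self):
        return "constraint"

class VarPartition(IndexPartition):      # formerly VarClassifier
    def kind(self):
        return "variable"

def classify_cons_by_nnz(constraints):
    """Toy classifier plugin: partition constraints by their number of
    nonzero coefficients."""
    part = ConsPartition("nonzeros")
    for i, cons in enumerate(constraints):
        part.assign(i, len(cons))
    return part
```

A new classifier only needs to produce such a partition object; the detection loop then consumes all partitions uniformly.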
Furthermore, many parameters were changed. For a complete overview we refer to
the CHANGELOG. Users can enable or disable the entire detection process using the param-
eter detection/enabled. Moreover, the parameter detection/postprocess enables or
disables the post-processing of decompositions. Parameters related to classification were
moved to detection/classification/. By default, GCG 3.5 preprocesses an instance,
runs the detection, then solves. If detection should run already on the un-preprocessed
model, it must be initiated manually before presolve starts; by default, a second round
of detection is then still performed on the preprocessed model. With this more intuitive
behavior, the parameters origenabled of detectors and classifiers were removed.
The legacy detection mode (for detectors prior to version 3.0) is no longer avail-
able. All corresponding parameters and legacy detectors were removed. Users have to
implement detectors using the new callbacks and API.
In GCG, two general branching rules are implemented (branching on original vari-
ables [114] and Vanderbeck’s generic branching [111]) as well as one rule that applies
only to set partitioning master problems (Ryan and Foster branching [92]). While these
rules differ quite significantly (creating two child nodes vs. several child nodes; branching
on variables vs. on constraints), the general procedure at a node comes in two common
stages: First, one determines the set of candidates we could possibly branch on (called
the branching rule here). Second, the branching candidate selection heuristic then actu-
ally selects one of the available candidates. In SCIP the latter is done by ranking the
candidates according to a score. Both the branching rule and the selection heuristic can
have a significant impact on the size of the branch-and-bound tree, and hence on the
runtime of the entire algorithm. GCG previously contained only pseudo cost, most frac-
tional, and random branching as selection heuristics for original variable branching, and
first-index branching for Ryan-Foster and Vanderbeck’s generic branching. In GCG 3.5,
several new selection heuristics are added, all of which are based on strong branching.
For an overview of which selection heuristics are available for which branching rules see
Table 6. In the following, we briefly describe the new selection heuristics. For more
detailed descriptions, we refer to [36].
Given a set of branching candidates, the selection heuristic usually creates a ranking and
selects a winner. One ranking criterion is the expected gain, that is, the improvement in
the dual bound in the child nodes compared to the current node. However, computing
the exact gains amounts to performing full strong branching. In a branch-and-price
context, this means evaluating all branching candidates by solving all child node LP
relaxations with column generation to optimality. With often hard pricing problems,
this is an even (much) larger computational burden than in the standard
branch-and-cut context. Yet, strong branching has demonstrated potential in branch-
and-price for hard instances [82, 91]. In particular, strong branching generally creates
small trees (compared to other branching rules).
Table 6: Branching candidate selection heuristics available in GCG 3.5.

                                               branching rule
selection heuristic                            original   Ryan-Foster   Vanderbeck
random/index-based branching                   ✓          ✓             ✓
most fractional/infeasible branching           ✓          ✓2
pseudocost branching                           ✓          ✓2
strong branching with column generation1       ✓          ✓
strong branching without column generation1    ✓          ✓
hybrid branching1                              ✓          ✓3
reliability branching1                         ✓          ✓3
hierarchical branching1                        ✓          ✓3

1 The strong branching based heuristics can be combined. 2 GCG can only aggregate the respective
scores of the (two) individual variables. 3 These heuristics originally use both strong and pseudocost
branching; however, pseudocost branching can also be substituted by any other heuristic, with varying
performance.
− in phase 0, candidates are filtered based on a score that is cheap to com-
pute (such as pseudocost, most fractional, or random branching), then
− in phase 1, the remaining candidates are filtered based on their SBw/oCG scores,
and finally
− in phase 2, a candidate is selected out of the remaining candidates based on the score
the candidates received from SBw/CG.
The effort spent at a given node depends on how important a precise evaluation of that
node is assumed to be (a larger estimated subtree size gives more importance to a candidate)
and on the trade-off between computational effort and prediction quality in each phase.
The intuition is that only the most promising candidates (based on the
scores from the earlier heuristics) should receive the largest evaluation effort (and best
evaluation quality).
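A minimal sketch of this three-phase filtering follows; the function name, the score callbacks, and the fixed candidate counts are illustrative assumptions rather than GCG's actual interface:

```python
def hierarchical_select(candidates, cheap_score, sb_wo_cg_score, sb_w_cg_score,
                        n_phase1=10, n_phase2=3):
    """Three-phase candidate selection with increasingly expensive scores:
    phase 0 ranks everything cheaply, phase 1 filters by strong branching
    without column generation (SBw/oCG), and phase 2 picks the winner by
    strong branching with column generation (SBw/CG)."""
    # Phase 0: cheap ranking (e.g. pseudocost, most fractional, random).
    survivors = sorted(candidates, key=cheap_score, reverse=True)[:n_phase1]
    # Phase 1: filter the survivors by their SBw/oCG scores.
    survivors = sorted(survivors, key=sb_wo_cg_score, reverse=True)[:n_phase2]
    # Phase 2: final choice by the expensive SBw/CG evaluation.
    return max(survivors, key=sb_w_cg_score)
```

Only the few candidates surviving phases 0 and 1 ever incur the cost of a full evaluation with column generation.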
In the same way that strong branching can be combined with pseudocost branching
to obtain hybrid strong/pseudocost branching and reliability branching, hierarchical
strong branching can be combined with hybrid strong/pseudocost branching and
reliability branching to obtain hybrid hierarchical strong/pseudocost branching and hier-
archical reliability branching [36]: for hierarchical reliability branching, strong branching
is performed in phases 1 and 2 only on candidates that are not yet reliable, with different
thresholds for phase 1 and phase 2. For hybrid hierarchical strong/pseudocost branching,
phase 0 is performed only starting from a given depth, and phases 1 and 2 only up to a given
depth (again with separate thresholds for each phase).
Strong branching is implemented using SCIP's probing mode. The columns gen-
erated when evaluating a node with SBw/CG are kept in GCG's column pool. The
(potentially positive) side effect of this still needs to be evaluated.
By default, as in SCIP, strong branching is disabled. It can be enabled for original
variable branching or Ryan-Foster branching by setting branching/orig/usestrong or
branching/ryanfoster/usestrong, respectively, to TRUE. By default, this performs
hybrid hierarchical branching. Furthermore, several parameters allow one to
completely change the behavior of the heuristics; in fact, all of the strong branching
heuristics use the same implementation, just with different parameter settings. Preset
settings files for each of the previously described selection heuristics, as well as a tem-
plate file, can be found in GCG's settings folder. Table 7 lists most of the available
parameters. Further information can be found in the paper by Gaul [36] and in GCG's
documentation.
Table 7: Parameters for strong branching.

Parameter                    Effect

branching/[orig,ryanfoster]/…
  minphase[0,1]outcands      minimum number of output candidates from phase [0,1]
  maxphase[0,1]outcands      maximum number of output candidates from phase [0,1]
  maxphase[0,1]outcandsfrac  maximum number of output candidates from phase 0 as a fraction of the
                             total candidates; takes precedence over minphase[0,1]outcands
  phase[1,2]gapweight        how much influence the node gap has on the number of output candidates
                             from phase [1,2]−1

branching/bpstrong/…
  histweight                 fraction of candidates in phase 0 that are chosen based on historical
                             strong branching performance
  mincolgencands             minimum number of candidates for phase 2 to be performed; otherwise
                             the best previous candidate is chosen
  maxsblpiters,              upper bound on the number of simplex iterations/pricing rounds; set to
  maxsbpricerounds           twice the average if set to 0
  immediateinf               if set to TRUE, candidates with infeasible children are selected immediately
  reevalage                  reevaluation age
  maxlookahead               upper bound for the look ahead
  lookaheadscales            by how much the look ahead scales with the overall evaluation effort
                             (currently lookaheadscales * maxlookahead is the minimum look ahead)
  closepercentage            fraction of the chosen candidate's phase 2 score the phase 0 heuristic's
                             choice needs to have in order to be considered close
  maxconsecheurclose         number of times in a row the phase 0 heuristic needs to be close for
                             strong branching to be stopped entirely
  minphase0depth,            κ⁺₁ = λ_depth + ρ_depth · log_{λ_base}(n_cands), where κ⁺ᵢ is the depth
  maxphase[1,2]depth,        until which phase i is performed, λ_depth = maxphase1depth, ρ_depth =
  depthlogweight,            depthlogweight, λ_base = depthlogbase, and n_cands is the number of
  depthlogbase,              variables that could be branched on (usually all integer and binary
  depthlogphase[0,2]frac     variables); κ⁺₂ = κ⁺₁ · depthlogphase2frac, but at most maxphase2depth;
                             the minimum depth from which on phase 0 is performed equals
                             κ⁺₁ · depthlogphase0frac, but at least minphase0depth
  phase[1,2]reliable         minimum count of pseudocost scores for a variable to be considered
                             reliable in phase [1,2]
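For concreteness, the depth formula in the last rows of Table 7 can be evaluated as in the following sketch; all parameter values in the example are made up for illustration:

```python
import math

def phase_depths(ncands, maxphase1depth, depthlogweight, depthlogbase,
                 depthlogphase0frac, depthlogphase2frac,
                 minphase0depth, maxphase2depth):
    """Evaluate the depth limits from Table 7:
    kappa1 = maxphase1depth + depthlogweight * log_depthlogbase(ncands),
    kappa2 = min(kappa1 * depthlogphase2frac, maxphase2depth), and
    phase 0 starts at max(kappa1 * depthlogphase0frac, minphase0depth)."""
    kappa1 = maxphase1depth + depthlogweight * math.log(ncands, depthlogbase)
    kappa2 = min(kappa1 * depthlogphase2frac, maxphase2depth)
    phase0_start = max(kappa1 * depthlogphase0frac, minphase0depth)
    return kappa1, kappa2, phase0_start

# With 1024 branchable variables and illustrative parameter values:
k1, k2, p0 = phase_depths(ncands=1024, maxphase1depth=4, depthlogweight=0.5,
                          depthlogbase=2, depthlogphase0frac=0.25,
                          depthlogphase2frac=0.5, minphase0depth=1,
                          maxphase2depth=6)
# k1 = 4 + 0.5 * log2(1024) = 9, k2 = 4.5, phase 0 starts at depth 2.25
```

The logarithmic dependence on the number of candidates lets the expensive phases run deeper in the tree on small models than on large ones.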
Solving the subproblems is very costly, and processing a node is more expensive than in stan-
dard branch-and-bound; thus, there is an even stronger incentive to have a small tree
in branch-and-price. Certainly, this needs more investigation.
With GCG 3.5, we introduce PyGCGOpt, which extends SCIP's existing Python inter-
face [66] for GCG. It is implemented in Cython (cython.org) and is distributed as a
package independent of the optimization suite at github.com/scipopt/PyGCGOpt.
All the existing functionality for modeling MILPs is inherited from PySCIPOpt.
As a result, any MILP modeled in Python can also be solved with GCG without addi-
tional effort, which lowers the technical hurdle of trying out a branch-and-price approach
for any existing problem. In its first incarnation, the interface supports specifying custom
decompositions and exploring automatically detected decompositions; these can be
visualized directly within Jupyter notebooks. In addition, GCG plugins for detectors
and pricing solvers can be implemented in Python.
In the following code listing, the capacitated p-median problem (CPMP) is modeled
with PySCIPOpt’s expression syntax. The specified textbook decomposition [20] is
solved by GCG with Dantzig-Wolfe reformulation upon the call to m.optimize(). Note
that the automatic structure detection functionality of GCG remains intact, so that the
user does not need to (but can) specify a decomposition.
from pygcgopt import gcgModel, quicksum as qs

n_locs = 5
n_clusters = 2
distances = {0: {0: 0, 1: 6, 2: 54, 3: 52, 4: 19},
             1: {0: 6, 1: 0, 2: 28, 3: 75, 4: 61},
             2: {0: 54, 1: 28, 2: 0, 3: 91, 4: 40},
             3: {0: 52, 1: 75, 2: 91, 3: 0, 4: 28},
             4: {0: 19, 1: 61, 2: 40, 3: 28, 4: 0}}
demands = {0: 14, 1: 13, 2: 9, 3: 15, 4: 6}
capacities = {0: 39, 1: 39, 2: 39, 3: 39, 4: 39}

m = gcgModel()
x = {(i, j): m.addVar(f"x_{i}_{j}", vtype="B", obj=distances[i][j])
     for i in range(n_locs) for j in range(n_locs)}
y = {j: m.addVar(f"y_{j}", vtype="B") for j in range(n_locs)}

conss_assignment = m.addConss(
    [qs(x[i, j] for j in range(n_locs)) == 1 for i in range(n_locs)])
conss_capacity = m.addConss(
    [qs(demands[i] * x[i, j] for i in range(n_locs)) <= capacities[j] * y[j]
     for j in range(n_locs)])
cons_pmedian = m.addCons(qs(y[j] for j in range(n_locs)) == n_clusters)

master_conss = conss_assignment + [cons_pmedian]
block_conss = [[cons] for cons in conss_capacity]
m.addDecompositionFromConss(master_conss, *block_conss)

m.optimize()
The Python interface required refactoring within the codebase of GCG. Previously, a
lot of the solver's core functionality was implemented within dialog handlers, which
made it hard to use GCG as a library in external programs. The functions
gcgtransformProb(), gcgpresolve(), gcgdetect(), gcgsolve(), gcggetDualbound(),
gcggetPrimalbound(), and gcggetGap() were added to the public interface and are
called from the dialog handlers as well as the Python interface. As a side effect, GCG
can now be used more easily as a C/C++ shared library.
[Figure: bar charts of the geometric and arithmetic means of runtimes (relative to the highest entry) and of the number of instances.]
10.4 Visualization Suite
Visualizations of algorithmic behavior can yield understanding of and intuition for inter-
esting parts of a solving process. With GCG 3.5, we include a visualization suite that
offers different visualization scripts to show processes and results related to, among
others, detection, branching, or pricing. These scripts are written in Python 3, are included
in the folder stats, and use the .out, .res, and .vbc files generated when executing
make test STATISTICS=true (possible additional requirements are given in the docu-
mentation). Furthermore, the suite offers two additional ways of accessing the
visualization scripts:
1. Reporting functionality: With two different scripts, callable via make visu, users can
easily generate reports similar to the decomposition report already available
in GCG 3.0, which offers an overview of all decompositions that GCG found during
its detection process. The generated documents include all visualizations offered by
the suite, along with descriptions in the captions. While the testset report
shows information about a single run on one selected testset, the comparison report
compares two or more runs. Examples of both reports can be found in the
GCG website documentation, see Section 10.5.
2. Jupyter notebook: Since the scripts themselves already require a working installation
of Python 3, we added a visualization notebook with which one can read data
(sample data is provided in the GCG website documentation), clean and filter it inter-
actively, and visualize the results afterwards. The scripts of the visualization suite
are imported, and the returned plots can be shown, exported, and even further edited.
Just as GCG should facilitate experimenting with a decomposition approach with-
out having to implement it, the visualization suite should facilitate producing and pre-
senting computational results and algorithmic behavior. This, too, is an ongoing long-
term effort.
10.5 Documentation
The online documentation of GCG was lagging behind the progress made with the code
itself. As part of this release, we offer a website documentation targeted at user groups.
It enables users to familiarize themselves with GCG by means of very accessible
feature descriptions of functionality such as the explore menu or the visualization suite,
and by a set of use cases to follow and reproduce. For developers, we now include a
guide explaining the peculiarities of the interplay between GCG and SCIP ("Getting
Started: Developer's Edition"). Within the "Developer's Guide", descriptions of existing
code and algorithmics such as detection, branching, and pricing allow developers to
familiarize themselves with them, if required. Updates to the "How to use" (for instance,
conducting experiments) and "How to add" (for instance, adding branching rules)
sections complete the documentation.
Figure 7: Bubble plot visualizing how the pricing problems performed during
GCG's branch-and-price process. This visualization was automatically gener-
ated using the new comparison report functionality.
11 SCIP-SDP
SCIP-SDP is a framework for solving mixed-integer semidefinite programs of the fol-
lowing form:

    inf   b⊤y
    s.t.  ∑_{k=1}^{m} A_k y_k − A_0 ⪰ 0,
          ℓ_i ≤ y_i ≤ u_i   for all i ∈ [m],          (37)
          y_i ∈ Z           for all i ∈ I,

with symmetric matrices A_k ∈ R^{n×n} for k ∈ {0, …, m}, b ∈ R^m, ℓ_i ∈ R ∪ {−∞},
and u_i ∈ R ∪ {∞} for all i ∈ [m] := {1, …, m}. The set of indices of integer variables is
given by I ⊆ [m], and M ⪰ 0 denotes that the matrix M is positive semidefinite.
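The semidefinite constraint of (37) can be checked for a fixed y by an eigenvalue computation. The following sketch with a made-up 2×2 instance only illustrates the problem form; it is not part of SCIP-SDP:

```python
import numpy as np

def sdp_feasible(A, A0, y, tol=1e-9):
    """Check the constraint of (37): sum_k A_k y_k - A_0 is positive
    semidefinite, i.e., its smallest eigenvalue is (numerically) >= 0."""
    Z = sum(Ak * yk for Ak, yk in zip(A, y)) - A0
    return float(np.linalg.eigvalsh(Z)[0]) >= -tol  # eigvalsh: ascending

# Made-up instance with m = 1 variable:
A = [np.array([[2.0, 0.0], [0.0, 1.0]])]
A0 = np.array([[1.0, 0.0], [0.0, 0.0]])
feas = sdp_feasible(A, A0, [1.0])    # Z = identity, feasible
infeas = sdp_feasible(A, A0, [0.0])  # Z = -A0 has eigenvalue -1
```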
SCIP-SDP was initiated by Sonja Mars and Lars Schewe, see Mars [73], and then
continued by Gally et al. [31] and Gally [30]. It features interfaces to the SDP-solvers
DSDP, Mosek, and SDPA. In the following, we briefly report on the changes since the
last version, 3.2.0.
SCIP-SDP 4.0 contains about 50 000 lines of C code, most of which have been touched
since the last version. In particular, the interface to the SDP-solvers has
been completely revised. One benefit is that the memory footprint of SCIP-SDP is
now smaller for large instances. Moreover, many bugs have been fixed.
Two important parameter changes that impact performance are:
− By default, the number of used threads is 1 (it was previously set to "automatic").
This change speeds up the solution process by about 40 % for most small to medium
sized SDPs.
− The feasibility and optimality tolerances have been set to 10⁻⁵. The exception is
Mosek, for which it is set to 10⁻⁶, because this leads to more reliable results.
Further changes in a nutshell are the following:
− If the SDP-relaxation only has a single variable, it is solved using a semismooth
Newton method. This slightly speeds up solution times and significantly decreases
the times for heuristics. In particular, this holds for rounding heuristics on instances
in which all integer variables are fixed and only a single continuous variable remains. This
continuous variable is often used for expressing the objective function, for example
in cardinality least squares problems.
− The LP-rows that are added to the LP-relaxation are strengthened using standard
LP-preprocessing routines (coefficient tightening).
− A new heuristic, heur_fracround, has been added, which iteratively rounds integer
variables based on their fractional values in the last SDP-relaxation. In between, it
performs propagation, and it solves a final SDP if unfixed continuous variables remain.
This heuristic helps to significantly improve the overall running times. Propagation is
now also used by the heuristic heur_sdprand to improve its success rate on instances
with additional linear constraints. Furthermore, both heuristics correct nearly
integral values of integer variables in order to avoid small rounding errors, which
might add up to significant amounts.
− Several new presolving techniques have been introduced, which are discussed and
evaluated in detail by Matter and Pfetsch [74]. This includes two propagation meth-
ods to fix variables based on 2 × 2-minors and the upper bounds of other variables.
− SCIP-SDP also allows one to use LP-solving instead of SDP-relaxations via the pa-
rameter misc/solvesdps. It then generates so-called eigenvector cuts. The behavior
of these cuts has been changed as follows: one can now add eigenvector cuts for all
negative eigenvalues of the current infeasible relaxation; moreover, SDP-relaxations
can now be solved during enforcing, that is, after all integer variables have taken integral
values; furthermore, the cuts can be sparsified.
− The display of SCIP-SDP now changes depending on whether SDPs or LPs are
solved for the relaxations. Moreover, the default settings are redefined for solving
SDPs.
− One can also generate a second-order cone relaxation, but so far this has not shown
a run time improvement.
− The readers for the SDPA and CBF formats have been completely revised (and
rewritten for SDPA). They are now much faster and produce more warnings if errors
occur.
− SCIP-SDP can also handle rank-1 constraints, that is, the requirement that the
resulting matrix has rank 1. This is achieved by adding quadratic constraints for
2 × 2-minors. Rank-1 constraints regularly appear in the literature, but are usually
very hard to solve. The handling of these constraints has been revised.
− The locking information (capturing whether the matrices Ak are positive/negative
semidefinite) is now copied to sub-SCIPs.
− The statistics for solving SDP-relaxations have been extended and now report more
details.
− There is a new file scipsdpdef.h that contains defines for the SCIP-SDP version.
This enables code to depend on different SCIP-SDP versions.
− It is now possible to add SDP-constraints within the solving process.
− SCIP-SDP can now run concurrently, for example, by writing concurrentopt in
the command line if SCIP and SCIP-SDP are compiled using the TPI.
− The updated Matlab interface presented in Section 7 also allows one to use SCIP-SDP.
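The eigenvector cuts mentioned above admit a compact sketch: if the matrix Z(ŷ) = ∑_k A_k ŷ_k − A_0 of an LP solution ŷ has a negative eigenvalue with eigenvector v, then v⊤Z(y)v ≥ 0 is a linear inequality in y that is valid for (37) but violated by ŷ. The following generic numpy illustration is an assumption-laden sketch, not SCIP-SDP's implementation:

```python
import numpy as np

def eigenvector_cuts(A, A0, y_hat, tol=1e-9):
    """Return one linear cut (coeffs, rhs), meaning coeffs @ y >= rhs,
    per negative eigenvalue of Z(y_hat) = sum_k A_k y_hat_k - A_0."""
    Z = sum(Ak * yk for Ak, yk in zip(A, y_hat)) - A0
    vals, vecs = np.linalg.eigh(Z)
    cuts = []
    for lam, v in zip(vals, vecs.T):
        if lam < -tol:  # v @ Z @ v = lam < 0, so y_hat violates the cut
            cuts.append((np.array([v @ Ak @ v for Ak in A]), v @ A0 @ v))
    return cuts

# Made-up data: Z(y_hat) = I - diag(2, 0) = diag(-1, 1) is infeasible.
A = [np.eye(2)]
A0 = np.diag([2.0, 0.0])
cuts = eigenvector_cuts(A, A0, [1.0])
```

Each such cut is linear in y, so it can be added to an LP-relaxation; collecting one cut per negative eigenvalue corresponds to the "all negative eigenvalues" option described above.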
Before we present some computational results, let us add some words of caution.
Although SCIP-SDP is numerically quite robust, accurately solving SDPs is more de-
manding than solving LPs. This can lead to wrong results on some instances¹¹, and the
results often depend on the chosen tolerances. Technical reasons are that the SDPs are
solved using interior point solvers, which produce solutions with more "numerical noise"
(since they do not have nonbasic variables); moreover, the solvers use relative tolerances,
while SCIP-SDP uses absolute tolerances; finally, for Mosek, we use a slightly tighter
tolerance than in SCIP-SDP.
Table 8 shows a comparison between SCIP-SDP 3.2 and 4.0 on the same testset
as used by Gally et al. [31], which consists of 194 instances. Reported are the number
of optimally solved instances as well as the shifted geometric means of the number of
processed nodes and of the CPU time in seconds. We use Mosek 9.2.40 for solving the
continuous SDP-relaxations. The tests were performed on a Linux cluster with 3.5 GHz
Intel Xeon E5-1620 quad-core CPUs with 32 GB main memory and 10 MB cache.
All computations were run single-threaded and with a time limit of one hour.
As can be seen from the results, SCIP-SDP 4.0 is significantly faster than SCIP-
SDP 3.2, but we recall that we have relaxed the tolerances (see above). Nevertheless,
the conclusion is that SCIP-SDP 4.0 has significantly improved since the last version.
Table 8: Performance comparison of SCIP-SDP 4.0 vs. SCIP-SDP 3.2
12 SCIP-Jack
Given an undirected, connected graph G = (V, E), edge costs c : E → Q≥0, and a set
T ⊆ V of terminals, the Steiner tree problem in graphs (SPG) asks for a tree
S = (V(S), E(S)) ⊆ G such that T ⊆ V(S) holds and ∑_{e∈E(S)} c(e) is minimized. The
SPG is a fundamental NP-hard problem [53] and one of the most studied problems
in combinatorial optimization. Moreover, many related problems have been extensively
described in the literature and can be found in a wide range of practical applications [63].
Since version 3.2, the SCIP Optimization Suite has contained SCIP-Jack, an exact
solver not only for the SPG but also for 11 related problems. This release of the SCIP
Optimization Suite contains the new SCIP-Jack 2.0¹², which can handle two additional
problem classes: the maximum-weight connected subgraph problem with budgets and
the partial-terminal Steiner tree problem. Furthermore, SCIP-Jack 2.0 comes with
major improvements on almost all problem classes it can handle. Most importantly, the
latest SCIP-Jack outperforms the well-known SPG solver by Polzin and Vahdati [83,
110] on almost all nontrivial benchmark testsets from the literature; see the preprint [87]
for more details. Notably, the solver of Polzin and Vahdati [83, 110] had remained out
of reach of any other SPG solver for almost 20 years.
The large number of newly implemented algorithms (and data structures) also results
in an increase of the SCIP-Jack code base by a factor of almost three, to roughly 110 000
lines of code. Additionally, the implementation of many existing methods has been
improved. In the following, we list several of the most important new features.
For SPG, a central new feature is a distance concept that provably dominates the
well-known bottleneck Steiner distance from Duin and Volgenant [25], see Rehfeldt and
Koch [89] for details. This distance concept is used in several (new) reduction methods
implemented in SCIP-Jack 2.0. Also, the new SCIP-Jack includes a full-fledged
implementation of so-called extended reduction techniques. These methods are provably
stronger than the state-of-the-art implementation [84], and also yield strong practical
results, see Rehfeldt and Koch [87] for details. Furthermore, decomposition methods for
the SPG have been implemented, for example to exploit the existence of biconnected
components in the underlying graph. Also, dynamic programming algorithms have been
implemented to efficiently solve subproblems with special structures that sometimes arise
after decomposition.
The improvements for SPG also have an immediate impact on problems that are
transformed to SPG within SCIP-Jack, such as the group Steiner tree problem. Even
for the Euclidean Steiner tree problem, large improvements are possible (the SPG can be
used for full Steiner tree concatenation after the discretization of the problem): SCIP-
Jack 2.0 is able to solve 19 Euclidean Steiner tree problems with up to 100 000 terminals
for the first time to optimality, see Rehfeldt and Koch [87]. Notably, the state-of-the-art
Euclidean Steiner tree solver GeoSteiner 5.1 [48] could not solve any of these instances
even after one week of computation. In contrast, SCIP-Jack 2.0 solves all of them
within 12 minutes, some even within two minutes.
Considerable problem-specific improvements have also been made for the prize-
collecting Steiner tree problem and (to a lesser extent) for the maximum-weight connected
subgraph problem. For details on the improvements for the prize-collecting Steiner tree
problem, see Rehfeldt and Koch [88]; for the maximum-weight connected subgraph prob-
lem, see Rehfeldt, Franz, and Koch [90]. The improvements encompass primal and dual
heuristics as well as reduction techniques. As a result, SCIP-Jack 2.0 can solve many
previously unsolved benchmark instances from both problem classes to optimality—
12 see also http://scipjack.zib.de
the largest of these instances have up to 10 million edges. Additionally, for the prize-
collecting Steiner tree problem SCIP-Jack 2.0 can solve most benchmark sets from the
literature more than two times faster than its predecessor with respect to the shifted
geometric mean (with a shift of 1 second).
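For reference, the shifted geometric mean used in such comparisons is (∏ᵢ(tᵢ + s))^(1/n) − s with shift s = 1 second; a minimal sketch with made-up runtimes:

```python
import math

def shifted_geomean(times, shift=1.0):
    """Shifted geometric mean (prod_i (t_i + shift))**(1/n) - shift;
    the shift damps the influence of very small runtimes."""
    n = len(times)
    return math.exp(sum(math.log(t + shift) for t in times) / n) - shift

# Made-up runtimes: the predecessor-to-new ratio of shifted geometric
# means is the "more than two times faster" measure used in the text.
speedup = shifted_geomean([3.0, 7.0, 99.0]) / shifted_geomean([1.0, 3.0, 24.0])
```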
13 Final Remarks
The SCIP Optimization Suite 8.0 release provides new functionality and improved per-
formance and reliability. In SCIP, new symmetry handling features were added, includ-
ing the handling of symmetries of general integer and continuous variables, improved
detection routines, and a strategy for selecting symmetry handling routines. Mixing
cutting planes were implemented, which considerably improve the performance on
chance-constrained programs. A decomposition primal heuristic was updated to further
improve the found solutions, and a new decomposition primal heuristic was added. A
new cut strengthening procedure was added to the Benders decomposition framework,
and a new type of plugin for cut selection was introduced.
With this release also comes a thorough revision of how nonlinear constraints are
handled in SCIP, in particular how extended formulations are created. In the new
version, the original formulation is preserved and the extended formulation is used for
relaxations only, which drastically improves the reliability of solutions. High- and low-
level nonlinear structures are now handled by plugins of different types in order to
avoid expression type ambiguity. Simultaneously, a number of new MINLP features
were introduced such as various new cutting planes, symmetry detection for nonlinear
constraints, support for sine and cosine functions, and others.
Regarding usability, the Julia package SCIP.jl was improved in several aspects and
a new MATLAB interface to SCIP was implemented. UG was generalized to enable
the parallelization of all solvers via a unified framework, without the need to modify the
framework for each solver; its internal structure has been completely reworked. The new
version of GCG includes new algorithmic features and substantial ecosystem improve-
ments, such as extended interfaces, improved documentation, and an added utility for
running and analyzing computational experiments. PaPILO features a new postsolving
functionality for dual solutions when applied to pure LPs. The handling of relaxations in
SCIP-SDP was revised, and new heuristics and presolving methods were added. The
new version of SCIP-Jack can handle two additional problem classes and comes with
major performance improvements.
These developments yield a considerable performance improvement on nonconvex
MINLP instances and reduce the overall number of numerical failures on the MINLP
testset, although a slowdown is observed on convex instances due to a lack of recognition
of a structure present in one instance group. Nonetheless, we observe an overall runtime
reduction of about 21 %. A substantial speed-up is observed on MILP instances, with an
about 50 % shorter runtime on the most challenging instances, as well as on special problem
classes such as SDPs and Steiner tree problems and their variants.
Acknowledgements
The authors want to thank all previous developers of and contributors to the SCIP Op-
timization Suite and all users who reported bugs and often also helped to reproduce
and fix them. In particular, thanks go to Suresh Bolusani, Didier Chételat, Gre-
gor Hendel, Andreas Schmitt, Helena Völker, Robert Schwarz, Matthias Miltenberger,
Matthias Walter, and Antoine Prouvost and the Ecole team. The Matlab-SCIP(-SDP)
interface was set up with great help from Nicolai Simon.
Contributions of the Authors
The material presented in the article is highly related to code and software. In the
following we try to make the corresponding contributions of the authors and possible
contact points more transparent.
JvD, CH, and MP are responsible for the changes of the symmetry handling rou-
tines (Section 3.2). The extension of symmetry handling to nonlinear constraints (Sec-
tion 3.2.2) is by FW. WC and MP implemented the mixing cut separator (Section 3.3).
The update of PADM and the new DPS heuristic (Section 3.4) are due to KH and DW. SJM
is responsible for the updates to the Benders’ decomposition framework (Section 3.5).
MT and FeS implemented the new cut selector plugin (Section 3.6). Various technical
improvements (Section 3.7) were added by MP and SV. The new expressions framework
(Section 4.1) is by BM, FeS, FW, KB, and SV. The rewritten handler for nonlinear con-
straints (Section 4.2) is by BM, FeS, KB, and SV. The nonlinear handler for quadratic
expressions (Section 4.3) and the separator sepa_interminor (Section 4.11) are by AC
and FeS. The nonlinear handler for second-order cones (Section 4.4) is by BM, FeS,
and FW. The nonlinear handler for bilinear expressions (Section 4.5) and the separator
sepa_minor (Section 4.10) are by BM. The nonlinear handler for convex and concave
expressions (Section 4.6) is by BM, KB, and SV. The nonlinear handler for quotients
(Section 4.7) is by BM and FW. The nonlinear handler for perspective reformulations
(Section 4.8) is by KB. The separator for RLT cuts (Section 4.9) is by FW and KB.
The separator for principal minors of X xx⊤ (Section 4.10) is by BM and FW. The
separator for intersection cuts on the rank-1 constraint for a matrix (Section 4.11) is
by AC and FeS. The revised primal heuristic subnlp (Section 4.12) and the updates to
NLP, NLPI, and AD interfaces (Section 4.13) are by SV.
The changes to SoPlex (Section 5) are due to LE and AH. AH and AG are responsi-
ble for the new dual postsolving functionality in PaPILO (Section 6). The new AMPL
interface of SCIP (Section 7.1) was implemented by SV. The Julia interface SCIP.jl
(Section 7.2) was extended and updated by MB, Erik Tadewaldt, Robert Schwarz, and
Yupei Qi. The SoPlex C interface (Section 7.3) was developed by AC, LE, and MB.
Nicolai Simon and MP updated the Matlab-interface (Section 7.4). The work on ZIMPL
(Section 8) was done by TK. The updates to the UG framework (Section 9) are by YS.
Concerning GCG (Section 10), EM refactored the detector loop; the website documen-
tation and visualization suite is due to TD; OG created the strong branching code; and
SS implemented PyGCGOpt. FM and MP implemented the changes in SCIP-SDP
(Section 11). DR is responsible for SCIP-Jack (Section 12).
The work by FrS on the continuous integration system, regular test and benchmark
runs, binary distributions, websites, and much more has been invaluable for all develop-
ments.
References
[1] A. Abdi and R. Fukasawa. On the mixing set with a knapsack constraint. Mathematical
Programming, 157:191–217, 2016. doi:10.1007/s10107-016-0979-5.
[2] T. Achterberg. Constraint Integer Programming. PhD thesis, Technische Universität
Berlin, 2007.
[3] T. Achterberg. SCIP: Solving Constraint Integer Programs. Mathematical Programming
Computation, 1(1):1–41, 2009. doi:10.1007/s12532-008-0001-1.
[4] T. Achterberg, T. Koch, and A. Martin. Branching rules revisited. Operations Research
Letters, 33(1):42–54, 2005. doi:10.1016/j.orl.2004.04.002.
[5] T. Achterberg, R. E. Bixby, Z. Gu, E. Rothberg, and D. Weninger. Presolve reductions
in mixed integer programming. INFORMS Journal on Computing, 32(2):473–506, 2020.
doi:10.1287/ijoc.2018.0857.
[6] T. Achterberg, K. Bestuzheva, and A. Gleixner. Efficient separation of RLT cuts for
implicit and explicit bilinear products, in preparation.
[7] W. P. Adams and H. D. Sherali. A tight linearization and an algorithm for zero-
one quadratic programming problems. Management Science, 32(10):1274–1290, 1986.
doi:10.1287/mnsc.32.10.1274.
[8] W. P. Adams and H. D. Sherali. Linearization strategies for a class of zero-one
mixed integer programming problems. Operations Research, 38(2):217–226, 1990.
doi:10.1287/opre.38.2.217.
[9] W. P. Adams and H. D. Sherali. Mixed-integer bilinear programming problems. Mathe-
matical Programming, 59(1):279–305, 1993. doi:10.1007/BF01581249.
[10] A. Atamtürk, G. L. Nemhauser, and M. W. Savelsbergh. The mixed vertex packing
problem. Mathematical Programming, 89:35–53, 2000. doi:10.1007/s101070000154.
[11] E. Balas. Intersection cuts – a new type of cutting planes for integer programming.
Operations Research, 19(1):19–39, 1971. doi:10.1287/opre.19.1.19.
[12] X. Bao, N. V. Sahinidis, and M. Tawarmalani. Multiterm polyhedral relaxations for
nonconvex, quadratically-constrained quadratic programs. Optimization Methods and
Software, 24(4-5):485–504, 2009. doi:10.1080/10556780902883184.
[13] P. Belotti, J. Lee, L. Liberti, F. Margot, and A. Wächter. Branching and bounds tight-
ening techniques for non-convex MINLP. Optimization Methods and Software, 24(4-5):
597–634, 2009. doi:10.1080/10556780903087124.
[14] P. Belotti, C. Kirches, S. Leyffer, J. Linderoth, J. Luedtke, and A. Mahajan. Mixed-integer
nonlinear optimization. Acta Numerica, 22:1–131, 2013. doi:10.1017/S0962492913000032.
[15] P. Bendotti, P. Fouilhoux, and C. Rottner. Orbitopal fixing for the full (sub-)orbitope and
application to the unit commitment problem. Mathematical Programming, 186:337–372,
2021. doi:10.1007/s10107-019-01457-1.
[16] T. Berthold and J. Witzig. Conflict analysis for MINLP. INFORMS Journal on Com-
puting, 33(2):421–435, 2021. doi:10.1287/ijoc.2020.1050.
[17] T. Berthold, S. Heinz, and M. E. Pfetsch. Nonlinear pseudo-boolean optimization: re-
laxation or propagation? In O. Kullmann, editor, Theory and Applications of Satis-
fiability Testing – SAT 2009, number 5584 in LNCS, pages 441–446. Springer, 2009.
doi:10.1007/978-3-642-02777-2_40.
[18] K. Bestuzheva, A. Gleixner, and S. Vigerske. A computational study of perspective cuts.
ZIB Report 21-07, Zuse Institute Berlin, 2021.
[19] R. Borndörfer, C. E. Ferreira, and A. Martin. Decomposing matrices into blocks. SIAM
Journal on Optimization, 9(1):236–269, 1998. doi:10.1137/S1052623497318682.
[20] A. Ceselli and G. Righini. A branch-and-price algorithm for the capacitated p-median
problem. Networks, 45(3):125–142, 2005. doi:10.1002/net.20059.
[21] J.-S. Chen and C.-H. Huang. A note on convexity of two signomial functions. Journal of
Nonlinear and Convex Analysis, 10(3):429–435, 2009.
[22] A. Chmiela, G. Muñoz, and F. Serrano. On the implementation and strengthening of inter-
section cuts for QCQPs. In M. Singh and D. P. Williamson, editors, Integer Programming
and Combinatorial Optimization, pages 134–147. Springer, 2021. doi:10.1007/978-3-030-
73879-2_10.
[23] S. S. Dey and M. Molinaro. Theoretical challenges towards cutting-plane selection. Math-
ematical Programming, 170(1):237–266, 2018. doi:10.1007/s10107-018-1302-4.
[24] F. Domes and A. Neumaier. Constraint propagation on quadratic constraints. Constraints,
15(3):404–429, 2010. doi:10.1007/s10601-009-9076-1.
[25] C. Duin and A. Volgenant. An edge elimination test for the Steiner problem in graphs.
Operations Research Letters, 8(2):79–83, 1989. doi:10.1016/0167-6377(89)90005-9.
[26] M. A. Duran and I. E. Grossmann. An outer-approximation algorithm for a class of
mixed-integer nonlinear programs. Mathematical Programming, 36(3):307–339, 1986.
doi:10.1007/BF02592064.
[27] M. Fischetti, M. Monaci, and D. Salvagnin. Selfsplit parallelization for mixed-
integer linear programming. Computers and Operations Research, 93:101–112, 2018.
doi:10.1016/j.cor.2018.01.011.
[28] A. Frangioni and C. Gentile. Perspective cuts for a class of convex 0–1 mixed integer
programs. Mathematical Programming, 106(2):225–236, 2006. doi:10.1007/s10107-005-
0594-3.
[29] K. Fujii, N. Ito, S. Kim, M. Kojima, Y. Shinano, and K.-C. Toh. Solving challenging
large scale QAPs. ZIB-Report 21-02, Zuse Institute Berlin, 2021.
[30] T. Gally. Computational Mixed-Integer Semidefinite Programming. PhD thesis, TU
Darmstadt, 2019.
[31] T. Gally, M. E. Pfetsch, and S. Ulbrich. A framework for solving mixed-integer
semidefinite programs. Optimization Methods and Software, 33(3):594–632, 2018.
doi:10.1080/10556788.2017.1322081.
[32] G. Gamrath and M. E. Lübbecke. Experiments with a generic Dantzig-Wolfe decompo-
sition for integer programs. In P. Festa, editor, Experimental Algorithms, volume 6049
of Lecture Notes in Computer Science, pages 239–252. Springer Berlin Heidelberg, 2010.
doi:10.1007/978-3-642-13193-6_21.
[33] G. Gamrath, T. Fischer, T. Gally, A. M. Gleixner, G. Hendel, T. Koch, S. J. Ma-
her, M. Miltenberger, B. Müller, M. E. Pfetsch, C. Puchert, D. Rehfeldt, S. Schenker,
R. Schwarz, F. Serrano, Y. Shinano, S. Vigerske, D. Weninger, M. Winkler, J. T. Witt,
and J. Witzig. The SCIP Optimization Suite 3.2. Technical report, Optimization Online,
2016. URL http://www.optimization-online.org/DB_HTML/2016/03/5360.html.
[34] G. Gamrath, T. Koch, S. J. Maher, D. Rehfeldt, and Y. Shinano. SCIP-Jack—a solver
for STP and variants with parallelization extensions. Mathematical Programming Com-
putation, 9(2):231–296, 2017. doi:10.1007/s12532-016-0114-x.
[35] G. Gamrath, D. Anderson, K. Bestuzheva, W.-K. Chen, L. Eifler, M. Gasse, P. Ge-
mander, A. Gleixner, L. Gottwald, K. Halbig, G. Hendel, C. Hojny, T. Koch, P. L.
Bodic, S. J. Maher, F. Matter, M. Miltenberger, E. Mühmer, B. Müller, M. E. Pfetsch,
F. Schlösser, F. Serrano, Y. Shinano, C. Tawfik, S. Vigerske, F. Wegscheider, D. Weninger,
and J. Witzig. The SCIP Optimization Suite 7.0. Technical report, Optimization Online,
2020. URL http://www.optimization-online.org/DB_HTML/2020/03/7705.html.
[36] O. Gaul. Hierarchical strong branching and other strong branching-based branching candi-
date selection heuristics in branch-and-price. Master’s thesis, RWTH Aachen University,
2021.
[37] B. Geißler, A. Morsi, L. Schewe, and M. Schmidt. Penalty Alternating Direction Methods
for Mixed-Integer Optimization: A New View on Feasibility Pumps. SIAM Journal on
Optimization, 27(3):1611–1636, 2017. doi:10.1137/16M1069687.
[38] A. Gleixner, M. Bastubbe, L. Eifler, T. Gally, G. Gamrath, R. L. Gottwald, G. Hendel,
C. Hojny, T. Koch, M. E. Lübbecke, S. J. Maher, M. Miltenberger, B. Müller, M. E.
Pfetsch, C. Puchert, D. Rehfeldt, F. Schlösser, C. Schubert, F. Serrano, Y. Shinano,
J. M. Viernickel, M. Walter, F. Wegscheider, J. T. Witt, and J. Witzig. The SCIP
Optimization Suite 6.0. Technical report, Optimization Online, 2018. URL http://www.
optimization-online.org/DB_HTML/2018/07/6692.html.
[39] A. Gleixner, G. Hendel, G. Gamrath, T. Achterberg, M. Bastubbe, T. Berthold,
P. Christophel, K. Jarck, T. Koch, J. Linderoth, M. Lübbecke, H. Mittelmann, D. Ozyurt,
T. Ralphs, D. Salvagnin, and Y. Shinano. MIPLIB 2017: Data-driven compilation of the
6th Mixed-Integer Programming Library. Mathematical Programming Computation, 13:
443–490, 2021. doi:10.1007/s12532-020-00194-3.
[40] F. Glover. Convexity cuts and cut search. Operations Research, 21(1):123–134, 1973.
doi:10.1287/opre.21.1.123.
[41] F. Glover. Polyhedral convexity cuts and negative edge extensions. Zeitschrift für Oper-
ations Research, 18:181–186, 1974. doi:10.1007/BF02026599.
[42] F. Glover, G. Kochenberger, and Y. Du. Quantum bridge analytics I: a tutorial on
formulating and using QUBO models. 4OR, 17(4):335–371, 2019. doi:10.1007/s10288-
019-00424-y.
[43] O. Günlük and Y. Pochet. Mixing mixed-integer inequalities. Mathematical Programming,
90:429–457, 2001. doi:10.1007/PL00011430.
[44] P. Hansen, B. Jaumard, M. Ruiz, and J. Xiong. Global minimization of indefinite
quadratic functions subject to box constraints. Naval Research Logistics (NRL), 40(3):
373–392, 1993. doi:10.1002/1520-6750(199304)40:3<373::AID-NAV3220400307>3.0.CO;2-A.
[45] D. Henrich. Initialization of parallel branch-and-bound algorithms. In H. Kitano, V. Ku-
mar, and C. B. Suttner, editors, Parallel Processing for Artificial Intelligence, volume 15 of
Machine Intelligence and Pattern Recognition, chapter 11, pages 131–144. North-Holland,
1994. doi:10.1016/B978-0-444-81837-9.50015-4.
[46] C. Hojny. Packing, partitioning, and covering symresacks. Discrete Applied Mathematics,
283:689–717, 2020. doi:10.1016/j.dam.2020.03.002.
[47] C. Hojny and M. E. Pfetsch. Polytopes associated with symmetry handling. Mathematical
Programming, 175(1):197–240, 2019. doi:10.1007/s10107-018-1239-7.
[48] D. Juhl, D. M. Warme, P. Winter, and M. Zachariasen. The GeoSteiner software package
for computing Steiner trees in the plane: an updated computational study. Mathematical
Programming Computation, 10(4):487–532, 2018. doi:10.1007/s12532-018-0135-8.
[49] T. Junttila and P. Kaski. bliss: A tool for computing automorphism groups and canonical
labelings of graphs. http://www.tcs.hut.fi/Software/bliss/, 2012.
[50] V. Kaibel and A. Loos. Finding descriptions of polytopes via extended formulations and
liftings. In A. R. Mahjoub, editor, Progress in Combinatorial Optimization. Wiley, 2011.
[51] V. Kaibel and M. E. Pfetsch. Packing and partitioning orbitopes. Mathematical Program-
ming, 114(1):1–36, 2008. doi:10.1007/s10107-006-0081-5.
[52] V. Kaibel, M. Peinhardt, and M. E. Pfetsch. Orbitopal fixing. Discrete Optimization, 8
(4):595–610, 2011. doi:10.1016/j.disopt.2011.07.001.
[53] R. Karp. Reducibility among combinatorial problems. In R. Miller and J. Thatcher,
editors, Complexity of Computer Computations, pages 85–103. Plenum Press, 1972.
doi:10.1007/978-1-4684-2001-2_9.
[54] J. E. Kelley. The cutting-plane method for solving convex programs. Journal of the Society
for Industrial and Applied Mathematics, 8(4):703–712, 1960. doi:10.1137/0108053.
[55] A. Khajavirad. Packing circles in a square: a theoretical comparison of various convexification
techniques. Technical report, Optimization Online, 2017. URL http://www.
optimization-online.org/DB_HTML/2017/03/5911.html.
[56] T. Koch. Rapid Mathematical Prototyping. PhD thesis, Technische Universität Berlin,
2004.
[57] S. Küçükyavuz. On mixing sets arising in chance-constrained programming. Mathematical
Programming, 132:31–56, 2012. doi:10.1007/s10107-010-0385-3.
[58] H. W. Kuhn and A. W. Tucker. Nonlinear programming. In Proceedings of the Second
Berkeley Symposium on Mathematical Statistics and Probability, 1950, pages 481–492,
Berkeley and Los Angeles, 1951. University of California Press.
[59] B. Legat, O. Dowson, J. Garcia, and M. Lubin. MathOptInterface: a data structure
for mathematical optimization problems. INFORMS Journal on Computing, in press.
doi:10.1287/ijoc.2021.1067.
[60] L. Liberti. Reformulations in mathematical programming: automatic symmetry detection
and exploitation. Mathematical Programming, 131(1):273–304, 2012. doi:10.1007/s10107-
010-0351-0.
[61] L. Liberti and J. Ostrowski. Stabilizer-based symmetry breaking constraints for mathe-
matical programs. Journal of Global Optimization, 60:183–194, 2014. doi:10.1007/s10898-
013-0106-6.
[62] LINDO. API Users Manual, 2003. URL http://www.lindo.com.
[63] I. Ljubić. Solving Steiner trees: Recent advances, challenges, and perspectives. Networks,
77(2):177–204, 2021. doi:10.1002/net.22005.
[64] J. Luedtke, S. Ahmed, and G. L. Nemhauser. An integer programming approach for
linear programs with probabilistic constraints. Mathematical Programming, 122:247–272,
2010. doi:10.1007/s10107-008-0247-4.
[65] A. Mahajan and T. Munson. Exploiting second-order cone structure for global optimiza-
tion. Technical Report ANL/MCS-P1801-1010, Argonne National Laboratory, 2010. URL
http://www.optimization-online.org/DB_HTML/2010/10/2780.html.
[66] S. Maher, M. Miltenberger, J. P. Pedroso, D. Rehfeldt, R. Schwarz, and F. Serrano.
PySCIPOpt: Mathematical programming in Python with the SCIP optimization suite.
In Mathematical Software – ICMS 2016, pages 301–307. Springer International Publishing,
2016. doi:10.1007/978-3-319-42432-3_37.
[67] S. J. Maher, T. Fischer, T. Gally, G. Gamrath, A. Gleixner, R. L. Gottwald, G. Hen-
del, T. Koch, M. E. Lübbecke, M. Miltenberger, B. Müller, M. E. Pfetsch, C. Puchert,
D. Rehfeldt, S. Schenker, R. Schwarz, F. Serrano, Y. Shinano, D. Weninger, J. T. Witt,
and J. Witzig. The SCIP Optimization Suite 4.0. Technical report, Optimization Online,
2017. URL http://www.optimization-online.org/DB_HTML/2017/03/5895.html.
[68] C. D. Maranas and C. A. Floudas. Finding all solutions of nonlinearly con-
strained systems of equations. Journal of Global Optimization, 7(2):143–182, 1995.
doi:10.1007/BF01097059.
[69] H. Marchand and L. A. Wolsey. Aggregation and mixed integer rounding to solve MIPs.
Operations Research, 49(3):363–371, 2001. doi:10.1287/opre.49.3.363.11211.
[70] F. Margot. Pruning by isomorphism in branch-and-cut. Mathematical Programming, 94
(1):71–90, 2002. doi:10.1007/s10107-002-0358-2.
[71] F. Margot. Exploiting orbits in symmetric ILP. Mathematical Programming, 98(1–3):
3–21, 2003. doi:10.1007/s10107-003-0394-6.
[72] F. Margot. Symmetry in integer linear programming. In M. Jünger, T. M. Liebling,
D. Naddef, G. L. Nemhauser, W. R. Pulleyblank, G. Reinelt, G. Rinaldi, and L. A. Wolsey,
editors, 50 Years of Integer Programming, pages 647–686. Springer, 2010. doi:10.1007/978-
3-540-68279-0_17.
[73] S. Mars. Mixed-Integer Semidefinite Programming with an Application to Truss Topology
Design. PhD thesis, FAU Erlangen-Nürnberg, 2013.
[74] F. Matter and M. E. Pfetsch. Presolving for mixed-integer semidefinite optimization.
Technical report, Optimization Online, 2021. URL http://www.optimization-online.
org/DB_HTML/2021/10/8614.html.
[75] G. P. McCormick. Computability of global solutions to factorable nonconvex programs:
Part I – convex underestimating problems. Mathematical Programming, 10(1):147–175,
1976. doi:10.1007/BF01580665.
[76] B. Müller, F. Serrano, and A. Gleixner. Using Two-Dimensional Projections for Stronger
Separation and Propagation of Bilinear Terms. SIAM Journal on Optimization, 30(2):
1339–1365, 2020. doi:10.1137/19m1249825.
[77] L.-M. Munguía, G. Oxberry, and D. Rajan. PIPS-SBB: A parallel distributed-memory
branch-and-bound algorithm for stochastic mixed-integer programs. In 2016 IEEE In-
ternational Parallel and Distributed Processing Symposium Workshops (IPDPSW), pages
730–739, 2016. doi:10.1109/IPDPSW.2016.159.
[78] L.-M. Munguía, G. Oxberry, D. Rajan, and Y. Shinano. Parallel PIPS-SBB: multi-level
parallelism for stochastic mixed-integer programs. Computational Optimization and Ap-
plications, 73(2):575–601, 2019. doi:10.1007/s10589-019-00074-0.
[79] G. Muñoz and F. Serrano. Maximal quadratic-free sets. In D. Bienstock and G. Zambelli,
editors, Integer Programming and Combinatorial Optimization, pages 307–321. Springer,
2020. doi:10.1007/978-3-030-45771-6_24.
[80] J. Ostrowski, J. Linderoth, F. Rossi, and S. Smriglio. Orbital branching. Mathematical
Programming, 126(1):147–178, 2011. doi:10.1007/s10107-009-0273-x.
[81] PaPILO: Parallel presolve for integer and linear optimization. https://www.github.
com/scipopt/papilo, 2021.
[82] D. Pecin, A. Pessoa, M. Poggi, and E. Uchoa. Improved branch-cut-and-price for capaci-
tated vehicle routing. In J. Lee and J. Vygen, editors, Integer Programming and Combi-
natorial Optimization, pages 393–403. Springer, 2014. doi:10.1007/978-3-319-07557-0_33.
[83] T. Polzin. Algorithms for the Steiner problem in networks. PhD thesis, Saarland Univer-
sity, 2003.
[84] T. Polzin and S. V. Daneshmand. Extending Reduction Techniques for the Steiner Tree
Problem, pages 795–807. Springer, 2002. doi:10.1007/3-540-45749-6_69.
[85] A. Qualizza, P. Belotti, and F. Margot. Linear programming relaxations of quadratically
constrained quadratic programs. In J. Lee and S. Leyffer, editors, Mixed Integer Nonlinear
Programming, pages 407–426. Springer, 2012. doi:10.1007/978-1-4614-1927-3_14.
[86] T. Ralphs, Y. Shinano, T. Berthold, and T. Koch. Parallel solvers for mixed integer
linear optimization. In Y. Hamadi and L. Sais, editors, Handbook of parallel constraint
reasoning, pages 283–336. Springer, 2018. doi:10.1007/978-3-319-63516-3_8.
[87] D. Rehfeldt and T. Koch. Implications, conflicts, and reductions for Steiner trees. ZIB-
Report 20-28, Zuse Institute Berlin, 2020.
[88] D. Rehfeldt and T. Koch. On the exact solution of prize-collecting Steiner tree problems.
INFORMS Journal on Computing, 2021. doi:10.1287/ijoc.2021.1087.
[89] D. Rehfeldt and T. Koch. Implications, conflicts, and reductions for Steiner trees. In
M. Singh and D. P. Williamson, editors, Proceedings: Integer Programming and Com-
binatorial Optimization – 22nd International Conference, IPCO 2021, volume 12707 of
LNCS, pages 473–487. Springer, 2021. doi:10.1007/978-3-030-73879-2_33.
[90] D. Rehfeldt, H. Franz, and T. Koch. Optimal connected subgraphs: Formulations and
algorithms. ZIB-Report 20-23, Zuse Institute Berlin, 2020.
[91] S. Røpke. Branching decisions in branch-and-cut-and-price algorithms for vehicle routing
problems. Presentation at the Column Generation workshop, 2012.
[92] D. M. Ryan and B. A. Foster. An integer programming approach to scheduling. In
A. Wren, editor, Computer Scheduling of Public Transport Urban Passenger Vehicle and
Crew Scheduling, pages 269–280. North Holland, Amsterdam, 1981.
[93] D. Salvagnin. A dominance procedure for integer programming. Master’s thesis, Università
degli studi di Padova, 2005.
[94] D. Salvagnin. Symmetry breaking inequalities from the Schreier-Sims table. In W.-J.
van Hoeve, editor, Integration of Constraint Programming, Artificial Intelligence, and
Operations Research, pages 521–529. Springer, 2018. doi:10.1007/978-3-319-93031-2_37.
[95] M. W. P. Savelsbergh. Preprocessing and probing techniques for mixed inte-
ger programming problems. ORSA Journal on Computing, 6(4):445–454, 1994.
doi:10.1287/ijoc.6.4.445.
[96] L. Schewe, M. Schmidt, and D. Weninger. A decomposition heuristic for mixed-
integer supply chain problems. Operations Research Letters, 48(3):225–232, 2020.
doi:10.1016/j.orl.2020.02.006.
[97] F. Serrano, R. Schwarz, and A. Gleixner. On the relation between the extended sup-
porting hyperplane algorithm and Kelley’s cutting plane algorithm. Journal of Global
Optimization, 78(1):161–179, 2020. doi:10.1007/s10898-020-00906-y.
[98] H. D. Sherali and C. H. Tuncbilek. A global optimization algorithm for polynomial
programming problems using a reformulation-linearization technique. Journal of Global
Optimization, 2(1):101–112, 1992. doi:10.1007/BF00121304.
[99] Y. Shinano. The Ubiquity Generator framework: 7 years of progress in parallelizing
branch-and-bound. In N. Kliewer, J. F. Ehmke, and R. Borndörfer, editors, Operations
Research Proceedings 2017, pages 143–149. Springer, 2018. doi:10.1007/978-3-319-89920-
6_20.
[100] Y. Shinano, T. Achterberg, T. Berthold, S. Heinz, and T. Koch. ParaSCIP: A parallel
extension of SCIP. In C. Bischof, H.-G. Hegering, W. E. Nagel, and G. Wittum, edi-
tors, Competence in High Performance Computing 2010, pages 135–148. Springer, 2012.
doi:10.1007/978-3-642-24025-6_12.
[101] Y. Shinano, T. Achterberg, T. Berthold, S. Heinz, T. Koch, and M. Winkler. Solving
open MIP instances with ParaSCIP on supercomputers using up to 80,000 cores. In
2016 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages
770–779, 2016. doi:10.1109/IPDPS.2016.56.
[102] Y. Shinano, T. Berthold, and S. Heinz. ParaXpress: an experimental extension of the
FICO Xpress-Optimizer to solve hard MIPs on supercomputers. Optimization Methods
and Software, 33(3):530–539, 2018. doi:10.1080/10556788.2018.1428602.
[103] Y. Shinano, S. Heinz, S. Vigerske, and M. Winkler. FiberSCIP: A shared mem-
ory parallelization of SCIP. INFORMS Journal on Computing, 30(1):11–30, 2018.
doi:10.1287/ijoc.2017.0762.
[104] E. Smith and C. Pantelides. A symbolic reformulation/spatial branch-and-bound algo-
rithm for the global optimisation of nonconvex MINLPs. Computers & Chemical Engi-
neering, 23(4-5):457–478, 1999. doi:10.1016/s0098-1354(98)00286-5.
[105] N. Tateiwa, Y. Shinano, S. Nakamura, A. Yoshida, S. Kaji, M. Yasuda, and K. Fujisawa.
Massive parallelization for finding shortest lattice vectors based on Ubiquity Generator
Framework. In SC20: International Conference for High Performance Computing, Net-
working, Storage and Analysis, pages 1–15, 2020. doi:10.1109/SC41405.2020.00064.
[106] N. Tateiwa, Y. Shinano, K. Yamamura, A. Yoshida, S. Kaji, M. Yasuda, and K. Fujisawa.
CMAP-LAP: Configurable massively parallel solver for lattice problems. ZIB-Report
21-16, Zuse Institute Berlin, 2021.
[107] N. Tateiwa, Y. Shinano, M. Yasuda, K. Yamamura, S. Kaji, and K. Fujisawa. Massively
parallel sharing lattice basis reduction. ZIB-Report 21-38, Zuse Institute Berlin, 2021.
[108] M. Tawarmalani and N. V. Sahinidis. A polyhedral branch-and-cut approach to global
optimization. Mathematical Programming, 103(2):225–249, 2005. doi:10.1007/s10107-005-
0581-8.
[109] H. Tuy. Concave programming with linear constraints. Doklady Akademii Nauk, 159(1):
32–35, 1964.
[110] S. Vahdati Daneshmand. Algorithmic approaches to the Steiner problem in networks. PhD
thesis, Universität Mannheim, 2004.
[111] F. Vanderbeck. Branching in branch-and-price: A generic scheme. Mathematical Pro-
gramming, 130(2):249–294, 2011. doi:10.1007/s10107-009-0334-1.
[112] J. P. Vielma, I. Dunning, J. Huchette, and M. Lubin. Extended formulations in mixed
integer conic quadratic programming. Mathematical Programming Computation, 9(3):
369–418, 2016. doi:10.1007/s12532-016-0113-y.
[113] S. Vigerske and A. Gleixner. SCIP: Global optimization of mixed-integer nonlinear pro-
grams in a branch-and-cut framework. Optimization Methods & Software, 33(3):563–593,
2017. doi:10.1080/10556788.2017.1335312.
[114] D. Villeneuve, J. Desrosiers, M. Lübbecke, and F. Soumis. On compact formulations for
integer programs solved by column generation. Annals of Operations Research, 139(1):
375–388, 2005. doi:10.1007/s10479-005-3455-9.
[115] F. Wegscheider. Exploiting symmetry in mixed-integer nonlinear programming. Master’s
thesis, Zuse Institute Berlin, 2019.
[116] F. Wesselmann and U. Suhl. Implementing cutting plane management and selection
techniques. Technical report, University of Paderborn, 2012.
[117] R. Wunderling. Paralleler und objektorientierter Simplex-Algorithmus. PhD thesis, Tech-
nische Universität Berlin, 1996.
[118] J. M. Zamora and I. E. Grossmann. Continuous global optimization of structured pro-
cess systems models. Computers and Chemical Engineering, 22(12):1749–1770, 1998.
doi:10.1016/S0098-1354(98)00244-0.
[119] M. Zhao, K. Huang, and B. Zeng. A polyhedral study on chance constrained pro-
gram with random right-hand side. Mathematical Programming, 166:19–64, 2017.
doi:10.1007/s10107-016-1103-6.
Author Affiliations
Ksenia Bestuzheva
Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0002-7018-7099
Mathieu Besançon
Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0002-6284-3033
Wei-Kun Chen
School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China
E-mail: [email protected]
ORCID: 0000-0003-4147-1346
Antonia Chmiela
Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0002-4809-2958
Tim Donkiewicz
RWTH Aachen University, Lehrstuhl für Operations Research, Kackertstr. 7, 52072 Aachen,
Germany
E-mail: [email protected]
ORCID: 0000-0002-5721-3563
Leon Eifler
Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0003-0245-9344
Oliver Gaul
RWTH Aachen University, Lehrstuhl für Operations Research, Kackertstr. 7, 52072 Aachen,
Germany
E-mail: [email protected]
ORCID: 0000-0002-2131-1911
Gerald Gamrath
Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin, Germany
and I²DAMO GmbH, Englerallee 19, 14195 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0001-6141-5937
Ambros Gleixner
Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0003-0391-5903
Leona Gottwald
Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0002-8894-5011
Christoph Graczyk
Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
Katrin Halbig
Friedrich-Alexander Universität Erlangen-Nürnberg, Department of Data Science, Cauerstr. 11,
91058 Erlangen, Germany
E-mail: [email protected]
ORCID: 0000-0002-8730-3447
Alexander Hoen
Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0003-1065-1651
Christopher Hojny
Technische Universiteit Eindhoven, Department of Mathematics and Computer Science, P.O.
Box 513, 5600 MB Eindhoven, The Netherlands
E-mail: [email protected]
ORCID: 0000-0002-5324-8996
Thorsten Koch
Technische Universität Berlin, Chair of Software and Algorithms for Discrete Optimization,
Straße des 17. Juni 135, 10623 Berlin, Germany, and
Zuse Institute Berlin, Department A²IM, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0002-1967-0077
Marco Lübbecke
RWTH Aachen University, Lehrstuhl für Operations Research, Kackertstr. 7, 52072 Aachen,
Germany
E-mail: [email protected]
ORCID: 0000-0002-2635-0522
Stephen J. Maher
University of Exeter, College of Engineering, Mathematics and Physical Sciences, Exeter,
United Kingdom
E-mail: [email protected]
ORCID: 0000-0003-3773-6882
Frederic Matter
Technische Universität Darmstadt, Fachbereich Mathematik, Dolivostr. 15, 64293 Darmstadt,
Germany
E-mail: [email protected]
ORCID: 0000-0002-0499-1820
Erik Mühmer
RWTH Aachen University, Lehrstuhl für Operations Research, Kackertstr. 7, 52072 Aachen
E-mail: [email protected]
ORCID: 0000-0003-1114-3800
Benjamin Müller
Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0002-4463-2873
Marc E. Pfetsch
Technische Universität Darmstadt, Fachbereich Mathematik, Dolivostr. 15, 64293 Darmstadt,
Germany
E-mail: [email protected]
ORCID: 0000-0002-0947-7193
Daniel Rehfeldt
Zuse Institute Berlin, Department A²IM, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0002-2877-074X
Steffan Schlein
RWTH Aachen University, Lehrstuhl für Operations Research, Kackertstr. 7, 52072 Aachen,
Germany
E-mail: [email protected]
Franziska Schlösser
Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
Felipe Serrano
Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0002-7892-3951
Yuji Shinano
Zuse Institute Berlin, Department A²IM, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0002-2902-882X
Boro Sofranac
Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin, Germany and
Technische Universität Berlin, Straße des 17. Juni 135, 10623 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0003-2252-9469
Mark Turner
Zuse Institute Berlin, Department A²IM, Takustr. 7, 14195 Berlin, Germany
and Chair of Software and Algorithms for Discrete Optimization, Institute of Mathematics,
Technische Universität Berlin, Straße des 17. Juni 135, 10623 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0001-7270-1496
Stefan Vigerske
GAMS Software GmbH, c/o Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin,
Germany
E-mail: [email protected]
Fabian Wegscheider
Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
Philipp Wellner
E-mail: [email protected]
Dieter Weninger
Friedrich-Alexander Universität Erlangen-Nürnberg, Department of Data Science, Cauerstr. 11,
91058 Erlangen, Germany
E-mail: [email protected]
ORCID: 0000-0002-1333-8591
Jakob Witzig
Zuse Institute Berlin, Department AIS²T, Takustr. 7, 14195 Berlin, Germany
E-mail: [email protected]
ORCID: 0000-0003-2698-0767
Appendices
A Detailed Computational Results to Section 4.14 (Performance Impact of Up-
dates for Nonlinear Constraints)
The following table lists the results for running the classic code (SCIP 7) and the new code (SCIP 8) for each considered
instance of MINLPLib. Only results for the non-permuted instances are given.
Column “time/gap” gives the time needed to solve an instance to optimality (with respect to the specified gap tolerances),
or the gap at termination if solving stopped at the time limit. If the time or gap of one version is not worse than that of
the other version, it is printed in bold. If a version did not return a result, or the reported bounds
conflict with the best known bounds, then “fail” is printed.
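The three kinds of “time/gap” cells (a time in seconds, a percentage gap at the time limit, or an unbounded gap shown as the infinity symbol, besides “fail”) can be distinguished mechanically. The following helper is a hypothetical sketch for reading such cells, not part of the paper's tooling:

```python
import math

def parse_result(cell: str):
    """Parse a 'time/gap' cell from the appendix tables.

    Returns a (kind, value) pair:
      ('time', seconds)  -- solved within the limit, e.g. '139.4s'
      ('gap', percent)   -- gap at the time limit, e.g. '7.9%' or '>1000%'
      ('gap', inf)       -- unbounded gap, printed as the infinity symbol
      ('fail', nan)      -- no usable result was reported
    """
    cell = cell.strip()
    if cell == "fail":
        return ("fail", math.nan)
    if cell in ("∞", "inf"):
        return ("gap", math.inf)
    if cell.endswith("s"):
        return ("time", float(cell[:-1]))
    if cell.endswith("%"):
        # '>1000%' means the gap exceeded the reported threshold
        return ("gap", float(cell[:-1].lstrip(">")))
    raise ValueError(f"unrecognized cell: {cell!r}")
```

For example, `parse_result("139.4s")` yields `("time", 139.4)`, while `parse_result(">1000%")` yields `("gap", 1000.0)`.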
For the classic version, columns “quad”, “soc”, “abspow”, and “nonlin” give the number of quadratic, second-order
cone, abspower, and nonlinear constraints, respectively, after presolve. Due to the reformulations applied in
presolve, nonlinear constraints are sums of convex or concave functions, and quadratic terms (including bilinear products) are
part of quadratic constraints, unless an upgrade to a soc constraint was possible. Monomials of odd degree, signpowers,
and monomials of even degree with fixed sign are handled by the abspower constraint handler.
For the new version, columns “quad”, “bilin”, “soc”, “convex”, “concave”, “quot”, “persp”, and “def” give the number
of expressions for which the detection algorithms of the nonlinear handlers quadratic, bilinear, soc, convex, concave,
quotient, perspective, and default, respectively (see Sections 4.3–4.8 and 4.2.6), reported success, that is, registered
themselves for domain propagation or separation after presolve. Recall that by default the quadratic nonlinear handler
only becomes active for propagable quadratic expressions (see Section 4.3.1), and the convex and concave nonlinear handlers
only handle nontrivial expressions (Section 4.6.1). Further, the nonlinear handler for bilinear expressions (Section 4.5)
currently registers itself for any product of two non-binary variables (original or auxiliary) and only checks, when called
later, whether linear inequalities in the two variables are available, because the latter are computed after the extended
formulations are initialized. Columns “minor” and “rlt” report the number of cuts that were generated by the respective
separators (Sections 4.10 and 4.9) and added to the LP. Here, if cuts were generated but not applied, a zero is
printed.
The last row summarizes how many instances each constraint handler, nonlinear handler, or separator was used on,
i.e., the number of nonzeros in each column.
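The per-column summary in the last row amounts to counting, for each column, the instances with a nonzero entry. A minimal sketch, using hypothetical rows that mirror the table layout:

```python
# Hypothetical rows: each dict maps a column name to the count reported
# for one instance; a zero (or absent) entry means the handler was unused.
rows = [
    {"quad": 20, "bilin": 24, "soc": 0, "convex": 0},
    {"quad": 0,  "bilin": 0,  "soc": 1, "convex": 2},
    {"quad": 30, "bilin": 36, "soc": 0, "convex": 0},
]

def usage_summary(rows):
    """Number of instances with a nonzero entry per column, i.e. on how
    many instances each handler or separator was used."""
    counts = {}
    for row in rows:
        for col, val in row.items():
            if val:  # count only nonzero entries
                counts[col] = counts.get(col, 0) + 1
    return counts
```

With the sample rows above, `usage_summary(rows)` reports that “quad” and “bilin” were used on two instances each, and “soc” and “convex” on one each.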
classic new
instance time/gap quad soc abspow nonlin time/gap quad bilin soc convex concave quot def persp minor rlt
faclay80 ∞ ∞
fdesign10 0.0s 1 0.0s 1 1 4
fdesign25 0.1s 1 0.0s 1 1 4
fdesign50 0.1s 1 0.1s 1 1 4
feedtray ∞ 134 18 72 2.9s 38 323 144 126 54 654 3
feedtray2 0.1s 147 0.1s 89 79 12 274
filter 0.0s 3 3 0.0s 1 2 2 2 3 11
fin2bb 8.8s 42 60.1s 42 21 189
flay02h 0.6s 2 0.4s 8
flay02m 0.4s 2 0.3s 8
flay03h 1.1s 3 1.3s 12
flay03m 0.7s 3 0.7s 12
flay04h 22.0s 4 7.3s 16
flay04m 4.0s 4 3.9s 16
flay05h 323.0s 5 356.1s 20
flay05m 110.9s 5 131.0s 20
flay06h 7.9% 6 12.4% 24
flay06m 3.8% 6 3.8% 24
flowchan100fix 0.4s 400 1.1s 800 3200
flowchan200fix 0.9s 800 fail
flowchan400fix 1.0s 1600 fail
flowchan50fix 0.2s 200 0.2s 400 1600
fo7 139.4s 14 160.6s 42
fo7_2 191.9s 14 53.1s 42
fo7_ar25_1 38.8s 14 44.5s 42
fo7_ar2_1 47.4s 14 38.7s 42
fo7_ar3_1 52.9s 14 169.3s 42
fo7_ar4_1 54.6s 14 65.9s 42
fo7_ar5_1 40.2s 14 42.9s 42
fo8 179.9s 16 171.7s 48
fo8_ar25_1 201.5s 16 240.0s 48
fo8_ar2_1 404.5s 16 374.2s 48
fo8_ar3_1 73.8s 16 145.6s 48
fo8_ar4_1 52.8s 16 144.5s 48
fo8_ar5_1 109.0s 16 151.8s 48
fo9 1082.0s 18 530.3s 54
fo9_ar25_1 19.2% 18 20.5% 54
fo9_ar2_1 1577.5s 18 2247.3s 54
fo9_ar3_1 149.6s 18 149.8s 54
fo9_ar4_1 218.1s 18 119.2s 54
fo9_ar5_1 331.5s 18 1793.3s 54
forest 43.1% 24 85.1s 23 90 1 205 11
fuel 0.0s 1 3 0.0s 4 18 3
gabriel01 2300.2s 48 263.4s 32 256 436
gabriel02 25.8% 96 2372.0s 64 512 782
gabriel04 4.9% 128 148.3s 96 752 1120
gabriel05 ∞ 192 ∞ 168 1284 1948
gabriel06 ∞ 640 ∞ 608 4592 6578
gabriel07 ∞ 800 ∞ 760 5740 8052
gabriel08 ∞ 1600 ∞ 1520 20600 25864
gabriel09 ∞ 288 ∞ 216 4284 5624
gabriel10 ∞ 3200 ∞ 3040 77680 90768
gams01 >1000% 110 111 >1000% 110 120 384
gams02 363% 1 192 990% 193 192 1070 24
gams03 >1000% 1 >1000% 1 50960 55042 3360 22
gancns ∞ 187 19 165 ∞ 43 142 37 18 23 665 150
gasnet 148% 61 13 23 75.2% 19 30 3 3 240 11
gasnet_al1 0.73% 256 10 35 0.59% 45 24 75 73 13 848 71 2
gasnet_al2 1.2% 256 10 35 0.55% 45 24 75 73 13 848 71 9
gasnet_al3 2% 256 10 35 0.36% 45 24 75 73 13 848 71 1
gasnet_al4 0.4% 256 10 35 1.4% 45 24 75 73 13 848 71 3
gasnet_al5 0.77% 256 10 35 2.3% 45 24 75 73 13 848 71 0
gasoil100 ∞ 2001 ∞ 401 1600 1 4604
gasoil200 ∞ 4001 ∞ 801 3200 1 9004
gasoil400 ∞ 8001 ∞ 1601 6400 1 17804
gasoil50 ∞ 1001 ∞ 201 800 1 2404
gasprod_sarawak01 2.6s 34 0.6s 17 34 123 3
gasprod_sarawak16 0.95% 544 0.39% 272 544 1968 55
gasprod_sarawak81 0.41% 2754 0.93% 1377 2754 9963 61
gastrans 0.1s 23 11 0.1s 111 1
gastrans040 40.4s 183 39 51 0.3s 72 119 13 38 38 50 456
gastrans135 ∞ 749 144 239 40.7s 316 582 21 164 164 239 2088
gastrans582_cold13 ∞ 1012 186 288 56.4s 227 290 116 93 93 123 1729 50
gastrans582_cold13_95 3.0s 1012 185 288 30.9s 209 267 129 85 85 113 1667 52 0
gastrans582_cold17 fail 1072 205 288 7.6s 247 305 111 109 109 126 1784 56
gastrans582_cold17_95 ∞ 1072 205 288 15.1s 246 305 111 109 109 126 1784 56
gastrans582_cool12 ∞ 1048 204 288 8.7s 235 303 120 106 106 126 1779 55
gastrans582_cool12_95 ∞ 1048 201 288 11.3s 236 303 120 106 106 126 1779 55
gastrans582_cool14 ∞ 1050 201 288 9.8s 235 299 119 104 104 125 1771 53
gastrans582_cool14_95 ∞ 1050 201 288 ∞ 207 252 132 90 90 103 1651 61
gastrans582_freezing27 54.4s 1094 205 261 8.5s 241 298 115 111 111 121 1790 47
gastrans582_freezing27_95 56.1s 1098 209 262 6.9s 187 194 128 95 95 74 1542 50 0
gastrans582_freezing30 ∞ 1100 209 262 81.6s 175 184 128 91 91 66 1520 47
gastrans582_freezing30_95 ∞ 1100 210 288 25.6s 200 230 128 95 95 89 1626 47 1
gastrans582_mild10 fail 1023 186 288 8.9s 226 297 118 95 95 126 1744 50
gastrans582_mild10_95 fail 1023 187 288 ∞ 226 297 118 95 95 126 1744 50
gastrans582_mild11 18.9s 1011 185 280 7.6s 228 290 122 98 98 122 1730 53
classic new
instance time/gap quad soc abspow nonlin time/gap quad bilin soc convex concave quot def persp minor rlt
gastrans582_mild11_95 ∞ 1011 184 280 21.8s 198 243 135 91 90 101 1606 55
gastrans582_warm15 ∞ 1010 183 288 298.9s 229 296 114 92 92 126 1746 47
gastrans582_warm15_95 ∞ 1010 183 288 7.4s 229 296 114 92 92 126 1746 47
gastrans582_warm31 ∞ 1004 181 288 12.0s 188 242 130 83 83 100 1610 50
gastrans582_warm31_95 ∞ 1004 181 288 8.0s 231 303 117 91 91 130 1760 42
gastransnlp 0.0s 22 0.0s 88
gbd 0.0s 1 0.0s 4
gear 0.1s 4 16.2s 14
gear2 0.1s 4 18.0s 14
gear3 0.1s 4 15.8s 14
gear4 fail 3 0.7s 1 11
genpooling_lee1 7.4s 20 2.8s 20 24 84
genpooling_lee2 98.8s 30 5.5s 30 36 110
genpooling_meyer04 121% 15 42.9% 15 66 144
genpooling_meyer10 105% 33 62.4% 33 435 675
genpooling_meyer15 102% 48 52.8% 48 990 1420
ghg_1veh 13.2s 47 3 11 17.1s 9 18 6 4 2 104 17
ghg_2veh 5.9% 119 6 23 5% 34 38 1 12 10 3 261 11 6
ghg_3veh 53.0% 210 9 36 29.1% 39 58 22 16 5 453 5852 3
gilbert 0.1s 2 1.0s 2003
gkocis 0.0s 2 0.0s 2 2 10 2
glider100 ∞ 1501 200 400 ∞ 600 706 99 200 101 4 4516
glider200 ∞ 3001 400 800 ∞ 1200 1406 199 400 201 4 9016
glider400 ∞ 6000 801 1600 ∞ 2400 2806 399 800 401 4 18016
glider50 ∞ 751 100 200 ∞ 300 356 49 100 51 4 2266
graphpart_2g-0044-1601 0.1s 0.1s
graphpart_2g-0055-0062 0.3s 1.2s
graphpart_2g-0066-0066 0.9s 5.8s
graphpart_2g-0077-0077 1.3s 5.2s
graphpart_2g-0088-0088 1.7s 3.2s
graphpart_2g-0099-9211 4.4s 11.1s
graphpart_2g-1010-0824 1.7s 10.3s
graphpart_2pm-0044-0044 0.2s 0.1s
graphpart_2pm-0055-0055 0.3s 1.4s
graphpart_2pm-0066-0066 0.8s 1.3s
graphpart_2pm-0077-0777 1.9s 2.5s
graphpart_2pm-0088-0888 1.4s 4.7s
graphpart_2pm-0099-0999 4.0s 8.2s
graphpart_3g-0234-0234 0.6s 3.1s
graphpart_3g-0244-0244 0.8s 1.6s
graphpart_3g-0333-0333 0.9s 1.5s
graphpart_3g-0334-0334 2.8s 3.0s
graphpart_3g-0344-0344 2.3s 4.3s
graphpart_3g-0444-0444 12.7s 18.1s
graphpart_3pm-0234-0234 0.7s 3.3s
graphpart_3pm-0244-0244 1.2s 1.7s
graphpart_3pm-0333-0333 1.2s 1.0s
graphpart_3pm-0334-0334 3.5s 2.8s
graphpart_3pm-0344-0344 12.1s 14.9s
graphpart_3pm-0444-0444 51.6s 97.0s
graphpart_clique-20 2.4s 4.4s
graphpart_clique-30 16.5s 15.8s
graphpart_clique-40 126.5s 113.3s
graphpart_clique-50 1294.6s 673.3s
graphpart_clique-60 87.2% 3503.6s
graphpart_clique-70 149% 76.1%
gsg_0001 93.9s 1 20 38.4s 1 20 94
gtm 0.0s 1 0.0s 6 41
hadamard_4 0.2s 1.6s
hadamard_5 36.4s 22.4s
hadamard_6 478% 476%
hadamard_7 >1000% >1000%
hadamard_8 ∞ ∞
hadamard_9 fail ∞
harker 0.2s 1 0.0s 1 42
haverly 0.6s 3 0.1s 3 2 12
hda 213% 155 3 62 10.7% 17 135 78 75 8 783 15 0
heatexch_gen1 0% 56 12 88.4% 40 60 24 24 40 256
heatexch_gen2 0.76% 54 17 18.6% 37 77 32 48 52 335 8 0
heatexch_gen3 228.3s 290 61 141% 160 470 120 180 220 1657 26 0
heatexch_spec1 18.8% 29 12 80.4s 5 11 18 2 7 94 6
heatexch_spec2 46.5% 42 17 20.1s 17 16 26 20 15 164 12
heatexch_spec3 >1000% 170 61 0.91% 10 60 110 55 50 659 35
heatexch_trigen 712.9s 60 24 1.5% 9 18 27 21 20 182
hhfair fail fail
himmel11 0.0s 4 0.0s 3 7 22
himmel16 4.6s 19 5.3s 10 7 10 37 2621
hmittelman 0.0s 0.0s
house 0.5s 3 0.3s 1 3 1 12
hs62 0.01% 4 1 0.01% 3 9 4 2 7 26 521
hvb11 53.5s 17 27.7s 16 82
hybriddynamic_fixed 0.0s 0.0s
hybriddynamic_fixedcc fail 1 fail 1 18 69
hybriddynamic_var 1.2s 15 8 2.3s 3 9 49 1
hybriddynamic_varcc fail 39 2 fail 1 58 169
hydro fail 0.0s 6 38
hydroenergy1 0.03% 46 0.04% 46 69 253
m7 4.4s 14 3.3s 42
m7_ar25_1 1.7s 14 3.4s 42
m7_ar2_1 16.4s 14 12.7s 42
m7_ar3_1 12.9s 14 11.0s 42
m7_ar4_1 3.5s 14 2.1s 42
m7_ar5_1 8.7s 14 13.7s 42
mathopt1 0.0s 2 0.1s 2 2 10 10 0
mathopt2 0.0s 1 1 0.0s
mathopt5_4 0.0s 1 1 1 2.0s 1 10
mathopt5_7 0.2s 1 0.2s 1 7
mathopt5_8 0.2s 1 0.1s 1 6
maxcsp-ehi-85-297-12 ∞ ∞
maxcsp-ehi-85-297-36 ∞ ∞
maxcsp-ehi-85-297-71 ∞ ∞
maxcsp-ehi-90-315-70 ∞ ∞
maxcsp-geo50-20-d4-75-36 160.6s 61.7s
maxcsp-langford-3-11 ∞ ∞
maxmin 81.0% 78 78 40.6% 66 132 78 349 51364 2080
maxmineig2 fail 90 fail 294 574
mbtd 220% 133%
meanvar 0.0s 1 0.0s 1 1 8
meanvar-orl400_05_e_7 4.3% 1 400 0.7% 1 400 1 2401
meanvar-orl400_05_e_8 87.5s 1 246.7s 1 1 250 83
meanvarx 0.0s 1 0.1s 1 1 8
meanvarxsc 0.1s 1 0.1s 1 1 8
methanol100 ∞ 4501 ∞ 601 2400 1 7098
methanol200 ∞ 9001 ∞ 1201 4800 1 13998
methanol400 ∞ 18001 ∞ 2401 9600 1 27798
methanol50 ∞ 2251 ∞ 301 1200 1 3636
mhw4d ∞ 4 2 1 20.1s 1 1 3 18
milinfract 33.7s 1 270% 1 500 1000 2004
minlphi fail 0.7s 3 4 4 4 24
minlphix fail fail
minsurf100 ∞ 10300 1 fail 10100 102 9998 200 15702
minsurf25 155% 2650 1 fail 2600 102 2498 50 4002
minsurf50 ∞ 5200 1 fail 5100 102 4998 100 7902
minsurf75 ∞ 7750 1 352% 7600 102 7498 150 11802
multiplants_mtg1a 856.0s 32 3 13.1% 25 34 175
multiplants_mtg1b 194% 32 3 503% 22 33 168
multiplants_mtg1c 582% 32 3 828% 25 34 202
multiplants_mtg2 11.2% 42 4 2.1% 33 45 231
multiplants_mtg5 13.2% 53 3 20.6% 43 51 207
multiplants_mtg6 15.4% 70 4 31.9% 61 69 379
multiplants_stg1 ∞ 67 30 ∞ 1 67 347
multiplants_stg1a ∞ 49 21 >1000% 1 49 313
multiplants_stg1b ∞ 55 24 >1000% 1 55 364
multiplants_stg1c fail >1000% 1 43 338
multiplants_stg5 ∞ 49 21 ∞ 1 49 302
multiplants_stg6 ∞ 65 28 ∞ 1 65 481
nd_netgen-2000-2-5-a-a-ns_7 0% 1999 128% 1999 11994
nd_netgen-2000-3-4-b-a-ns_7 219.1s 1988 59.4s 1988 11928
nd_netgen-3000-1-1-b-b-ns_7 612.9s 3000 43.1s 3000 18000
ndcc12 ∞ 46 ∞ 44 528 1144 3
ndcc12persp ∞ 46 ∞ 44 43 1 294
ndcc13 27.4% 42 ∞ 42 481 5 1106 8
ndcc13persp fail 44.7% 42 28 14 224
ndcc14 ∞ 54 81.7% 54 756 1620 2
ndcc14persp 81.2% 54 ∞ 54 52 2 365
ndcc15 29.4% 40 ∞ 36 540 1152 6
ndcc15persp ∞ 40 ∞ 36 31 5 227
ndcc16 ∞ 60 85.4% 60 960 2040
ndcc16persp ∞ 60 ∞ 60 60 420
nemhaus 0.0s 0.0s
netmod_dol1 13.3% 1 85.4%
netmod_dol2 40.9s 1 41.2s 14
netmod_kar1 15.0s 1 6.1s 10
netmod_kar2 5.6s 1 6.0s 10
ngone >1000% 4951 >1000% 4852 195 4852 690 60898
no7_ar25_1 55.1s 14 69.3s 42
no7_ar2_1 115.6s 14 26.1s 42
no7_ar3_1 165.7s 14 226.9s 42
no7_ar4_1 434.5s 14 325.0s 42
no7_ar5_1 130.1s 14 167.4s 42
nous1 16.8% 29 12.2s 8 50 124 12
nous2 2.1s 29 0.5s 8 50 124 12
nuclear104 ∞ 3221 ∞ 1976 3130 37236
nuclear10a ∞ 3130 ∞ 1976 3130 29046 12
nuclear10b >1000% 3026 ∞ 1976 3130 7206
nuclear14 ∞ 602 ∞ 360 584 2844
nuclear14a >1000% 584 998% 360 584 2544 19
nuclear14b 77.4% 560 105% 360 584 1344
nuclear25 ∞ 628 ∞ 375 608 3089
nuclear25a ∞ 608 >1000% 375 608 2699 48
nuclear25b 635% 583 103% 375 608 1399
nuclear49 ∞ 1374 ∞ 833 1332 9715
nuclear49a >1000% 1332 ∞ 833 1332 7965 43
nuclear49b ∞ 1283 143% 833 1332 3065
Usage count (#instances: 1678): 1200 43 205 546 918 920 129 529 324 256 1435 250 162 266
B Detailed Computational Results for Section 10.2 on Strong Branching in GCG
The following table gives the time in seconds needed to solve each problem instance with original variable branching.
Entries that perform better than pseudocost branching are set in italics; the best entry in each row is in boldface.
pseudocost random most-frac SBw/oCG SBw/CG hierarchical hybrid reliable rel. hier. hybrid hier.
12Cap10 237.7 3433.9 841.7 2071.0 1245.0 223.7 635.1 3197.4 126.9 174.8
14Cap10 100.9 60.5 176.4 452.1 401.3 59.3 213.9 60.2 135.7 93.2
20Cap10 171.2 613.6 402.4 1569.5 1043.5 117.0 458.3 625.8 129.5 87.7
NU_1_0010_05_3 40.9 41.2 41.7 44.5 48.5 47.2 47.5 46.7 41.4 45.1
NU_1_0010_05_7 23.5 49.4 23.7 36.7 30.4 181.5 27.1 27.6 40.4 32.4
NU_3_0010_05_1 8.8 29.7 9.2 17.6 11.7 7.8 13.2 30.1 12.8 12.5
NU_3_0010_05_3 20.5 63.3 19.6 60.2 26.1 23.8 29.7 63.9 21.5 13.6
NU_3_0010_05_5 17.3 28.9 18.7 27.4 4.9 12.0 8.5 6.5 9.7 13.1
NU_3_0010_05_7 19.0 19.3 19.6 19.7 25.9 30.1 27.8 19.6 20.5 23.1
NU_3_0010_05_9 10.2 26.5 16.3 19.1 11.0 13.9 14.0 26.2 15.2 8.0
U_1_0050_05_0 5.3 5.7 5.4 5.6 5.5 5.4 5.9 5.7 5.7 5.7
U_1_0050_05_2 150.1 210.8 67.0 486.4 147.4 128.1 95.6 212.1 19.0 297.1
U_1_0050_25_7 >3600.0 135.9 >3600.0 >3600.0 707.9 1019.8 140.5 137.1 >3600.0 48.1
U_1_0100_05_1 372.1 277.5 563.9 279.4 278.1 206.8 136.3 281.1 186.5 11.6
U_1_0100_05_3 2653.9 2618.3 2658.0 2667.3 2608.4 2602.4 2653.4 2611.4 2640.6 2632.0
U_2_0050_05_4 305.7 48.2 158.6 251.2 164.8 198.4 1344.1 102.6 164.0 74.9
U_2_0100_05_2 >3600.0 847.0 >3600.0 846.4 >3600.0 >3600.0 >3600.0 844.2 >3600.0 >3600.0
U_2_0100_05_5 623.7 382.0 >3600.0 455.2 378.1 1582.8 2917.6 1865.2 1275.8 221.3
U_3_0010_05_2 12.2 386.1 20.3 52.1 5.7 13.9 5.9 6.4 18.3 13.3
U_3_0010_05_5 11.0 72.3 24.7 35.6 21.9 25.3 27.9 18.1 12.4 12.8
d25_03_alternative 336.7 746.8 510.2 839.4 1675.3 1080.2 1095.6 1192.2 2099.9 837.5
d25_06 36.3 449.8 645.0 17.4 76.4 64.3 68.5 69.8 84.8 30.5
d25_06_alternative 121.6 237.9 174.5 50.3 184.9 144.7 144.7 239.2 137.8 125.9
d25_08 89.6 37.4 159.4 34.5 219.5 132.1 64.6 37.3 109.6 42.5
gapd_3.min 1838.8 2353.7 209.9 >3600.0 3450.6 1138.6 >3600.0 1425.9 2545.6 664.3
p10100-11-115.gq 85.5 442.5 119.0 893.7 1828.2 271.1 808.8 669.5 164.0 133.3
p10100-13-105.gq 552.7 >3600.0 414.1 1742.9 >3600.0 995.4 1555.4 724.2 952.5 269.1
p10100-18-115.gq 363.3 230.8 539.2 1299.9 >3600.0 568.0 978.7 710.3 308.0 383.9
p1250-10.eq 54.3 277.4 162.4 95.3 139.6 30.7 78.0 269.8 54.5 37.8
p1250-10.gq 200.5 617.2 250.1 187.0 169.2 41.2 94.9 73.3 199.6 46.9
p1250-6.gq 23.5 24.9 13.3 50.1 46.4 34.2 36.7 15.9 23.9 12.8
p1250-7.gq 57.7 >3600.0 225.9 113.9 140.0 80.1 66.2 108.1 80.9 77.2
p1650-10.gq 79.1 163.1 238.8 125.6 157.1 55.5 60.3 164.4 80.6 42.5
p2050-10.gq >3600.0 484.7 86.8 31.9 33.8 23.2 40.5 484.2 18.1 14.3
p2050-8.eq 59.2 115.3 27.1 53.8 36.5 20.4 45.8 113.6 24.2 21.6
p2050-8.gq 17.3 70.4 19.4 50.6 45.2 19.9 53.9 74.8 26.0 30.6
p23-6.eq 42.2 32.4 45.7 45.6 20.2 20.5 20.5 13.0 41.1 7.9
p23-6.gq 25.4 26.4 21.6 18.6 20.0 14.8 16.8 26.2 14.6 10.5
p25100-11.gq 353.4 3105.8 315.3 536.1 503.8 519.8 286.4 685.5 160.6 175.0
p25100-12.gq 73.6 83.0 138.2 252.5 106.4 69.5 66.5 56.4 216.9 60.5
p25100-15.gq 51.9 217.3 30.9 250.1 202.9 98.8 140.3 97.1 65.0 78.6
p33100-11.eq 40.0 25.9 43.7 219.8 168.3 69.2 148.6 26.3 43.1 32.2
p33100-11.gq 38.8 28.1 39.6 152.4 93.3 47.2 101.6 44.2 36.1 43.2
p33100-20.gq 87.3 52.9 133.0 190.3 446.2 122.7 246.2 181.7 88.0 139.5
p40100-11-110.eq 67.2 782.5 63.6 175.2 158.5 80.9 132.2 775.8 101.0 87.0
p40100-11.eq >3600.0 >3600.0 >3600.0 323.2 103.3 75.2 132.0 185.9 95.6 2585.3
p40100-12-110.eq 18.5 771.4 15.0 220.5 411.8 71.8 287.5 773.2 19.3 60.1
p40100-14-115.gq >3600.0 >3600.0 >3600.0 123.7 174.6 64.7 141.6 70.1 >3600.0 537.6
p40100-15.gq 1413.5 661.5 185.1 252.7 128.5 65.6 119.2 677.6 60.8 36.5
p40100-19.eq 60.6 807.8 63.1 719.7 516.5 312.5 509.0 815.1 123.0 84.1
p40100-20.eq 68.4 93.6 71.2 144.7 132.3 78.2 134.6 94.0 69.2 44.7
p550-8.gq 732.6 >3600.0 489.2 877.4 2703.3 525.2 465.6 >3600.0 377.7 754.5
prob1_050_040_060_005_015_02 219.4 218.1 227.0 220.1 220.2 298.9 262.5 220.3 253.5 251.8
prob1_050_040_060_005_015_04 398.6 462.7 464.9 379.1 581.9 564.5 558.4 465.7 585.9 560.0
prob1_050_040_060_025_035_10 370.2 364.6 370.8 412.2 1346.1 521.9 512.5 522.0 623.5 372.2
prob1_050_090_110_005_015_03 214.5 256.4 414.8 856.4 818.1 324.1 321.4 391.5 319.0 321.1
prob1_050_090_110_015_025_01 311.5 300.4 334.2 793.3 2554.7 844.5 565.9 302.6 742.0 741.9
prob1_050_090_110_025_035_07 228.8 226.4 225.8 652.8 >3600.0 2761.8 2725.3 226.7 2787.7 2792.9
prob1_050_090_110_035_045_07 302.4 274.9 270.2 326.7 1627.8 476.9 1033.0 406.1 429.9 289.5
prob2_050_040_060_015_025_07 225.2 248.4 246.2 374.9 >3600.0 1363.9 1096.5 1566.2 2239.7 206.4
prob2_050_040_060_015_025_10 368.0 232.3 233.4 497.4 >3600.0 1753.2 >3600.0 232.4 1729.8 273.1
prob2_050_040_060_035_045_03 253.2 426.2 428.1 435.2 >3600.0 1130.1 >3600.0 2002.1 1263.0 251.5
prob2_050_040_060_035_045_06 438.2 1009.8 1002.4 644.6 >3600.0 2022.9 3403.9 991.3 702.4 514.8
prob2_050_090_110_035_045_06 277.6 452.7 638.3 782.5 >3600.0 1137.5 >3600.0 459.9 335.4 412.0
prob3_050_040_060_005_015_01 260.8 465.4 458.3 1922.2 >3600.0 3569.5 1293.0 460.8 3342.2 433.6
prob3_050_040_060_015_025_03 322.5 436.7 435.6 813.7 >3600.0 1780.2 2268.6 >3600.0 1703.5 236.7
prob3_050_040_060_015_025_09 421.5 500.7 493.1 1160.7 >3600.0 1382.3 >3600.0 496.0 277.2 269.7
prob3_050_040_060_035_045_10 233.7 236.9 236.6 531.4 >3600.0 877.1 2152.4 239.2 879.9 212.1
prob3_050_090_110_005_015_05 239.1 244.4 243.2 1164.9 >3600.0 1195.9 597.3 1882.2 145.7 203.7
prob3_050_090_110_015_025_01 281.9 166.8 166.8 540.3 >3600.0 822.1 1195.6 3538.1 272.0 299.1
prob3_050_090_110_025_035_10 309.6 475.2 329.3 1072.2 >3600.0 1201.2 1057.8 >3600.0 252.4 254.3
prob3_050_090_110_035_045_01 153.9 194.0 155.2 669.6 1749.5 82.5 472.9 177.4 81.8 143.7
prob3_050_090_110_035_045_06 445.1 596.1 610.3 1975.9 >3600.0 2200.2 >3600.0 >3600.0 375.3 484.1
# best 14 8 8 4 1 8 1 4 6 22
# timeouts 5 5 5 2 17 1 7 4 3 1
geom. mean 149.9 245.1 167.3 248.9 343.0 184.6 246.7 224.5 162.1 108.4
arithm. mean 493.8 658.6 500.0 587.8 1251.5 593.8 844.7 685.8 595.1 335.0
total (73) 36 049 48 080 36 502 42 912 91 357 43 347 61 660 50 062 43 439 24 452
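The aggregate rows above can be reproduced directly from a rule's time column. A minimal sketch, using a few illustrative values from the pseudocost column and assuming that timed-out entries (">3600.0") are clamped to the 3600 s limit before averaging (a common convention; the table reports "# timeouts" separately):

```python
import math

# Illustrative per-instance solve times in seconds for one branching rule;
# ">3600.0" table entries are timeouts, clamped here to the 3600 s limit
# (assumed convention; timeouts are counted separately below).
times = [237.7, 100.9, 171.2, 3600.0, 54.3]

timeouts = sum(1 for t in times if t >= 3600.0)
arith_mean = sum(times) / len(times)
# Geometric mean computed via logs for numerical robustness.
geom_mean = math.exp(sum(math.log(t) for t in times) / len(times))
total = sum(times)

print(f"# timeouts: {timeouts}")
print(f"geom. mean: {geom_mean:.1f}")
print(f"arithm. mean: {arith_mean:.1f}")
print(f"total: {total:.0f}")
```

Note that the geometric mean dampens the influence of the single timeout far more than the arithmetic mean does, which is why the two summary rows can rank the branching rules differently.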
The following table gives the number of nodes needed to solve each problem instance with original variable branching.
The best entry in each row, excluding the full strong branching entries, is in boldface.
pseudocost random most-frac SBw/oCG SBw/CG hierarchical hybrid reliable rel. hier. hybrid hier.
12Cap10 1 009 7 083 2 234 351 137 186 143 7 083 137 280
14Cap10 643 422 1 056 102 52 68 47 422 643 128
20Cap10 795 2 575 1 569 318 112 116 95 2 575 795 135
NU_1_0010_05_3 4 672 4 439 4 672 4 352 4 397 4 397 4 397 4 397 4 439 4 284
NU_1_0010_05_7 2 495 4 576 2 495 3 713 2 643 12 338 2 652 2 643 3 730 3 278
NU_3_0010_05_1 1 368 4 229 1 490 1 958 415 566 800 4 229 920 2 253
NU_3_0010_05_3 3 135 9 291 3 261 7 696 1 293 3 437 3 082 9 291 3 061 2 581
NU_3_0010_05_5 2 607 4 055 2 488 2 477 132 897 288 174 837 1 441
NU_3_0010_05_7 1 207 2 591 1 479 979 376 859 711 2 591 1 375 1 389
NU_3_0010_05_9 1 993 3 444 2 669 2 432 544 1 862 1657 3 444 2 234 1 473
U_1_0050_05_0 325 325 325 325 325 325 325 325 325 325
U_1_0050_05_2 8 838 17 396 4 441 36 523 10 365 9 005 6 931 17 396 1 085 19 868
U_1_0050_25_7 >258 966 4 897 >264 048 >14 046 1 062 2 268 4 367 4 897 >194 963 402
U_1_0100_05_1 19 639 12 568 26 890 12 563 12 563 9 957 6 617 12 563 9 126 661
U_1_0100_05_3 129 326 129 326 129 326 129 326 129 326 129 326 129 326 129 326 129 326 129 326
U_2_0050_05_4 32 297 4 384 17 003 14 893 5 907 12 635 110 850 3 518 10 202 7 620
U_2_0100_05_2 >109 482 25 367 >116 151 24 166 >90 663 >103 350 >113 381 25 367 >141 143 >109 421
U_2_0100_05_5 49 779 28 075 >176 504 15 139 9 910 66 956 117 682 51 078 48 038 15 838
U_3_0010_05_2 2 537 84 845 4 749 8 435 225 2 675 358 244 3 925 2 941
U_3_0010_05_5 1 876 8 944 3 961 4 882 1 256 3 853 3 832 1 100 1 996 2 961
d25_03_alternative 21 260 46 586 31 697 19 854 12 852 9 287 26 475 10 834 18 637 49 201
d25_06 795 9 663 11 073 194 168 156 169 206 192 618
d25_06_alternative 4 451 8 289 6 529 1 409 1 419 1 352 1 557 8 289 5 075 4 982
d25_08 1 891 824 4 016 222 248 234 737 824 208 862
gapd_3.min 10 461 9 512 1 629 >484 440 595 >580 798 927 2 886
p10100-11-115.gq 855 2 566 1 088 196 151 138 175 522 1 664 519
p10100-13-105.gq 1 635 >6 212 2 020 168 >67 271 108 318 253 1 112
p10100-18-115.gq 1 194 984 1 440 299 >158 271 389 266 195 1 205
p1250-10.eq 2 012 4 720 2 844 331 211 195 287 4 720 2 012 1 083
p1250-10.gq 3 919 5 686 2 412 331 171 174 271 373 3 919 783
p1250-6.gq 1 067 1 359 659 171 95 143 209 151 1 067 336
p1250-7.gq 1 749 >15 329 2 620 480 332 424 1 277 799 465 1 426
p1650-10.gq 2 907 4 648 3 138 539 420 304 411 4 648 2 907 1 636
p2050-10.gq >10 266 4 988 1 983 136 69 97 86 4 988 87 160
p2050-8.eq 1 376 2 840 950 299 139 144 154 2 840 180 584
p2050-8.gq 826 2 658 960 269 135 112 161 2 658 146 817
p23-6.eq 1 195 1 345 1 279 348 123 187 259 147 1 313 264
p23-6.gq 1 266 1 321 1 079 173 129 161 175 1 321 131 307
p25100-11.gq 19 308 145 400 18 307 988 506 1 125 1 685 2 329 448 7 212
p25100-12.gq 4 680 5 131 8 632 543 121 174 146 206 11 938 1 376
p25100-15.gq 867 2 219 438 299 122 183 162 284 140 575
p33100-11.eq 2 046 1 146 1 939 314 143 103 159 1 146 2 171 339
p33100-11.gq 1 811 1 282 1 973 252 73 106 118 117 1 785 492
p33100-20.gq 1 483 1 116 1 022 326 151 140 518 527 1 483 2 490
p40100-11-110.eq 2 365 7 009 2 259 171 132 165 98 7 009 181 1 459
p40100-11.eq >18 707 >18 357 >16 927 399 80 107 128 613 154 7 240
p40100-12-110.eq 641 6 192 463 154 133 90 119 6 192 659 263
p40100-14-115.gq >6 929 >8 678 >6 994 144 112 121 152 221 >6 950 2 790
p40100-15.gq 3 117 3 923 1 641 275 73 129 141 3 923 87 262
p40100-19.eq 3 049 38 708 3 184 603 265 333 5 858 38 708 191 1 597
p40100-20.eq 1 538 1 804 1 244 158 101 122 105 1 804 1 538 270
p550-8.gq 2 183 >6 661 1 985 507 439 449 549 >6 648 485 2 706
prob1_050_040_060_005_015_02 1 049 1 049 1 049 1 049 1 049 1 154 1 154 1 049 1 154 1 154
prob1_050_040_060_005_015_04 1 804 2 204 2 204 1 793 1 823 2 337 2337 2 204 2 337 2 337
prob1_050_040_060_025_035_10 1 716 1 712 1 712 1 712 1 728 1 853 1 780 1 687 1 870 1 726
prob1_050_090_110_005_015_03 1 047 1 183 2 150 2 240 1 616 1 400 1 400 1 610 1 400 1 400
prob1_050_090_110_015_025_01 1 626 1 928 1 928 1 928 >432 1 310 684 1 928 1 266 1 638
prob1_050_090_110_025_035_07 1 668 1 668 1 668 1 116 >584 2 594 2 646 1 668 2 594 2 594
prob1_050_090_110_035_045_07 1 265 1 247 1 247 1 249 1 253 1 483 1 420 1 230 1 474 1 255
prob2_050_040_060_015_025_07 1 276 1 517 1 517 1 516 >603 3 418 935 1 510 6 275 1 117
prob2_050_040_060_015_025_10 1 657 1 066 1 066 1 479 >168 1 846 >375 1 066 1 755 1 162
prob2_050_040_060_035_045_03 1 171 1 809 1 809 1 429 >181 1 298 >721 1 031 1 233 1 120
prob2_050_040_060_035_045_06 2 018 4 957 4 933 2 294 >352 2 976 751 4 957 2 520 2 245
prob2_050_090_110_035_045_06 1 145 2 021 2 711 1 466 >161 1 163 >382 2 021 1 116 1 602
prob3_050_040_060_005_015_01 2 151 4 027 3 948 3 817 >694 4 348 496 4 027 3 980 3 636
prob3_050_040_060_015_025_03 2 199 3 606 3 606 3 628 >544 2 864 869 >2 887 2 655 1 626
prob3_050_040_060_015_025_09 2 794 3 931 3 931 3 918 >311 1 963 >594 3 931 1 890 1 923
prob3_050_040_060_035_045_10 1 240 1 280 1 423 1 280 1 320 1 543 1 087 1 280 1 549 1 543
prob3_050_090_110_005_015_05 1 387 1 789 1 789 1 789 >162 1 539 90 1 627 997 1 435
prob3_050_090_110_015_025_01 1 730 993 993 1 035 >102 842 171 995 1 641 1 834
prob3_050_090_110_025_035_10 2 053 3 729 2 563 2 561 >264 1 723 180 >2 823 1 855 1 865
prob3_050_090_110_035_045_01 1 119 1 691 1 198 1 116 129 95 72 129 93 1 087
prob3_050_090_110_035_045_06 2 996 4 413 4 413 4 412 >169 2 873 >613 >2 637 2 499 3 293
geom. mean 2 689.8 4 020.1 3 010.5 1 089.2 418.7 799.3 674.2 1 801.1 1 413.3 1 507.0
arithm. mean 11 010.5 10 645.8 13 111.1 4 904.5 4 173.8 5 792.3 7 804.3 6 019.0 9 151.2 6 018.4
total (73) 803 766 777 147 957 108 358 032 304 687 422 841 569 716 439 389 668 041 439 343