Mic2003 LB
Andrea Lodi
University of Bologna, Italy
[email protected]
• However, the exact solution of the resulting models often cannot be carried out for the problem
sizes of interest in real-world applications, hence one is interested in effective heuristic methods.
• Current MIP solvers incorporate most of the theoretical and practical results in the field of
Integer Programming:
general-purpose cutting planes, preprocessing, branching mechanisms, pricing methods, . . .
• MIP solvers used without proof of optimality are sometimes among the best heuristics.
• The same holds for truncated special-purpose branch-and-bound methods for problems with an
exponential number of constraints, e.g., Asymmetric Traveling Salesman Problem [Zhang, 2002]
AIM: integrating local search and metaheuristic ideas within Mixed Integer Programming
• A commonly used heuristic idea in the MIP context is so-called hard variable fixing or diving:
1. the solution of a continuous relaxation x∗ is “analyzed”;
2. some of its nonzero variables are heuristically rounded up to the nearest integer (if
non-integer) and then fixed to this value;
3. the method is iterated until either a feasible solution is found or the problem is infeasible.
• The obvious question raised by this mechanism, however, is:
How should one choose the actual variables to be fixed?
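The diving steps above can be sketched in plain Python. This is a minimal sketch: the selection rule (fixing the largest components first) and the `max_fix` cap are illustrative assumptions, not prescribed by the scheme.

```python
import math

def diving_fixes(x_star, frac_tol=1e-6, max_fix=5):
    """One round of hard variable fixing: examine the LP-relaxation
    solution x_star, pick some of its nonzero components, round the
    fractional ones up to the nearest integer, and fix them.  Returns
    a map {variable index: fixed value}.  A full diving loop would
    re-solve the relaxation under these fixings and iterate until a
    feasible solution is found or the problem becomes infeasible."""
    candidates = [(v, j) for j, v in enumerate(x_star) if v > frac_tol]
    candidates.sort(reverse=True)     # illustrative rule: largest values first
    fixes = {}
    for v, j in candidates[:max_fix]:
        is_integer = abs(v - round(v)) <= frac_tol
        fixes[j] = round(v) if is_integer else math.ceil(v)
    return fixes
```

The choice of which variables to fix is exactly the open question above; any scoring rule can replace the sort.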
• The idea is simple. In a binary problem in which a current feasible solution x̄ is given, impose a
soft variable-fixing constraint, fixing a significant number of variables without losing the
possibility of finding good feasible solutions:
∑_{j=1}^{n} x̄j xj ≥ ⌈0.9 ∑_{j=1}^{n} x̄j⌉ (1)
• Constraint (1) defines a neighborhood of x̄, and, since the constraint is linear, the
neighborhood can be explored using a generic MIP solver.
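As a concrete reading of constraint (1), the following sketch checks whether a candidate 0-1 vector keeps at least 90% of the reference support (function and parameter names are illustrative):

```python
import math

def satisfies_soft_fixing(x_bar, x, rho=0.9):
    """Soft variable fixing, constraint (1): at least ceil(rho * |support|)
    of the variables at 1 in the reference solution x_bar must stay at 1
    in the candidate x.  Unlike hard fixing, no individual variable is
    forced to a value: the solver chooses which ones to keep."""
    support = sum(x_bar)                            # number of 1's in x_bar
    kept = sum(xb * xj for xb, xj in zip(x_bar, x))
    return kept >= math.ceil(rho * support)
```

Because the constraint is a single linear inequality, it can be added to the model and handed to any MIP solver unchanged.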
(P) min cᵀx (2)
Ax ≥ b (3)
xj ∈ {0, 1} ∀j ∈ B ≠ ∅ (4)
xj ≥ 0, integer ∀j ∈ G (5)
xj ≥ 0 ∀j ∈ C (6)
• Moreover, we assume that an initial solution x̄, the so-called reference solution, is at hand, and
let S := {j ∈ B : x̄j = 1} denote the binary support of x̄.
• For a given positive integer parameter k, we define the k-OPT neighborhood N (x̄, k) of x̄ as
the set of the feasible solutions of (P ) satisfying the additional local branching constraint:
∆(x, x̄) := ∑_{j∈S} (1 − xj) + ∑_{j∈B\S} xj ≤ k (7)
where the two terms on the left-hand side count the number of binary variables flipping their value
(with respect to x̄) from 1 to 0 and from 0 to 1, respectively.
• Constraint (7) imposes a maximum Hamming distance of k among the feasible neighbors of x̄.
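The distance ∆(x, x̄) of constraint (7) is easy to state in code; a minimal sketch over 0-1 vectors:

```python
def local_branching_distance(x_bar, x):
    """Hamming-style distance of constraint (7) for 0-1 vectors:
    counts the variables flipped 1 -> 0 (first sum, over the support S
    of x_bar) plus those flipped 0 -> 1 (second sum, over B \\ S)."""
    one_to_zero = sum(1 - xj for xbj, xj in zip(x_bar, x) if xbj == 1)
    zero_to_one = sum(xj for xbj, xj in zip(x_bar, x) if xbj == 0)
    return one_to_zero + zero_to_one
```

The k-OPT neighborhood N(x̄, k) is then simply the set of feasible x with this distance at most k.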
• When the cardinality of S is the same for every feasible solution of (P), the local branching
constraint assumes the asymmetric form:
∑_{j∈S} (1 − xj) ≤ k′ (= k/2) (8)
which is the classical k′-OPT neighborhood for the Symmetric Traveling Salesman Problem.
• The local branching constraint can be used as a branching criterion within an enumerative
scheme for (P).
Indeed, the following disjunction can be imposed: ∆(x, x̄¹) ≤ k (left branch) or
∆(x, x̄¹) ≥ k + 1 (right branch).
• The idea is again simple: the neighborhood N (x̄, k) corresponding to the left branch must be
“sufficiently small” to be optimized within short computing time, but still “large enough” to
likely contain better solutions than x̄.
• Obviously, the choice of k is a problem by itself, but values of k in the range [10, 20] proved
effective in most cases.
• The neighborhoods defined by the local branching constraints can be explored by using a MIP
solver as a black-box, i.e., with a standard tactical branching criterion such as branching on
fractional variables.
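The overall scheme can be sketched as a loop around a black-box neighborhood exploration. Here `explore` stands in for a MIP solver run on (P) plus the cut ∆(x, x̄) ≤ k; the toy objective and brute-force explorer below are illustrative assumptions, not part of the method.

```python
from itertools import combinations

def local_branching(explore, x_start, k, max_rounds=20):
    """Sketch of the local branching loop: optimize over the k-OPT
    neighborhood of the current reference solution (left branch); if an
    improved solution is found, recenter on it (right branch plus a new
    local branching cut) and repeat; stop when the neighborhood is
    proved to contain no better solution."""
    current = x_start
    for _ in range(max_rounds):
        better = explore(current, k)
        if better is None:        # left node refuted: stop (or diversify)
            break
        current = better
    return current

# Toy stand-in for the black-box MIP solver: enumerate every 0-1 vector
# within Hamming distance k of the center, return the best improving one.
def make_explorer(cost):
    def explore(center, k):
        n, best, best_val = len(center), None, cost(center)
        for r in range(1, k + 1):
            for idxs in combinations(range(n), r):
                x = list(center)
                for j in idxs:
                    x[j] = 1 - x[j]
                if cost(x) < best_val:
                    best, best_val = x, cost(x)
        return best
    return explore
```

In the real scheme the enumeration inside `explore` is replaced by the solver's own tactical branching, so each left node is solved exactly (or within a time limit).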
[Figure: the basic local branching scheme as a branching tree. Node 1 carries the initial solution x̄¹
and the disjunction ∆(x, x̄¹) ≤ k (left) vs. ∆(x, x̄¹) ≥ k + 1 (right); the left node is solved by
tactical branching (T-nodes) and yields the improved solution x̄², on which the right node branches
again, producing x̄³, x̄⁴, . . . , until a final left node contains no improved solution.]
[Figure: incumbent objective value (ranging from 136,500 to 136,700) versus CPU time (0 to 5,000
seconds), comparing ILOG-Cplex in its feasibility version, ILOG-Cplex in its optimality version, and
Local branching.]
• The reference solution x̄1 is obtained by running the MIP solver until the “first” solution is
found.
• The local branching method concludes its run after 1,878 CPU seconds, whereas ILOG-Cplex
7.0 in its optimization version converges to optimality within 3,827 CPU seconds (the feasibility
version is unable to prove optimality within a time limit of 6,000 CPU seconds).
• In the nodes of the scheme which are explored through tactical branching (T-nodes), a large
number of branch-and-bound nodes are enumerated, but the information associated with them
is, in some sense, "lost" afterwards.
• The enhanced convergence behavior of the local branching scheme in proving optimality cannot
be guaranteed in all cases: we are currently working on a project devoted to this specific matter.
• Despite the nice behavior shown, the main objective is to devise a general-purpose heuristic
approach combining local search and MIP.
• Diversification:
A further improvement of the heuristic performance of the method can be obtained by
exploiting well-known diversification mechanisms borrowed from local search metaheuristics.
In our scheme, diversification is worth applying whenever the current left node is proved to
contain no improving solutions.
• Case (a):
If the incumbent solution has been improved, we backtrack to the father node and create a new
left-branch node associated with the new incumbent solution, without modifying the value of
parameter k.
• Case (b):
If the time limit is reached with no improved solution, instead, we reduce the size of the
neighborhood in an attempt to speed up its exploration.
This is obtained by reducing the right-hand side term by, e.g., ⌈k/2⌉.
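The two cases, together with the diversification trigger, amount to a simple update of the neighborhood radius. The function below is a sketch with illustrative names; the ⌈k/2⌉ step sizes are the ones suggested on these slides.

```python
import math

def next_k(k, improved, time_limit_hit):
    """Neighborhood-size update of the heuristic scheme (sketch):
    - improved incumbent (case a): recenter on it, keep k unchanged;
    - time limit hit with no improvement (case b): shrink by ceil(k/2);
    - left node refuted with no improvement: enlarge by ceil(k/2),
      i.e., the 'soft' diversification action."""
    if improved:
        return k                       # case (a)
    if time_limit_hit:
        return k - math.ceil(k / 2)    # case (b)
    return k + math.ceil(k / 2)        # soft diversification
```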
[Figure: the local branching tree in the heuristic setting, with a time limit imposed on each left
node. Left nodes solved within the limit yield improved solutions x̄², x̄³, . . . ; when the time limit is
reached with an improved solution, the search backtracks and recenters on it (case (a)); when it is
reached with no improved solution, the scheme goes back, reduces the neighborhood, and
re-explores (case (b)).]
• Soft diversification:
We first apply a "soft" action consisting of enlarging the current neighborhood by increasing its
size by, e.g., ⌈k/2⌉.
A new left-branch node is then explored and, in case no improved solution is found even in the
enlarged neighborhood (within the time limit), we apply a stronger action in the spirit of
Variable Neighborhood Search. [Mladenović & Hansen, 1997]
• Strong diversification:
We look for a solution (typically worse than the incumbent one) which is not “too far” from
the current reference solution.
We apply tactical branching to the current problem amended by ∆(x, x̄³) ≤ k + 2⌈k/2⌉,
but without imposing any upper bound on the optimal solution value.
The exploration is aborted as soon as the first feasible solution is found.
This solution (typically worse than the current best one) is then used as the new reference
solution.
[Figure: incumbent objective value decreasing from about 31,000 to 25,500 during the run of the
heuristic.]
∗ Gaps for NSR8K refer to 1 hour, 5 hours, and 10 hours of CPU time, respectively.
◦ The negative gaps for instance UMTS indicate an improvement of the best known solution.
The simple main idea discussed here opens many interesting fields of application in which the basic
local branching framework can extend its range of use.
The described approach uses the MIP solver as a black-box for performing the tactical
branchings.
This is remarkably simple to implement, but it has the disadvantage of wasting part of the
computational effort devoted, e.g., to the exploration of the nodes where no improved solution
could be found within the node time limit.
Therefore, a more integrated (and flexible) framework, where the two branching rules work in
tight cooperation, is expected to produce enhanced performance.
[Andreello, Fischetti & Lodi, Work in progress]
All the main ingredients of metaheuristics (defining the current solution neighborhood, dealing
with tabu solutions or moves, imposing a proper diversification, etc.) can easily be modeled in
terms of linear cuts to be dynamically inserted and removed from the model.
This naturally leads to the possibility of implementing a full general “new” metaheuristic
algorithm possibly taking into account the problem structure.
Very promising results in this direction. [Fischetti, Polo & Scantamburlo, 2003]
Despite the overall discussion, there is no need to use local branching constraints within a
general-purpose MIP solver.
These constraints can be integrated within special-purpose codes, both exact and heuristic
(black-boxes), designed for specific problems so as to enhance their heuristic capability.
Obviously, the only requirement is that the code is able to cope with linear inequalities.
Good results in this context. [Hernández-Pérez & Salazar-González, 2003]
Local branching is based on the assumption that B ≠ ∅, i.e., that there is a set of binary variables
and, moreover, that this set is of significant importance.
According to our computational experience, this is true even in case of MIPs involving general
integer variables, in that the 0-1 variables (which are often associated with big-M terms) are
likely to be largely responsible for the difficulty of the model.
However, in the general case of integer variables xj with lj ≤ xj ≤ uj, the local branching
constraint can be written as:
∆₁(x, x̄) := ∑_{j∈I: x̄j=lj} μj (xj − lj) + ∑_{j∈I: x̄j=uj} μj (uj − xj) + ∑_{j∈I: lj<x̄j<uj} μj (xj⁺ + xj⁻) ≤ k
where the weights μj are defined, e.g., as μj = 1/(uj − lj) ∀ j ∈ I, while the variation terms
xj⁺ and xj⁻ require the introduction into the MIP model of additional constraints of the form:
xj = x̄j + xj⁺ − xj⁻, xj⁺ ≥ 0, xj⁻ ≥ 0, ∀ j ∈ I : lj < x̄j < uj.
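A direct transcription of ∆₁ might look as follows. Note one labeled simplification: for interior reference values, |xj − x̄j| stands in for xj⁺ + xj⁻, which it equals at any optimum of the split formulation.

```python
def general_lb_distance(x, x_bar, lower, upper):
    """Weighted local branching distance Δ₁(x, x̄) for general integer
    variables with bounds lower[j] <= x[j] <= upper[j], using the
    suggested weights mu_j = 1 / (u_j - l_j)."""
    total = 0.0
    for xj, xbj, lj, uj in zip(x, x_bar, lower, upper):
        mu = 1.0 / (uj - lj)
        if xbj == lj:
            total += mu * (xj - lj)      # can only move up from the bound
        elif xbj == uj:
            total += mu * (uj - xj)      # can only move down from the bound
        else:
            total += mu * abs(xj - xbj)  # interior: |x - x̄| = x⁺ + x⁻ at optimum
    return total
```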
As stated, the local branching framework requires a starting (feasible) reference solution x̄1.
However, for difficult MIPs (such as, e.g., hard set partitioning models) even the definition of
this solution may require an excessive computing time.
In this case, one should consider the possibility of working with infeasible reference solutions by
adding slack variables to (some of) the constraints, while penalizing them in the objective
function.
Preliminary results. [Balas, 2003; Fischetti & Lodi, Work in progress]
Moreover, another interesting point is using local branching ideas to devise a method to
converge to an initial feasible solution without using the branch-and-bound framework.
This means using the concept of neighborhood to define a distance between a feasible
continuous solution and an infeasible integer one, and then solving a sequence of LPs
minimizing this distance.
[Fischetti, Glover & Lodi, Work in progress]
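The distance in question, between a continuous point and the integer point obtained by rounding it, can be sketched as follows; the function name and the simple rounding rule are illustrative of the idea, not of a finished method.

```python
def rounding_distance(x_lp, binary_idx):
    """Distance between a (feasible) continuous solution and the integer
    point obtained by rounding its binary components: the quantity a
    sequence of LPs would drive toward zero in order to reach an initial
    feasible integer solution without branch and bound."""
    return sum(abs(x_lp[j] - round(x_lp[j])) for j in binary_idx)
```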