
Introduction to Combinatorial Optimization

Dr. Farouq Zitouni

Kasdi Merbah University - Ouargla


Faculty of Information and Communication Technology
Department of Computer Science and Information Technology

October 3, 2023

Dr. Farouq Zitouni Advanced Operations Research October 3, 2023 1 / 117


Lecture’s objective

The main objective of studying combinatorial optimization is to discover effective algorithms and strategies for finding optimal solutions to complex problems with discrete decision variables and constraints, leading to improved decision-making in various practical applications.



Outline

1 What is combinatorial optimization?

2 Key concepts in optimization

3 Characteristics of optimization problems

4 Mathematical formulation

5 Examples of combinatorial optimization problems

6 Optimization algorithms





What is combinatorial optimization? (1/4)

Combinatorial optimization is a branch of mathematical optimization that deals with finding the best possible solution from a finite set of discrete choices. It involves optimizing a problem where the solution is typically a combination or arrangement of discrete elements, such as selecting a set of items from a given list, designing a schedule, or constructing a network.

In combinatorial optimization, the primary objectives are often to maximize certain benefits or minimize certain costs while satisfying a set of constraints. The key characteristics of combinatorial optimization problems include:



What is combinatorial optimization? (2/4)

Discreteness: The decision variables in these problems take on discrete values or represent discrete choices, as opposed to continuous optimization problems where variables can take any real value.
Finite set of choices: There is a finite set of possible solutions or choices to consider, often with a combinatorial explosion of possibilities as the problem size increases.
Objective function: These problems involve defining an objective function that quantifies the goal to be optimized, such as maximizing profit, minimizing cost, or minimizing time.



What is combinatorial optimization? (3/4)

Constraints: Constraints are imposed to limit the set of feasible solutions. These constraints can be linear or nonlinear and may represent resource limitations, logical conditions, or other requirements.
Optimization algorithms: Combinatorial optimization problems require specialized algorithms and heuristics to efficiently explore the vast solution space and identify the best solution.



What is combinatorial optimization? (4/4)

Common examples of combinatorial optimization problems include the Traveling Salesman Problem, Knapsack Problem, Job Scheduling, Maximum Cut Problem, and Graph Coloring, among many others. These problems have applications in various fields, including logistics, transportation, manufacturing, telecommunications, finance, and computer science, where efficient decision-making and resource allocation are crucial.

Combinatorial optimization is concerned with finding the most favorable combination or arrangement of discrete elements to achieve the desired objective while adhering to specified constraints. It plays a fundamental role in addressing complex decision-making and resource allocation challenges in diverse real-world scenarios.



Outline

1 What is combinatorial optimization?

2 Key concepts in optimization

3 Characteristics of optimization problems

4 Mathematical formulation

5 Examples of combinatorial optimization problems

6 Optimization algorithms



Key concepts in optimization (1/13)

Optimization problems
An optimization problem, in simple words, is a puzzle where you are trying to
find the best possible answer. Imagine you have a limited amount of money to
buy groceries, and you want to get the most food for your budget. That is an
optimization problem. You are looking for the ideal way to spend your money to
maximize what you can bring home. Fundamentally, optimization problems are
like real-life puzzles where you want to make choices to achieve the best outcome,
whether it is saving money, finding the quickest route, or making the most of your
resources. It is about making smart decisions to get the most value or benefit out
of a situation.



Key concepts in optimization (2/13)

Objective functions
An objective function is a mathematical expression that serves as a way to measure
how good or bad a particular solution or choice is within a given problem. Think
of it as a tool that helps us assign a numerical value to our decisions, where lower
values typically represent better choices. This function guides us in finding the most
favorable outcome by systematically evaluating and comparing different options to
determine which one is the best fit for our specific problem. In essence, the objective
function is like a compass that helps us navigate through a sea of possibilities to
reach the optimal destination.



Key concepts in optimization (3/13)

Decision variables
A decision variable is a specific quantity or factor that we can adjust or choose within
a problem to help us achieve our desired outcome. These variables represent the
choices we make in solving an optimization puzzle. By assigning different values to
these decision variables, we explore various scenarios to determine the best course
of action. Basically, decision variables are the levers or knobs we can tweak to
find the most effective solution, much like adjusting the settings on a machine to
produce the desired result.



Key concepts in optimization (4/13)

Feasible solutions and constraints


Feasible solutions refer to the set of choices or answers that satisfy all the rules and
requirements of the problem. These solutions are like the allowed outcomes; they
are the ones that do not break any of the established rules.

Constraints, on the other hand, are these rules and requirements themselves. They
are the conditions or limits that you must follow when you are trying to find the
best solution. Constraints serve as the boundaries that keep your choices within
the realm of what is acceptable or practical.



Key concepts in optimization (5/13)

Types of optimization problems (1/4)


Here is a non-exhaustive list of common types of optimization problems, described in layman's terms:
Linear optimization problem: This type involves finding the best outcome in situations where the relationships between variables are linear. It is like figuring out the most efficient way to allocate resources in a factory to maximize production while minimizing costs.
Integer optimization problem: Integer optimization is similar to linear optimization, but it deals with situations where the decision variables must be whole numbers (integers). It is like finding the optimal way to select items from a catalog while considering that you cannot purchase a fraction of an item.



Key concepts in optimization (6/13)

Types of optimization problems (2/4)


Nonlinear optimization problem: These problems are more complex as they
deal with situations where the relationship between variables is not linear. It is
like optimizing a recipe where the quantity of each ingredient affects the taste,
and these effects are not always straightforward.
Combinatorial optimization problem: This type involves finding the best
solution from a finite set of possible choices. It is like planning a road trip and
figuring out the best route to visit multiple cities while minimizing travel time
or cost.



Key concepts in optimization (7/13)

Types of optimization problems (3/4)


Convex optimization problem: Convex optimization deals with problems
where the objective function has a specific, well-behaved shape. It is like
finding the highest point on a hill without valleys; you can reach the top
without getting stuck in a lower spot.
Dynamic optimization problem: These problems consider decisions made
over time, taking into account how choices at one point affect future outcomes.
It is like managing your finances over several years, making investments and
savings decisions today with an eye on future goals.



Key concepts in optimization (8/13)

Types of optimization problems (4/4)


Stochastic optimization problem: Stochastic optimization deals with uncertain or random factors. It is like making decisions in a game with dice rolls; you want to maximize your chances of winning over many rounds.
Multi-objective optimization problem: In these problems, there are multiple conflicting objectives that you need to balance. It is like trying to choose a car that maximizes fuel efficiency, safety, and affordability simultaneously.



Key concepts in optimization (9/13)

Linear vs. nonlinear (1/3)


The primary difference between linear and nonlinear optimization problems lies in
the nature of the objective function and the constraints. Here is a breakdown of
the key distinctions:



Key concepts in optimization (10/13)

Linear vs. nonlinear - linear optimization problems (2/3)


Objective function: In linear optimization, the objective function is linear, meaning that it involves variables raised to the power of 1 and is composed of simple addition and subtraction. It has the form c1 x1 + c2 x2 + . . . + cn xn, where x1, x2, . . . , xn are the decision variables and c1, c2, . . . , cn are constants.
Constraints: The constraints in linear optimization are also linear, consisting of linear equations and inequalities. These constraints are represented as a11 x1 + a12 x2 + . . . + a1n xn ≤ b1, a21 x1 + a22 x2 + . . . + a2n xn ≥ b2, and so on.
Solutions: Linear optimization problems have well-defined, structured solutions. The feasible region (the set of all possible solutions) forms a convex polytope, and the optimal solution (maximum or minimum) is typically found at one of the corner points of this polytope.

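The corner-point property above can be illustrated with a small sketch. For a hypothetical two-variable linear program (data invented for illustration), we enumerate the vertices of the feasible polygon by intersecting constraint boundaries and evaluate the objective at each one:

```python
from itertools import combinations

# Illustrative LP (not from the lecture):
#   maximize 3*x1 + 2*x2
#   subject to x1 + x2 <= 4,  x1 <= 3,  x2 <= 2,  x1, x2 >= 0
# Each boundary is a line a1*x1 + a2*x2 = b (including the axes x1=0, x2=0).
lines = [(1, 1, 4), (1, 0, 3), (0, 1, 2), (1, 0, 0), (0, 1, 0)]

def intersect(l1, l2):
    """Intersection point of two lines, or None if they are parallel."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def feasible(p):
    x1, x2 = p
    return (x1 + x2 <= 4 + 1e-9 and x1 <= 3 + 1e-9 and x2 <= 2 + 1e-9
            and x1 >= -1e-9 and x2 >= -1e-9)

# Candidate vertices = feasible intersections of boundary pairs.
vertices = [p for l1, l2 in combinations(lines, 2)
            if (p := intersect(l1, l2)) is not None and feasible(p)]
# The optimum of a linear objective over a polytope sits at some vertex.
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
```

Evaluating only the handful of vertices suffices here precisely because the objective and constraints are linear; this is the geometric idea behind the simplex method.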


Key concepts in optimization (11/13)

Linear vs. nonlinear - nonlinear optimization problem (3/3)


Objective function: Nonlinear optimization involves objective functions that can contain variables raised to powers other than 1 (e.g., x^2, e^x), products of variables (e.g., x1 x2), and other nonlinear mathematical operations (e.g., trigonometric functions). The objective function can take more complex, nonlinear forms.
Constraints: Nonlinear optimization problems can have nonlinear constraints, where the constraints involve nonlinear equations or inequalities. These constraints can make the feasible region more complex and irregular.
Solutions: Nonlinear optimization problems are generally more challenging to solve than linear ones. The feasible region may not have a simple geometric shape, and the optimal solution might not be readily apparent. Solving nonlinear optimization problems often requires specialized numerical techniques, such as gradient-based methods or genetic algorithms.



Key concepts in optimization (12/13)

Continuous vs. discrete


The primary difference between discrete and continuous optimization problems is the nature of the decision variables. Discrete optimization deals with variables that take on distinct values from a finite set, often integers, while continuous optimization involves variables that can take any real value within a specified range. The objective function and constraints in either case can be linear or nonlinear; the key distinction is the type of decision variable. In other words, the relationships between the variables can be complex, but the variables themselves take either continuous or discrete values. The choice between these two types of optimization depends on the problem's characteristics and requirements.



Key concepts in optimization (13/13)

Solution space and feasible region


The solution space in optimization is like a playground where you explore all the
possible answers to a problem. It is the range or set of values that the decision
variables can take, considering no constraints or limitations. Think of it as a vast
field of options where you can test different combinations to find the best outcome.

The feasible region is a subset of the solution space, and it is like the part of
the playground where you are allowed to play by the rules. It includes all the
combinations of decision variables that satisfy the constraints of the problem. In
simple terms, it is where you look for the answers that not only work but also follow
the established guidelines or limitations.



Outline

1 What is combinatorial optimization?

2 Key concepts in optimization

3 Characteristics of optimization problems

4 Mathematical formulation

5 Examples of combinatorial optimization problems

6 Optimization algorithms



Characteristics of optimization problems (1/11)

N P-hardness and intractability (1/3)


NP-hardness is a classification of problems in computational complexity theory. A problem is considered NP-hard if it is at least as hard as the hardest problems in NP (Nondeterministic Polynomial time). In other words, an NP-hard problem is one for which no known algorithm finds the optimal solution in polynomial time.

In the context of optimization, when a problem is labeled as NP-hard, it means that as the size of the problem (the number of decision variables or constraints) increases, the time required by all known exact methods to find the optimal solution grows exponentially. Solving NP-hard problems often involves exhaustive search or trying out a vast number of possibilities, which makes them computationally challenging and impractical for large instances.



Characteristics of optimization problems (2/11)

N P-hardness and intractability (2/3)


Intractability refers to the difficulty or impracticality of solving certain optimization problems within a reasonable amount of time. Intractable problems are typically associated with NP-hardness, but the notion also extends to other problems that are computationally expensive and lack efficient algorithms for finding optimal solutions.

When an optimization problem is considered intractable, it means that as the problem size increases, the computational resources required to solve it (such as time and memory) grow exponentially. In practical terms, this often means that for large instances of the problem, finding the best solution becomes extremely time-consuming or even impossible within a reasonable time frame.



Characteristics of optimization problems (3/11)

N P-hardness and intractability (3/3)


In summary, N P-hardness and intractability in optimization problems are related
concepts. N P-hard problems are those that are as hard as the hardest problems
in N P, and intractable problems are those for which finding optimal solutions is
computationally difficult, especially as the problem size increases. These concepts
highlight the limitations of efficiently solving certain optimization problems and the
need for heuristic or approximation algorithms in practice.



Characteristics of optimization problems (4/11)

Search space explosion (1/2)


Search space explosion, in the context of optimization and computational complexity, refers to the rapid growth in the number of possible solutions or configurations that must be explored as the size or complexity of a problem increases. It occurs when the number of potential solutions increases exponentially with the problem's parameters or variables.

In optimization problems, the search space represents all possible combinations of decision variables that need to be evaluated, subject to the constraints, to find the optimal solution. As the problem becomes larger or more complex, the search space can grow significantly, making it increasingly challenging to explore all possible solutions in a reasonable amount of time.

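To make the explosion concrete, a short sketch: in a symmetric Traveling Salesman Problem with n cities, fixing the start city and ignoring tour direction leaves (n − 1)!/2 distinct tours, a count that quickly outgrows any exhaustive search.

```python
import math

def tour_count(n):
    """Number of distinct tours in a symmetric TSP with n cities:
    fix the start city and halve for the two travel directions."""
    return math.factorial(n - 1) // 2

# The count grows factorially with the number of cities.
for n in (5, 10, 15, 20):
    print(n, tour_count(n))
```

Already at 20 cities the count exceeds 6 × 10^16 tours, which is why exhaustive enumeration is abandoned in favor of smarter algorithms.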


Characteristics of optimization problems (5/11)

Search space explosion (2/2)


Search space explosion is a critical issue when solving complex problems, especially problems that are NP-hard or intractable. In these cases, the sheer size of the search space makes it impractical or impossible to examine every possible solution exhaustively. To address search space explosion, optimization algorithms often employ techniques such as heuristics, pruning strategies, and approximations to narrow down the search and focus on promising regions of the space while avoiding fruitless exploration of vast and unproductive solution paths.



Characteristics of optimization problems (6/11)

Heuristics
In the optimization context, a heuristic is like a smart and practical rule of thumb
or a shortcut that helps you find good solutions to problems without having to check
every possible option. It is a way of making educated guesses and decisions based
on your knowledge and experience, even if you do not have all the information.

Think of it as using your intuition and past experience to quickly solve a problem or
make a choice, rather than spending a lot of time trying every possible approach.
Heuristics are handy when you want a reasonably good solution without getting
bogged down in exhaustive and time-consuming calculations or searches.
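As a small illustration (with made-up item data), here is a classic heuristic for the knapsack problem: rank items by value-to-weight ratio and take whatever still fits. It is fast and often good, but carries no optimality guarantee.

```python
def greedy_knapsack(items, capacity):
    """Greedy 0/1 knapsack heuristic.
    items: list of (weight, value) pairs. Returns (total_value, chosen indices)."""
    # Consider items in decreasing value-per-unit-weight order.
    order = sorted(range(len(items)),
                   key=lambda i: items[i][1] / items[i][0], reverse=True)
    total_value, load, chosen = 0, 0, []
    for i in order:
        w, v = items[i]
        if load + w <= capacity:   # take the item only if it still fits
            load += w
            total_value += v
            chosen.append(i)
    return total_value, chosen

# Hypothetical example: four items, capacity 5.
value, picked = greedy_knapsack([(2, 3), (3, 4), (4, 5), (5, 8)], 5)
```

The rule of thumb here is "best bang for the buck first": one pass, no exhaustive search, and in many instances a solution at or near the optimum.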



Characteristics of optimization problems (7/11)

Importance of heuristic methods (1/5)


Heuristic methods play a crucial role in optimization for several important reasons:

Handling intractable problems: Many optimization problems, especially those classified as NP-hard, become computationally intractable for large instances. Heuristic methods provide practical ways to find reasonably good solutions within a reasonable amount of time, even when exact algorithms would be impractical.
Efficiency: Heuristics are designed to be computationally efficient. They trade off optimality for speed, making them suitable for solving large-scale, real-world problems where finding the absolute best solution may not be feasible.



Characteristics of optimization problems (8/11)

Importance of heuristic methods (2/5)


Exploring complex search spaces: In complex optimization problems, the search space can be vast and intricate. Heuristics use smart strategies to navigate this space efficiently, focusing on promising regions and avoiding fruitless exploration.
Problem-specific adaptation: Heuristics can be tailored to the specific characteristics of a problem. This adaptability allows them to exploit problem-specific insights, making them effective in domains where generic algorithms may struggle.
Approximation of optimal solutions: While heuristics do not guarantee optimality, they often produce solutions that are very close to optimal. In practice, near-optimal solutions are often sufficient for many real-world applications.



Characteristics of optimization problems (9/11)

Importance of heuristic methods (3/5)


Robustness: Heuristics are typically robust to variations and uncertainties in problem instances. They can handle noisy or incomplete data and still provide acceptable solutions.
Scalability: Heuristic methods can scale to handle problems of various sizes, from small to large instances, making them versatile for a wide range of applications.
Problem diversity: Heuristics are applicable to a broad spectrum of optimization problems, from scheduling and routing to packing and assignment, making them a valuable tool in many fields.



Characteristics of optimization problems (10/11)

Importance of heuristic methods (4/5)


Problem solving in real time: Heuristics are often used in real-time decision-making scenarios where quick, satisfactory solutions are needed. Examples include logistics and transportation planning.
Algorithm development: Heuristics serve as a foundation for developing more advanced optimization techniques. They provide insights and strategies that can be integrated into more sophisticated algorithms.



Characteristics of optimization problems (11/11)

Importance of heuristic methods (5/5)


In summary, heuristic methods are essential tools in optimization because they provide practical, efficient, and scalable solutions for complex problems where exact solutions are elusive or impractical. They bridge the gap between theoretical optimality and real-world applicability, allowing researchers to tackle a wide range of optimization challenges effectively.



Outline

1 What is combinatorial optimization?

2 Key concepts in optimization

3 Characteristics of optimization problems

4 Mathematical formulation

5 Examples of combinatorial optimization problems

6 Optimization algorithms



Mathematical formulation (1/9)

A combinatorial optimization problem can be mathematically formulated using the following standard notation: given a finite set of elements or objects, typically denoted as S, and a set of feasible solutions (subsets of S), denoted as F, each with an associated cost or objective function value f(X) for X ∈ F, the goal is to find a solution X* ∈ F that optimizes (i.e., minimizes or maximizes) the objective function f(X) while satisfying a set of constraints, denoted as C.



Mathematical formulation (2/9)

The previous formulation provides a general framework for combinatorial optimization problems. Depending on the specific problem, you would define the set S, the feasible solutions F, the objective function f(X), and the constraints C accordingly. The choice of decision variables, objective function, and constraints depends on the problem you are addressing. Here are a few examples of how this formulation can be applied to specific combinatorial optimization problems:



Mathematical formulation (3/9)
Traveling salesman problem:
S consists of a set of cities.
F includes all possible permutations of visiting the cities.
f(X) is the total distance or cost of the tour when visiting the cities in the order specified by permutation X.



Mathematical formulation (4/9)
Knapsack problem:
S represents a set of items, each with a weight and a value.
F includes all possible subsets of items.
f(X) is the total value of the items in subset X, subject to the constraint that the total weight does not exceed a certain limit.



Mathematical formulation (5/9)
Graph coloring problem:
S represents a set of vertices in a graph.
F includes all possible vertex colorings of the graph.
f(X) is a measure of the quality of the coloring, typically the number of colors used, with the objective of minimizing this number while ensuring that adjacent vertices have different colors.

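A minimal heuristic sketch for this problem (vertex order and data are arbitrary): greedily give each vertex the smallest color not used by its already-colored neighbors. This keeps adjacent vertices differently colored but does not necessarily minimize the number of colors.

```python
def greedy_coloring(adj):
    """Greedy vertex coloring.
    adj: dict mapping each vertex to the set of its neighbors.
    Returns a dict mapping each vertex to a color (0, 1, 2, ...)."""
    colors = {}
    for v in adj:                                    # visit vertices in dict order
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:                             # smallest unused color
            c += 1
        colors[v] = c
    return colors

# Hypothetical example: a triangle needs 3 colors, a path only 2.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
coloring = greedy_coloring(triangle)
```

A proper coloring comes out in a single pass; finding the true minimum number of colors (the chromatic number) remains NP-hard.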


Mathematical formulation (6/9)

In each of these examples, the problem-specific details determine how S, F, and f(X) are defined, as well as any additional constraints that may be imposed.



Mathematical formulation (7/9)

The canonical formulation of an optimization problem refers to a standardized, generic way of expressing optimization problems, making it easier to recognize and work with various types of problems. It provides a common structure that encompasses both minimization and maximization. The canonical form is as follows: given a real-valued objective function f(X) and sets of constraints gi(X) ≤ 0 and hj(X) = 0, where X is a vector of decision variables, the goal is to find the vector X* that minimizes the objective function while satisfying the constraints:

Minimize    f(X)
Subject to  gi(X) ≤ 0,  i ∈ {1, . . . , m}
            hj(X) = 0,  j ∈ {1, . . . , n}
With        X ∈ D1 × . . . × Dd



Mathematical formulation (8/9)

In this formulation:
1 X = [x1 , . . . , xd ] represents the vector of decision variables to be optimized.
2 f(X) is the objective function to be minimized.
3 gi(X) ≤ 0 are inequality constraints, where i ranges from 1 to m. These constraints impose limitations on the feasible region.
4 hj(X) = 0 are equality constraints, where j ranges from 1 to n. These constraints represent relationships that must hold in the solution.
5 D1 × . . . × Dd are the domains to which the decision variables belong.
NB: For a maximization problem, change Minimize to Maximize while keeping the same constraints.

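The components above can be transcribed directly into code. The sketch below (with a hypothetical objective and constraints chosen purely for illustration) checks feasibility against the gi and hj and minimizes f by brute force over a small finite domain, only to make the notation concrete.

```python
def solve_canonical(f, gs, hs, domain, tol=1e-9):
    """Brute-force canonical form: minimize f(X) subject to
    g(X) <= 0 for all g in gs and h(X) = 0 for all h in hs,
    over an explicit finite domain (suitable only for tiny instances)."""
    feasible = [x for x in domain
                if all(g(x) <= tol for g in gs)        # inequality constraints
                and all(abs(h(x)) <= tol for h in hs)] # equality constraints
    return min(feasible, key=f) if feasible else None

# Hypothetical instance: minimize (x1 - 2)^2 + x2^2
# with g(X) = x1 + x2 - 3 <= 0 and h(X) = x2 - 1 = 0,
# over a small integer grid D1 x D2.
domain = [(x1, x2) for x1 in range(5) for x2 in range(5)]
best = solve_canonical(lambda X: (X[0] - 2) ** 2 + X[1] ** 2,
                       [lambda X: X[0] + X[1] - 3],
                       [lambda X: X[1] - 1],
                       domain)
```

Real solvers exploit structure (linearity, convexity, integrality) instead of enumerating the domain, but the feasibility test is exactly the canonical constraints above.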


Mathematical formulation (9/9)

This canonical form is widely used in mathematical optimization and can be adapted to represent a wide range of optimization problems, including linear programming, nonlinear programming, integer programming, and more. Expressing optimization problems in this standardized format makes it easier to apply various optimization algorithms and techniques.



Outline

1 What is combinatorial optimization?

2 Key concepts in optimization

3 Characteristics of optimization problems

4 Mathematical formulation

5 Examples of combinatorial optimization problems

6 Optimization algorithms



Examples of combinatorial optimization problems (1/19)

Traveling salesman problem: The Traveling Salesman Problem (TSP) is a classic combinatorial optimization problem that involves finding the shortest possible route that visits a set of cities exactly once and returns to the starting city. In other words, it seeks the most efficient way for a traveling salesman to visit all cities on a tour and return to the initial city while minimizing the total travel distance or cost. We give the mathematical formulation of the Traveling Salesman Problem.
Sets and Parameters:
N: Set of cities to be visited, where N = {1, 2, . . . , n}.
dij: Distance or cost between city i and city j, ∀i, j ∈ N.
Decision Variables:
xij: Binary decision variable, where xij = 1 if the salesman travels directly from city i to city j, and xij = 0 otherwise, ∀i, j ∈ N.



Examples of combinatorial optimization problems (2/19)
Objective Function: Minimize the total travel distance or cost, which is the sum of distances over all selected city pairs:

Minimize ∑_{i∈N} ∑_{j∈N, j≠i} xij dij

Constraints:
Ensure that each city is left exactly once:

∑_{j∈N, j≠i} xij = 1, ∀i ∈ N

Ensure that each city is entered exactly once:

∑_{i∈N, i≠j} xij = 1, ∀j ∈ N

Sub-tour elimination constraints (to prevent the formation of sub-tours in the tour):

∑_{i∈S} ∑_{j∈S, j≠i} xij ≤ |S| − 1, ∀S ⊂ N : 2 ≤ |S| ≤ |N| − 1

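The formulation above can be solved exactly for tiny instances by enumerating all permutations, which also illustrates the factorial search space. A minimal sketch with made-up distances:

```python
from itertools import permutations

def tsp_bruteforce(d):
    """Exact TSP by enumeration: try every tour starting and ending at
    city 0 and keep the cheapest. d is a symmetric distance matrix.
    Runtime is O((n-1)!), so this only works for very small n."""
    n = len(d)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(d[tour[k]][tour[k + 1]] for k in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Hypothetical 4-city instance.
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
cost, tour = tsp_bruteforce(d)
```

Note that enumerating permutations implicitly satisfies the degree and sub-tour constraints of the formulation; integer-programming solvers must enforce them explicitly.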


Examples of combinatorial optimization problems (3/19)

Applications: The Traveling Salesman Problem has numerous practical applications in various fields, including:
Logistics and route planning: Optimizing delivery routes for trucks, couriers,
or mail carriers to minimize travel time or distance.
Manufacturing and production: Sequencing the order of operations in a
manufacturing process to reduce material handling costs.
Circuit board manufacturing: Determining the optimal sequence for drilling
holes on a circuit board to minimize production time.
Vehicle routing: Planning efficient routes for service vehicles, such as garbage
trucks or public transportation.
Network design: Designing telecommunication networks to connect a set of
locations with minimal cable length.
Tourism: Planning tourist itineraries to visit multiple attractions with minimal
travel time.



Examples of combinatorial optimization problems (4/19)

Knapsack problem: The Knapsack Problem is a classic combinatorial optimization problem that involves selecting a subset of items from a given set, each with a specific weight and value, such that the total weight does not exceed a predefined capacity while maximizing the total value of the selected items. We give the mathematical formulation of the Knapsack Problem.
Sets and Parameters:
n: The number of items, where n is a positive integer.
W : The maximum weight capacity of the knapsack.
wi : The weight of item i, for i = 1, 2, . . . , n.
vi : The value of item i, for i = 1, 2, . . . , n.
Decision Variables:
xi : Binary decision variable, where xi = 1 if item i is selected to be included
in the knapsack, and xi = 0 otherwise, for i = 1, 2, . . . , n.



Examples of combinatorial optimization problems (5/19)

Objective Function: Maximize the total value of the selected items:

\[
\text{Maximize} \quad \sum_{i=1}^{n} v_i x_i
\]

Constraints: Ensure that the total weight of the selected items does not exceed the
capacity of the knapsack:

\[
\sum_{i=1}^{n} w_i x_i \leq W
\]
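For small instances, this formulation can be checked directly by enumerating every binary vector x; a minimal brute-force sketch (the item weights, values, and capacity below are illustrative, not from the lecture):

```python
from itertools import product

def knapsack_bruteforce(weights, values, W):
    """Enumerate all 0/1 assignments x and keep the best feasible one."""
    n = len(weights)
    best_value, best_x = 0, [0] * n
    for x in product([0, 1], repeat=n):
        total_weight = sum(w * xi for w, xi in zip(weights, x))
        total_value = sum(v * xi for v, xi in zip(values, x))
        # keep x only if it satisfies the capacity constraint of the model
        if total_weight <= W and total_value > best_value:
            best_value, best_x = total_value, list(x)
    return best_value, best_x

# Illustrative instance: 4 items, capacity 8.
value, x = knapsack_bruteforce([3, 4, 5, 2], [4, 5, 6, 3], 8)
```

Each candidate vector is kept only if it satisfies the weight constraint, mirroring the objective and constraint stated above.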



Examples of combinatorial optimization problems (6/19)

Applications: The Knapsack Problem has many practical applications in various
domains, including:
Resource allocation: Optimizing the allocation of limited resources, such as
budget or space, to maximize the value or benefit.
Project scheduling: Scheduling tasks or projects with limited resources (e.g.,
time or manpower) to maximize the total project value.
Manufacturing: Deciding which products or components to manufacture and
store in inventory to maximize profit while adhering to production capacity
constraints.
Data compression: Selecting a subset of data elements to maximize the
compressed data’s value while staying within a storage or transmission capacity.
Cutting stock problem: Cutting raw materials into pieces of different sizes
to meet customer demand while minimizing waste.



Examples of combinatorial optimization problems (7/19)

Bin-packing problem: The Bin Packing Problem (BPP) is a well-known combinatorial
optimization problem that involves packing a set of items, each with a given
size or volume, into a minimum number of bins, each with a fixed capacity. The
goal is to minimize the number of bins used while ensuring that all items are packed
without exceeding the bin capacities. We give the mathematical formulation of the
Bin Packing Problem.
Sets and Parameters:
n: The number of items, where n is a positive integer.
B: The fixed capacity of each bin or container.
si : The size or volume of item i, for i = 1, 2, . . . , n.
k: The number of available bins, an upper bound on the number of bins used.
Decision Variables:
xij : Binary decision variable, where xij = 1 if item i is placed in bin j, and
xij = 0 otherwise, for i = 1, 2, . . . , n and j = 1, 2, . . . , k.



Examples of combinatorial optimization problems (8/19)

Objective Function: Minimize the number of bins used:

\[
\text{Minimize} \quad k
\]

Constraints:
Each item must be placed in exactly one bin:

\[
\sum_{j=1}^{k} x_{ij} = 1, \quad \forall i = 1, 2, \ldots, n
\]

The total size of items placed in a bin must not exceed its capacity:

\[
\sum_{i=1}^{n} s_i x_{ij} \leq B, \quad \forall j = 1, 2, \ldots, k
\]
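Solving this model exactly is NP-hard, so simple constructive heuristics are common in practice. A first-fit sketch (a heuristic, not the exact formulation above; the item sizes and capacity are illustrative):

```python
def first_fit(sizes, B):
    """Place each item into the first open bin with enough remaining capacity;
    open a new bin only when no existing bin fits."""
    bins = []        # remaining capacity of each open bin
    assignment = []  # bin index chosen for each item
    for s in sizes:
        for j, remaining in enumerate(bins):
            if s <= remaining:
                bins[j] -= s
                assignment.append(j)
                break
        else:  # no open bin fits: open a new one
            bins.append(B - s)
            assignment.append(len(bins) - 1)
    return len(bins), assignment

used, assignment = first_fit([4, 8, 1, 4, 2, 1], B=10)
```

First-fit never uses more than roughly 1.7 times the optimal number of bins, which is why such heuristics are a standard baseline for this problem.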



Examples of combinatorial optimization problems (9/19)
Applications: The Bin Packing Problem has a wide range of practical applications
across various domains, including:
Logistics and transportation: Efficiently loading items into trucks or con-
tainers to minimize the number of shipments or transportation costs.
Manufacturing: Scheduling production and allocating materials to minimize
waste or processing time.
Storage and warehousing: Optimizing storage in warehouses or storage fa-
cilities to maximize space utilization.
Resource allocation: Allocating limited resources, such as time slots or pro-
cessing units, to tasks or jobs efficiently.
Cutting stock problem: Cutting rolls of material (e.g., paper, steel) into
smaller pieces to fulfill customer orders while minimizing waste.
Memory management in computing: Allocating memory blocks to processes
or programs to optimize memory usage.
Packing and layout design: Designing layouts for printing or packaging to
minimize material usage and waste.



Examples of combinatorial optimization problems (10/19)

Vehicle routing problem: The Vehicle Routing Problem (VRP) is a combinatorial
optimization problem that involves determining the most efficient way to deliver
goods or provide services to a set of customers using a fleet of vehicles. The goal
is to minimize the total cost or distance traveled by the vehicles while satisfying
certain constraints, such as capacity limitations and time windows. We give the
mathematical formulation of the classical Vehicle Routing Problem.



Examples of combinatorial optimization problems (11/19)

Sets and Parameters:


N: Set of customers, where N = {1, 2, . . . , n}.
V : Set of vehicles, where V = {1, 2, . . . , m}.
qi : Demand of customer i, representing the quantity that needs to be delivered.
Q: Maximum capacity of each vehicle.
cij : Cost or distance between customer i and customer j.
ti : Time window for customer i, representing the allowable time for service.
M: A large positive number used in formulations.
Decision Variables:
xijk : Binary decision variable, where xijk = 1 if vehicle k travels directly from
customer i to customer j, and xijk = 0 otherwise, for i, j ∈ N and ∀k ∈ V .
yik : Binary decision variable, where yik = 1 if vehicle k serves customer i, and
yik = 0 otherwise, for i ∈ N and ∀k ∈ V .



Examples of combinatorial optimization problems (12/19)
Objective Function: Minimize the total cost or distance traveled by the vehicles:

\[
\text{Minimize} \quad \sum_{k \in V} \sum_{i \in N} \sum_{j \in N,\, j \neq i} c_{ij} x_{ijk}
\]

Constraints:
Each customer must be visited exactly once by exactly one vehicle:

\[
\sum_{k \in V} y_{ik} = 1, \quad \forall i \in N
\]

Capacity constraints for each vehicle:

\[
\sum_{i \in N} q_i y_{ik} \leq Q, \quad \forall k \in V
\]

Vehicle departure and arrival time constraints (time windows):

\[
t_i \leq \sum_{j \in N,\, j \neq i} \sum_{k \in V} c_{ij} x_{ijk} \leq t_i + M (1 - y_{ik}), \quad \forall i \in N,\ k \in V
\]
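A constructive heuristic in the spirit of this model can be sketched as follows (it ignores time windows and only enforces the capacity constraint; the coordinates and demands are illustrative, and this is a feasibility-oriented sketch, not an optimal VRP solver):

```python
import math

def greedy_vrp_routes(coords, demands, Q, depot=(0.0, 0.0)):
    """Build routes greedily: repeatedly start a route at the depot and
    visit the nearest unserved customer that still fits in the vehicle."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    unserved = set(range(len(coords)))
    routes = []
    while unserved:
        load, pos, route = 0, depot, []
        while True:
            # customers whose demand still fits in the remaining capacity
            feasible = [i for i in unserved if load + demands[i] <= Q]
            if not feasible:
                break
            nxt = min(feasible, key=lambda i: dist(pos, coords[i]))
            route.append(nxt)
            load += demands[nxt]
            pos = coords[nxt]
            unserved.remove(nxt)
        routes.append(route)
    return routes

routes = greedy_vrp_routes([(0, 2), (2, 0), (3, 3), (5, 1)],
                           demands=[2, 3, 2, 4], Q=5)
```

Every customer ends up in exactly one route and no route exceeds the capacity Q, mirroring the assignment and capacity constraints of the model.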



Examples of combinatorial optimization problems (13/19)
Applications: The Vehicle Routing Problem has numerous practical applications
across various industries, including:
Logistics and distribution: Optimizing delivery routes for trucks, vans, or
drones to minimize transportation costs and delivery times.
Home healthcare: Planning routes for healthcare professionals to visit pa-
tients, considering time windows and capacity constraints.
Waste collection: Efficiently routing garbage or recycling collection vehicles
to minimize fuel consumption and service times.
Postal services: Designing delivery routes for postal carriers to optimize de-
livery efficiency.
School bus routing: Planning bus routes for school transportation to minimize
travel times and ensure student safety.
Emergency services: Dispatching emergency vehicles (ambulances, fire trucks)
to incidents while minimizing response times.
Fleet management: Managing a fleet of vehicles for various purposes, such
as maintenance, repair, or supply replenishment.



Examples of combinatorial optimization problems (14/19)

Set cover problem: The Set Cover Problem is a classical combinatorial optimization
problem that involves selecting a minimum number of sets from a given
collection of sets to cover all elements of a universal set. In other words, it seeks
to find the smallest sub-collection of sets such that every element of the universal
set is included in at least one of the selected sets. We give the mathematical
formulation of the Set Cover Problem.
Sets and Parameters:
U: Universal set containing n elements, U = {1, 2, . . . , n}.
Si : A subset of U, for i = 1, 2, . . . , k (together the Si form the given collection of sets).
Decision Variables:
xi : Binary decision variable, where xi = 1 if set Si is selected to be part of the
cover, and xi = 0 otherwise.



Examples of combinatorial optimization problems (15/19)

Objective Function: Minimize the number of selected sets in the cover:

\[
\text{Minimize} \quad \sum_{i=1}^{k} x_i
\]

Constraints: Ensure that each element in the universal set is covered by at least
one selected set:

\[
\sum_{i : j \in S_i} x_i \geq 1, \quad \forall j \in U
\]
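The standard greedy heuristic for this model repeatedly selects the set covering the most still-uncovered elements, which yields a logarithmic approximation guarantee. A minimal sketch (the universe and subsets below are illustrative):

```python
def greedy_set_cover(universe, subsets):
    """Repeatedly pick the subset covering the most still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(subsets)),
                   key=lambda i: len(uncovered & subsets[i]))
        if not uncovered & subsets[best]:
            raise ValueError("infeasible: some element is in no subset")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

U = {1, 2, 3, 4, 5}
S = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
chosen = greedy_set_cover(U, S)
```

Here the greedy rule first picks the largest set, then whichever set covers the remaining uncovered elements best.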



Examples of combinatorial optimization problems (16/19)

Applications: The Set Cover Problem has a wide range of practical applications
in various fields, including:
Network design: Selecting the minimum number of communication stations
to cover all areas in a network.
Facility location: Choosing the minimum number of locations for warehouses
or facilities to serve all customers.
Sensor placement: Placing the minimum number of sensors in a region to
monitor events or phenomena.
Advertising campaigns: Selecting the minimum number of advertisements
to reach a target audience.
Document retrieval: Identifying relevant documents in information retrieval
systems.
Airline crew scheduling: Assigning flight crews to routes to cover all flights
with the fewest crew members.



Examples of combinatorial optimization problems (17/19)

Assignment problem: The Assignment Problem is a classical combinatorial optimization
problem that involves finding the optimal assignment of a set of agents or
workers to a set of tasks or jobs in such a way that the total cost or time required
is minimized. In this problem, each agent can be assigned to exactly one task, and
each task can be assigned to exactly one agent. The goal is to determine the most
cost-effective or time-efficient assignment. We give the mathematical formulation
of the Assignment Problem.
Sets and Parameters:
N: Set of agents or workers, where N = {1, 2, . . . , n}.
M: Set of tasks or jobs, where M = {1, 2, . . . , m}.
cij : Cost or time required for agent i to perform task j, ∀i ∈ N, j ∈ M.
Decision Variables:
xij : Binary decision variable, where xij = 1 if agent i is assigned to task j, and
xij = 0 otherwise, ∀i ∈ N, j ∈ M.



Examples of combinatorial optimization problems (18/19)

Objective Function: Minimize the total cost or time of the assignment:

\[
\text{Minimize} \quad \sum_{i \in N} \sum_{j \in M} c_{ij} x_{ij}
\]

Constraints:
Each agent is assigned to exactly one task:

\[
\sum_{j \in M} x_{ij} = 1, \quad \forall i \in N
\]

Each task is assigned to exactly one agent:

\[
\sum_{i \in N} x_{ij} = 1, \quad \forall j \in M
\]
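Because a feasible solution is a one-to-one matching, for n = m it corresponds to a permutation of the tasks, so small instances can be solved by brute force. A sketch (the cost matrix is illustrative; in practice the Hungarian algorithm solves this problem in polynomial time):

```python
from itertools import permutations

def assignment_bruteforce(cost):
    """Try every one-to-one assignment (a permutation of tasks)
    and keep the cheapest; cost[i][j] = cost of agent i doing task j."""
    n = len(cost)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):  # perm[i] = task given to agent i
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_cost, best_perm = total, perm
    return best_cost, best_perm

cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
best_cost, best_perm = assignment_bruteforce(cost)
```

Enumerating all n! permutations is only viable for very small n, which motivates the specialized polynomial-time algorithms for this problem.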



Examples of combinatorial optimization problems (19/19)
Applications: The Assignment Problem has a wide range of practical applications
in various fields, including:
Personnel assignment: Assigning workers to tasks or projects to minimize
labor costs or completion times.
Job scheduling: Scheduling machines or workers to perform jobs to minimize
processing times or makespan.
Data matching: Matching data records from different sources to minimize
data reconciliation efforts.
Facility location: Assigning customers to service facilities to minimize trans-
portation costs or service times.
Medical residency matching: Matching medical students to residency pro-
grams based on preferences and qualifications.
Auction and bidding: Allocating goods or resources to bidders to maximize
revenue or utility.
Sports scheduling: Scheduling games or matches for sports teams or players
to optimize venues and timing.



Outline

1 What is combinatorial optimization?

2 Key concepts in optimization

3 Characteristics of optimization problems

4 Mathematical formulation

5 Examples of combinatorial optimization problems

6 Optimization algorithms



Optimization algorithms (1/51)

Exact vs. Approximate Algorithms (1/6)


In the field of combinatorial optimization, algorithms can be categorized into two
main classes: exact algorithms and approximate algorithms. These two types of
algorithms are used to solve optimization problems, but they differ in their approach
and the guarantees they provide in terms of finding the optimal solution.



Optimization algorithms (2/51)

Exact vs. Approximate Algorithms - Exact Algorithms (2/6)


Objective: Exact algorithms are designed to find the optimal solution to a
combinatorial optimization problem. They aim to guarantee that the solution
they produce is the best possible, given the problem’s constraints and objective
function.
Method: Exact algorithms explore the entire solution space systematically to
evaluate all possible combinations or permutations of solutions. They may
use techniques like branch and bound, dynamic programming, integer linear
programming, or exhaustive search.
Guarantee: Exact algorithms provide a provable guarantee that the solution
they find is the globally optimal solution. In other words, they can prove that
there is no better solution within the feasible solution space.
Computational complexity: Exact algorithms can be computationally intensive
and may become impractical for large-scale or NP-hard problems. They
often have exponential time complexity in the worst case.



Optimization algorithms (3/51)

Exact vs. Approximate Algorithms - Exact Algorithms (3/6)


Examples of exact algorithms include the branch and bound method for solving
traveling salesman problems and the simplex algorithm for linear programming.



Optimization algorithms (4/51)

Exact vs. Approximate Algorithms - Approximate Algorithms (4/6)


Objective: Approximate algorithms, also known as heuristics or approximation
algorithms, aim to find a good solution to a combinatorial optimization problem
without the guarantee of optimality. They prioritize efficiency and practicality
over the guarantee of finding the absolute best solution.
Method: Approximate algorithms often employ techniques that quickly gener-
ate solutions or use heuristics to guide their search. These techniques sacrifice
optimality for speed.
Guarantee: Approximate algorithms do not guarantee the optimal solution.
Instead, they provide an approximation to the optimal solution, often with a
known bound on the quality of the approximation. This bound helps assess
how close the approximate solution is to the true optimal solution.
Computational complexity: Approximate algorithms are typically more effi-
cient and suitable for solving large-scale problems, especially when finding the
exact solution is computationally prohibitive. They often have polynomial or
near-polynomial time complexity.



Optimization algorithms (5/51)

Exact vs. Approximate Algorithms - Approximate Algorithms (5/6)


Examples of approximate algorithms include the greedy algorithm for the knapsack
problem and the nearest neighbor algorithm for the traveling salesman problem.



Optimization algorithms (6/51)

Exact vs. Approximate Algorithms (6/6)


The choice between using an exact or an approximate algorithm depends on various
factors, including the problem’s size, the available computational resources, and the
required level of solution quality. Exact algorithms are preferred when optimality
is crucial and the problem size is manageable, while approximate algorithms are
favored for large-scale problems where finding an exact solution is infeasible within
a reasonable time frame.



Optimization algorithms (7/51)

Exact Algorithms - Enumeration (1/4)


Enumeration is a fundamental exact algorithmic technique used in combinatorial
optimization to systematically explore and evaluate all possible solutions within the
feasible solution space. This technique is particularly useful for relatively small
instances of combinatorial problems where it is feasible to enumerate all possible
combinations or permutations. Enumeration guarantees that the optimal solution
will be found because it exhaustively checks every possible solution.



Optimization algorithms (8/51)

Exact Algorithms - Enumeration (2/4)


Here are the key aspects of enumeration as an exact algorithm:
Exhaustive search: Enumeration evaluates every candidate solution within
the problem’s feasible solution space. It generates all possible combinations or
permutations and calculates the objective function value for each one.
Brute-Force approach: Enumeration is often referred to as a brute-force
approach because it does not use any specific heuristics or optimization strate-
gies to prune the search space. It simply checks every possible solution.
Optimality guarantee: Enumeration provides an optimality guarantee. If the
solution space is finite and the objective function is well-defined, this method
is guaranteed to find the globally optimal solution. This is particularly valuable
for decision problems where finding the best possible solution is critical.



Optimization algorithms (9/51)

Exact Algorithms - Enumeration (3/4)


Computational complexity: The primary limitation of enumeration is its
computational complexity. As the size of the problem increases, the number
of possible solutions grows exponentially, making enumeration impractical for
large-scale problems. Its time complexity is typically O(2^n) or O(n!), where n
represents the size of the problem.
Application: Enumeration is often used for problems such as the traveling
salesman problem (TSP), where the number of permutations grows rapidly with
the number of cities, but it can be applied to various combinatorial optimization
problems with a finite solution space.



Optimization algorithms (10/51)

Exact Algorithms - Enumeration (4/4)


Despite its computational limitations for larger instances, enumeration remains an
important technique for smaller-scale combinatorial optimization problems, espe-
cially when optimality is crucial and other exact algorithms (e.g., dynamic program-
ming, integer linear programming) may not be applicable due to problem constraints
or complexity. When using enumeration, it is essential to consider the trade-off be-
tween computational cost and the guarantee of finding the optimal solution.
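For example, enumeration applied to a small TSP instance evaluates every tour exhaustively, fixing the start city to avoid counting rotations of the same tour (the distance matrix below is illustrative):

```python
from itertools import permutations

def tsp_enumerate(dist):
    """Exhaustively evaluate every tour that starts and ends at city 0."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):  # fix city 0 to avoid rotations
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
best_len, best_tour = tsp_enumerate(dist)
```

With (n − 1)! tours to check, this is guaranteed optimal but only feasible for very small n, which is exactly the trade-off described above.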



Optimization algorithms (11/51)

Exact Algorithms - Dynamic Programming (1/4)


Dynamic programming is a powerful exact algorithmic technique used in combi-
natorial optimization to solve problems by breaking them down into smaller sub-
problems and efficiently solving each subproblem only once, storing the solutions to
subproblems in a table to avoid redundant calculations. It is particularly effective
for problems that exhibit optimal substructure, where the optimal solution to the
overall problem can be constructed from the optimal solutions of its subproblems.
Dynamic programming provides an optimality guarantee, ensuring that the globally
optimal solution is found.



Optimization algorithms (12/51)

Exact Algorithms - Dynamic Programming (2/4)


Here are the key aspects of dynamic programming as an exact algorithm:
Optimal substructure: Dynamic programming problems are characterized
by having an optimal substructure, meaning that the optimal solution to the
overall problem can be expressed in terms of optimal solutions to smaller sub-
problems. This property allows dynamic programming to efficiently break down
complex problems into manageable parts.
Overlapping subproblems: Dynamic programming is efficient because it
avoids recomputation of subproblems. When solving a problem, it often en-
counters the same subproblem multiple times. Dynamic programming ensures
that each subproblem is solved only once, and its solution is stored for future
use.
Optimality guarantee: Dynamic programming provides an optimality guaran-
tee. If the problem exhibits the properties required for dynamic programming
(optimal substructure and overlapping subproblems), then the solution pro-
duced is guaranteed to be the globally optimal solution.



Optimization algorithms (13/51)

Exact Algorithms - Dynamic Programming (3/4)


Computational complexity: The time complexity of dynamic programming
depends on the problem and the specific dynamic programming approach used.
In many cases, dynamic programming solutions have polynomial time com-
plexity, making them suitable for moderately sized instances of combinatorial
optimization problems.
Memoization or tabulation: Dynamic programming approaches can be cat-
egorized into two main types: memoization and tabulation.
Memoization: In memoization, a top-down approach is used. The algorithm
starts with the original problem and recursively solves smaller subproblems. The
solutions to subproblems are stored in a data structure (usually a table or a dic-
tionary) to avoid redundant calculations. If a subproblem has already been
solved, its solution is simply retrieved from the data structure instead of recom-
puting it.
Tabulation: In tabulation, a bottom-up approach is used. The algorithm starts
by solving the smallest subproblems and iteratively builds up solutions to larger
subproblems. The solutions are stored in a table, and each entry in the table is
computed only once based on the solutions to smaller subproblems.
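The tabulation style can be sketched on the 0/1 knapsack problem, where V[i][w] stores the best value achievable with the first i items and capacity w, and each table entry is computed exactly once from smaller subproblems (the instance data is illustrative):

```python
def knapsack_dp(weights, values, W):
    """Bottom-up tabulation: V[i][w] = best value using the first i items
    within capacity w; each entry is filled exactly once."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            V[i][w] = V[i - 1][w]            # option 1: skip item i
            if weights[i - 1] <= w:          # option 2: take item i, if it fits
                V[i][w] = max(V[i][w],
                              V[i - 1][w - weights[i - 1]] + values[i - 1])
    return V[n][W]

best = knapsack_dp([3, 4, 5, 2], [4, 5, 6, 3], W=8)
```

The recurrence expresses the optimal substructure directly: the best solution for (i, w) is built from the best solutions for (i − 1, ·), and the table prevents any subproblem from being recomputed.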



Optimization algorithms (14/51)

Exact Algorithms - Dynamic Programming (4/4)


Dynamic programming is widely used to solve various combinatorial optimization
problems, including the knapsack problem, the traveling salesman problem, shortest
path problems, and many others. It is a versatile technique that has applications
in computer science, operations research, and other fields where optimization is
crucial.



Optimization algorithms (15/51)

Exact Algorithms - Branch and Bound (1/4)


Branch and Bound is an exact algorithmic technique used in combinatorial op-
timization to systematically explore the solution space of a problem by dividing
it into smaller subproblems and using bounds to eliminate portions of the search
space that cannot contain the optimal solution. It is particularly useful for solving
problems where the search space is large and it is not feasible to examine every
possible solution.



Optimization algorithms (16/51)
Exact Algorithms - Branch and Bound (2/4)
Here are the key aspects of Branch and Bound as an exact algorithm:
Divide and conquer: The Branch and Bound approach divides the original
problem into a set of subproblems, each of which is smaller and more man-
ageable than the original. It typically uses a tree structure to represent the
division of the solution space, with each node in the tree corresponding to a
subproblem.
Exploration strategy: The algorithm explores the solution space in a system-
atic manner, often using a depth-first or breadth-first traversal of the search
tree. At each node, it calculates a lower bound on the objective function value
(a lower bound on the optimal solution’s quality) and an upper bound (a value
above which no better solution can exist).
Bounding: One of the key principles of Branch and Bound is bounding, which
involves pruning or eliminating subtrees in the search tree based on the com-
puted bounds. Subtrees that are guaranteed to contain suboptimal solutions
(i.e., solutions worse than the current best solution found so far) are pruned,
reducing the search space.
Optimization algorithms (17/51)

Exact Algorithms - Branch and Bound (3/4)


Optimality guarantee: Branch and Bound provides an optimality guarantee.
When used correctly and if the problem’s characteristics allow for it, it will
eventually find the globally optimal solution by systematically exploring the
search space while eliminating suboptimal portions.
Branching: When a node is explored, it is branched into multiple child nodes,
each representing a different decision or choice. This branching strategy de-
pends on the specific problem being solved and aims to efficiently cover the
entire search space.
Complexity: The time complexity of Branch and Bound depends on the prob-
lem and the specific branching and bounding strategies employed. It can be
quite efficient for certain problems but may still have exponential time com-
plexity in the worst case.
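These ideas can be sketched on the 0/1 knapsack, using the value of the fractional relaxation as the upper bound at each node; a node is pruned when its bound cannot beat the incumbent. This is a minimal illustration under that bounding choice, not a production solver, and the instance data is illustrative:

```python
def knapsack_bb(weights, values, W):
    """Depth-first branch and bound for the 0/1 knapsack.
    The bound at a node is the value of the fractional (LP) relaxation."""
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    w = [weights[i] for i in order]
    v = [values[i] for i in order]
    n = len(w)
    best = 0

    def bound(i, cap, val):
        # Greedily add whole items, then a fraction of the next one.
        for j in range(i, n):
            if w[j] <= cap:
                cap -= w[j]
                val += v[j]
            else:
                return val + v[j] * cap / w[j]
        return val

    def dfs(i, cap, val):
        nonlocal best
        best = max(best, val)
        if i == n or bound(i, cap, val) <= best:
            return  # prune: this subtree cannot beat the incumbent
        if w[i] <= cap:
            dfs(i + 1, cap - w[i], val + v[i])  # branch: take item i
        dfs(i + 1, cap, val)                    # branch: skip item i

    dfs(0, W, 0)
    return best

best = knapsack_bb([3, 4, 5, 2], [4, 5, 6, 3], W=8)
```

Each node branches on one item (take or skip), and the fractional bound lets whole subtrees be eliminated without enumerating them.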



Optimization algorithms (18/51)

Exact Algorithms - Branch and Bound (4/4)


Branch and Bound is a versatile technique that can be applied to a wide range
of combinatorial optimization problems, including the traveling salesman problem,
integer linear programming, and mixed-integer linear programming. It is used when
finding the exact optimal solution is essential and the problem’s size makes other ex-
act methods, like enumeration, impractical. To enhance the performance of Branch
and Bound, various strategies and heuristics can be employed, such as intelligent
branching rules, efficient pruning techniques, and methods for tightening bounds.



Optimization algorithms (19/51)

Approximation Algorithms - Greedy Algorithms (1/5)


Greedy algorithms are a class of approximation algorithms used in combinatorial
optimization problems. These algorithms make locally optimal choices at each step
in the hope of finding a solution that is close to the global optimum. While greedy
algorithms do not guarantee the best possible solution in all cases, they are often
simple and efficient, making them useful for a wide range of problems.



Optimization algorithms (20/51)

Approximation Algorithms - Greedy Algorithms (2/5)


Here are the key characteristics and principles of greedy algorithms as approximation
algorithms:
Greedy choice property: At each step of the algorithm, a greedy algorithm
makes the choice that appears to be the best at that particular moment, with-
out considering the consequences of that choice on future steps. This choice
is made based on a specific criterion or heuristic that guides the algorithm.
Locally optimal choices: Greedy algorithms focus on finding locally optimal
solutions. This means that they aim to maximize or minimize a certain criterion
(e.g., profit, cost, weight) at each step without worrying about the overall
global solution.
No backtracking: Once a decision is made in a greedy algorithm, it is never
revisited or undone. Greedy algorithms do not backtrack or reconsider previous
choices. This property often leads to the simplicity and efficiency of these
algorithms.



Optimization algorithms (21/51)

Approximation Algorithms - Greedy Algorithms (3/5)


Examples of greedy heuristics: There are various greedy heuristics that can
be used in different problems. Some common greedy strategies include:
Greedy choice by maximum value: Select the element with the maximum
value at each step.
Greedy choice by minimum cost: Select the element with the minimum cost
at each step.
Greedy choice by maximum benefit-to-cost ratio: Select the element with
the maximum benefit-to-cost ratio at each step.
Greedy choice by shortest distance: In graph problems, select the edge with
the shortest distance at each step.



Optimization algorithms (22/51)

Approximation Algorithms - Greedy Algorithms (4/5)


Efficiency: One of the advantages of greedy algorithms is their efficiency.
They often have linear or near-linear time complexity, making them suitable
for solving large-scale instances of problems.
Examples of problems solved with greedy algorithms: Greedy algorithms
are used in a wide range of optimization problems, including the fractional
knapsack problem, Huffman coding for data compression, Prim’s algorithm for
minimum spanning trees, Kruskal’s algorithm for minimum spanning trees, and
Dijkstra’s algorithm for single-source shortest paths, among others.
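The fractional knapsack is the canonical case where the benefit-to-cost-ratio rule is provably optimal; a minimal sketch with illustrative data (for the 0/1 variant, the same rule is only a heuristic):

```python
def fractional_knapsack(weights, values, W):
    """Greedy by value-to-weight ratio; optimal for the *fractional* variant,
    where part of an item may be taken."""
    items = sorted(zip(weights, values),
                   key=lambda wv: wv[1] / wv[0], reverse=True)
    total = 0.0
    for w, v in items:
        if W <= 0:
            break
        take = min(w, W)          # whole item if it fits, else a fraction
        total += v * take / w
        W -= take
    return total

best = fractional_knapsack([3, 4, 5, 2], [4, 5, 6, 3], W=8)
```

Each step makes the locally optimal choice (highest ratio first) and never revisits it, exactly the greedy-choice and no-backtracking properties described above.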



Optimization algorithms (23/51)

Approximation Algorithms - Greedy Algorithms (5/5)


It is important to note that while greedy algorithms are efficient and simple, they
may not always produce the optimal solution. The choice of greedy heuristic and
the problem’s characteristics play a crucial role in determining the quality of the
approximation. For some problems, a greedy algorithm can provide very close-to-
optimal solutions, while for others, it may yield significantly suboptimal results.
Therefore, the suitability of a greedy algorithm depends on the specific problem
and the trade-off between solution quality and computational efficiency.



Optimization algorithms (24/51)

Approximation Algorithms - Local Search (1/5)


Local search is an approximation algorithmic technique used in combinatorial opti-
mization problems where the goal is to find a good solution within a large solution
space. Unlike exact algorithms that aim to find the global optimum, local search
methods focus on iteratively improving a solution by exploring its neighborhood.
Local search algorithms are particularly useful when it is impractical to examine all
possible solutions exhaustively.



Optimization algorithms (25/51)

Approximation Algorithms - Local Search (2/5)


Here are the key characteristics and principles of local search as an approximation
algorithm:
Iterative improvement: Local search algorithms start with an initial solution,
which may be generated randomly or using a heuristic, and then iteratively
improve this solution by exploring nearby solutions. The goal is to move from
one solution to another in a way that improves the objective function value.
Local optimum: Local search methods do not guarantee finding the global
optimum. Instead, they aim to find a local optimum, which is a solution that
cannot be further improved by making small changes in the neighborhood.
The quality of the local optimum depends on the algorithm and the specific
problem.
Exploration of neighborhood: At each step of the algorithm, a local search
algorithm examines neighboring solutions by making small modifications to
the current solution. These modifications might involve swapping elements,
changing values, or other local changes, depending on the problem.



Optimization algorithms (26/51)

Approximation Algorithms - Local Search (3/5)


Objective function: Local search algorithms rely on an objective function
that quantifies the quality of a solution. The algorithm’s goal is to maximize
or minimize this objective function, depending on the nature of the optimization
problem.
Termination condition: Local search algorithms terminate when certain con-
ditions are met. Common termination conditions include reaching a predeter-
mined number of iterations, finding a solution that meets a specific quality
threshold, or running out of computational resources.
Stochastic variants: Some local search algorithms incorporate stochastic el-
ements, such as randomization or probabilistic decisions, to explore a broader
solution space and escape local optima more effectively. Examples of such
algorithms include simulated annealing and genetic algorithms.



Optimization algorithms (27/51)

Approximation Algorithms - Local Search (4/5)


Examples of local search problems: Local search is applied to a wide range
of combinatorial optimization problems, including the traveling salesman prob-
lem (TSP), the knapsack problem, graph coloring, job scheduling, and vehicle
routing, among others.
Trade-off between exploration and exploitation: Local search algorithms
face a trade-off between exploring new regions of the solution space in the
hope of finding better solutions (exploration) and refining the current solution
within its neighborhood (exploitation). The balance between the two can affect
the algorithm's performance.



Optimization algorithms (28/51)

Approximation Algorithms - Local Search (5/5)


Local search algorithms are known for their versatility and ability to find good
solutions in large solution spaces. However, their effectiveness depends on the choice
of neighborhood structure, the quality of the initial solution, and the search strategy
employed. Additionally, the quality of the solution found by a local search algorithm
may vary depending on factors like algorithmic parameters and the randomness
introduced in the search process.
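As a concrete illustration (a sketch, not part of the lecture slides; the function names are hypothetical), the iterative-improvement loop described above can be written as a 2-opt local search for the TSP:

```python
import random

def tour_length(tour, dist):
    """Length of the closed tour induced by the distance matrix dist."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt_local_search(dist, seed=0):
    """Hill-climbing local search for the TSP with the 2-opt neighborhood:
    start from a random tour and reverse a segment whenever doing so
    shortens the tour, stopping at a local optimum."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)                      # random initial solution
    improved = True
    while improved:                        # iterative improvement
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                # 2-opt move: reverse the segment tour[i+1 .. j]
                candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour, tour_length(tour, dist)
```

The segment reversal is the "small modification" of the neighborhood, and the while loop terminates exactly when no neighboring tour is shorter, i.e., at a local optimum whose quality depends on the random starting tour.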



Optimization algorithms (29/51)

Metaheuristic Methods - Genetic Algorithms (1/5)


Genetic Algorithms (GAs) are a powerful class of metaheuristic optimization algo-
rithms inspired by the process of natural selection and genetics. They are used to
solve optimization and search problems, particularly in situations where the solution
space is large, complex, and lacks a clear mathematical form. Genetic Algorithms
have been applied to a wide range of optimization problems, including function op-
timization, machine learning, scheduling, and evolving artificial intelligence models.



Optimization algorithms (30/51)

Metaheuristic Methods - Genetic Algorithms (2/5)


Here are the key components and principles of Genetic Algorithms:
Representation of solutions: In Genetic Algorithms, potential solutions to
the optimization problem are encoded as chromosomes or strings of genetic
information. The structure and representation of the chromosomes depend
on the problem at hand and may include binary strings, real-valued vectors,
permutations, or other data structures.
Population: A population consists of a collection of individuals or candidate
solutions. Initially, a random population of chromosomes is generated. The
size of the population is an important parameter that affects the algorithm’s
performance.
Fitness function: A fitness function is used to evaluate the quality of each
individual in the population. It quantifies how well a solution solves the opti-
mization problem. The fitness function guides the evolution process by assign-
ing higher fitness values to better solutions.



Optimization algorithms (31/51)

Metaheuristic Methods - Genetic Algorithms (3/5)


Selection: The selection process determines which individuals from the cur-
rent population will be used to create the next generation. In a typical GA,
individuals with higher fitness values have a higher chance of being selected.
Various selection mechanisms, such as roulette wheel selection and tournament
selection, can be employed.
Crossover (Recombination): Crossover is the process of combining genetic
information from two or more parent chromosomes to create one or more
offspring. This mimics the idea of genetic recombination in biological repro-
duction. Different crossover operators can be applied, including one-point,
two-point, and uniform crossover.
Mutation: Mutation involves making small random changes to the genetic
information within an individual chromosome. This introduces diversity into
the population and prevents premature convergence to suboptimal solutions.



Optimization algorithms (32/51)

Metaheuristic Methods - Genetic Algorithms (4/5)


Termination criteria: Genetic Algorithms continue to evolve populations for a
certain number of generations or until a termination criterion is met. Common
termination criteria include a maximum number of generations, a target fitness
level, or a budgeted amount of computational time.
Evolution: The genetic algorithm proceeds through generations, with each
new generation produced by selecting individuals from the current population,
applying crossover and mutation operators, and creating a new population.
Over time, the population evolves toward better solutions.
Elitism: Elitism is an optional strategy that retains the best individuals from
the current population in the next generation, ensuring that the best solution
found so far is not lost.
Parameter tuning: Genetic Algorithms often require parameter tuning, in-
cluding population size, crossover rate, mutation rate, and selection mecha-
nisms. These parameters can significantly impact the algorithm’s performance.



Optimization algorithms (33/51)

Metaheuristic Methods - Genetic Algorithms (5/5)


Genetic Algorithms are highly versatile and can handle a wide range of optimization
problems. Their strengths lie in their ability to explore large solution spaces, escape
local optima, and find diverse and high-quality solutions. However, GAs do not
guarantee finding the global optimum, and their performance can be influenced
by factors such as parameter settings and the representation of solutions. As a
result, they are often used when other optimization methods are impractical or less
effective.
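The components above can be assembled into a minimal GA. The Python sketch below (an illustration under simple assumptions, not a reference implementation; all names and parameter defaults are hypothetical) solves the classic OneMax problem, where the fitness of a bitstring is simply its number of ones:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=0):
    """Minimal generational GA on bitstrings: binary tournament selection,
    one-point crossover, bit-flip mutation, and elitism."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)          # binary tournament selection
            return a if fitness(a) >= fitness(b) else b
        nxt = [best[:]]                        # elitism: keep the best so far
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:  # one-point crossover
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [1 - g if rng.random() < mutation_rate else g
                     for g in child]           # bit-flip mutation
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best, fitness(best)

# OneMax: fitness = number of ones, so Python's built-in sum works directly.
best, score = genetic_algorithm(sum)
```

Each line of the loop maps to a component from the slides: selection (tournament), crossover, mutation, elitism, and a generation-count termination criterion.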



Optimization algorithms (34/51)

Metaheuristic Methods - Simulated Annealing (1/5)


Simulated Annealing is a metaheuristic optimization algorithm inspired by the an-
nealing process in metallurgy, where a material is heated and then gradually cooled
to remove defects and optimize its crystalline structure. In the context of optimiza-
tion, Simulated Annealing is used to find approximate solutions to combinatorial
or continuous optimization problems. It is particularly effective at escaping local
optima and exploring large solution spaces.



Optimization algorithms (35/51)

Metaheuristic Methods - Simulated Annealing (2/5)


Here are the key components and principles of Simulated Annealing:
Objective function: The optimization problem is defined by an objective
function that quantifies the quality of a solution. The goal is to find a solution
that either maximizes or minimizes this objective function, depending on the
nature of the problem.
Initial solution: Simulated Annealing starts with an initial solution, which can
be generated randomly or using some heuristic method. This initial solution
may be of low quality.
Temperature schedule: A critical parameter in Simulated Annealing is the
temperature, which controls the degree of randomness in the search process.
The algorithm starts with a high initial temperature and gradually decreases
it over time according to a temperature schedule. The temperature schedule
determines how the search evolves and affects the trade-off between exploration
and exploitation.

Optimization algorithms (36/51)

Metaheuristic Methods - Simulated Annealing (3/5)


Neighborhood exploration: At each iteration, the algorithm explores a neigh-
boring solution by making small modifications to the current solution. The
specific neighborhood exploration method depends on the problem but typi-
cally involves perturbing the current solution.
Acceptance criterion: The new solution found in the neighborhood explo-
ration is accepted or rejected based on an acceptance criterion. Simulated
Annealing employs a probabilistic acceptance criterion that allows worse so-
lutions to be accepted with a certain probability, controlled by the current
temperature and the difference in objective function values between the new
and current solutions. This probabilistic nature enables the algorithm to es-
cape local optima.
Cooling schedule: The cooling schedule defines how the temperature de-
creases over time. A common choice is to reduce the temperature exponen-
tially or linearly. The cooling rate is an important parameter that determines
the convergence behavior of the algorithm.

Optimization algorithms (37/51)

Metaheuristic Methods - Simulated Annealing (4/5)


Termination condition: Simulated Annealing continues its search until a
termination condition is met. This condition can be based on the number
of iterations, reaching a target temperature, or achieving a desired level of
solution quality.
Optimality guarantee: Simulated Annealing does not guarantee finding the
global optimum, but it aims to find good-quality solutions. The algorithm’s
effectiveness depends on the cooling schedule, the neighborhood exploration
strategy, and the initial solution.

Optimization algorithms (38/51)

Metaheuristic Methods - Simulated Annealing (5/5)


Simulated Annealing has been successfully applied to a wide range of optimization
problems, including the traveling salesman problem, job scheduling, and parameter
tuning in machine learning algorithms. Its ability to explore the solution space and
escape local optima makes it particularly useful in situations where other optimiza-
tion methods may get stuck. One of the advantages of Simulated Annealing is its
adaptability to different problems and its ability to fine-tune the balance between
exploration and exploitation through the temperature schedule. However, it does
require careful parameter tuning, and the quality of the solutions found can be sen-
sitive to the choice of cooling schedule and neighborhood exploration strategy.
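Putting these components together, a minimal Simulated Annealing loop for a one-dimensional objective might look as follows (an illustrative sketch with hypothetical parameter defaults, not a definitive implementation). It uses the Metropolis acceptance rule, accepting a worse move with probability exp(-delta / T), and a geometric cooling schedule:

```python
import math
import random

def simulated_annealing(f, x0, t0=10.0, cooling=0.95, steps_per_t=100,
                        t_min=1e-3, step=1.0, seed=0):
    """Minimal SA minimizer for a 1-D objective f: perturb the current
    point, always accept improvements, accept worse moves with probability
    exp(-delta / T), and cool the temperature geometrically."""
    rng = random.Random(seed)
    x, t = x0, t0
    best = x
    while t > t_min:                              # termination: target temperature
        for _ in range(steps_per_t):
            cand = x + rng.uniform(-step, step)   # neighborhood exploration
            delta = f(cand) - f(x)
            if delta < 0 or rng.random() < math.exp(-delta / t):
                x = cand                          # probabilistic acceptance
                if f(x) < f(best):
                    best = x
        t *= cooling                              # geometric cooling schedule
    return best, f(best)

def rugged(x):
    """Toy objective with many local minima; global minimum near x = -0.5."""
    return x * x + 10 * math.sin(3 * x) + 10
```

At high temperature, exp(-delta / T) is close to 1, so even much worse moves are accepted (exploration); as T falls, the algorithm behaves increasingly like pure hill climbing (exploitation), which is exactly the trade-off controlled by the cooling schedule.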

Optimization algorithms (39/51)

Metaheuristic Methods - Ant Colony Optimization (1/5)


Ant Colony Optimization (ACO) is a metaheuristic algorithm inspired by the forag-
ing behavior of ants. It was originally developed to solve combinatorial optimiza-
tion problems, particularly those that involve finding the shortest paths or tours
in graphs, such as the Traveling Salesman Problem (TSP) and the Vehicle Rout-
ing Problem (VRP). ACO has also been extended to solve continuous optimization
problems.

Optimization algorithms (40/51)

Metaheuristic Methods - Ant Colony Optimization (2/5)


Here are the key components and principles of Ant Colony Optimization:
Colony of artificial ants: In ACO, a population of artificial ants is used to
explore the solution space of the optimization problem. Each artificial ant
represents a potential solution to the problem.
Pheromone trails: Ants deposit a chemical substance called pheromone on
the paths they traverse. Pheromone represents information about the quality
of the solutions. Paths with higher pheromone levels are more attractive to
other ants.
Solution representation: Ants construct solutions by iteratively selecting
components or elements based on both pheromone levels and a heuristic func-
tion. The heuristic function provides additional information about the desir-
ability of selecting particular elements.

Optimization algorithms (41/51)

Metaheuristic Methods - Ant Colony Optimization (3/5)


Pheromone update: After all ants have constructed solutions, the pheromone
levels on the paths are updated. Paths used by better solutions are reinforced
with higher pheromone levels, while paths used by worse solutions evaporate
over time.
Exploration vs. exploitation: ACO balances exploration (searching for new
solutions) and exploitation (focusing on known good solutions) through the
use of pheromone and the heuristic function. As pheromone levels guide ants
toward promising paths, the algorithm explores various possibilities.
Solution construction: Ants construct solutions in a probabilistic manner.
They consider both pheromone levels and heuristic information to make deci-
sions about which elements or components to include in their solutions. This
stochastic process encourages diversity in the solutions generated.

Optimization algorithms (42/51)

Metaheuristic Methods - Ant Colony Optimization (4/5)


Local search: ACO can be enhanced with local search techniques that improve
the quality of solutions in the vicinity of the current solutions. These local
search procedures can help refine the solutions found by ants.
Termination condition: ACO continues to iteratively construct solutions and
update pheromone levels until a termination condition is met. Common termi-
nation conditions include reaching a maximum number of iterations, achieving
a target solution quality, or running out of computational resources.
Parameter tuning: ACO involves several parameters, such as the pheromone
evaporation rate, the weight assigned to pheromone vs. heuristic information,
and the number of ants. Proper parameter tuning is essential for the algo-
rithm’s effectiveness.

Optimization algorithms (43/51)

Metaheuristic Methods - Ant Colony Optimization (5/5)


ACO has demonstrated its effectiveness in solving various combinatorial optimiza-
tion problems, especially those involving routing and scheduling. It is known for its
ability to find high-quality solutions, its adaptability to different problem instances,
and its ability to escape local optima. However, ACO can be computationally ex-
pensive, especially for large-scale problems, and the quality of the solutions found
can depend on parameter settings and problem characteristics.
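These components can be combined into a minimal Ant System for the TSP. The sketch below is illustrative only (function and parameter names are assumptions, not from the lecture): each ant builds a tour step by step with probabilities proportional to pheromone^alpha times (1/distance)^beta, after which pheromone evaporates at rate rho and is reinforced in proportion to tour quality:

```python
import random

def ant_colony_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
                   rho=0.5, q=1.0, seed=0):
    """Minimal Ant System for the TSP: probabilistic tour construction
    guided by pheromone (weight alpha) and the inverse-distance heuristic
    (weight beta), with evaporation and quality-based reinforcement."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]            # uniform initial pheromone

    def tour_length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    best, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):                    # each ant builds one tour
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                     for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])  # stochastic step
            tours.append(tour)
        for i in range(n):                         # pheromone evaporation
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for t in tours:                            # reinforcement by quality
            length = tour_length(t)
            if length < best_len:
                best, best_len = t, length
            for k in range(n):
                a, b = t[k], t[(k + 1) % n]
                tau[a][b] += q / length            # shorter tours deposit more
                tau[b][a] += q / length
    return best, best_len
```

The weights w realize the exploration/exploitation balance described above: edges with more pheromone or shorter length are chosen more often, yet every feasible edge retains a nonzero selection probability.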

Optimization algorithms (44/51)

Trade-offs between Exact and Approximate Approaches (1/8)


Trade-offs between exact and approximate approaches in problem-solving, especially
in the context of optimization, are crucial considerations that depend on the specific
characteristics of the problem, available computational resources, and the desired
solution quality. Here are some of the key trade-offs:

Optimization algorithms (45/51)

Trade-offs between Exact and Approximate Approaches (2/8)


Optimality vs. efficiency:
Exact: Exact approaches, such as branch and bound, dynamic programming,
or integer linear programming, aim to find the globally optimal solution with
the guarantee that no better solution exists within the feasible solution space.
However, they often come with a high computational cost, making them im-
practical for large or complex problems.
Approximate: Approximate methods, like greedy algorithms, genetic algo-
rithms, or simulated annealing, prioritize computational efficiency over opti-
mality. They aim to find a good solution quickly but do not guarantee opti-
mality. The trade-off is that they may not find the best solution, but they are
suitable for large-scale problems where exact methods are infeasible.
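This first trade-off is easy to see on the TSP: an exact brute-force solver guarantees optimality at (n-1)! cost, while a greedy nearest-neighbor heuristic runs in quadratic time with no optimality guarantee. A minimal Python illustration (hypothetical helper names, not from the lecture):

```python
import itertools

def tour_length(tour, dist):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def brute_force_tsp(dist):
    """Exact: enumerate all (n-1)! tours starting at city 0.
    Guaranteed optimal, but the cost explodes as n grows."""
    n = len(dist)
    tours = ([0] + list(p) for p in itertools.permutations(range(1, n)))
    return min(tours, key=lambda t: tour_length(t, dist))

def nearest_neighbor_tsp(dist):
    """Approximate: greedy nearest-neighbor construction.
    O(n^2) time, but its tour may be worse than the optimum."""
    n = len(dist)
    tour, remaining = [0], set(range(1, n))
    while remaining:
        nearest = min(remaining, key=lambda j: dist[tour[-1]][j])
        tour.append(nearest)
        remaining.remove(nearest)
    return tour
```

On a handful of cities both finish instantly, but doubling n multiplies the greedy cost by about four while multiplying the brute-force cost by a factorial amount; this is the point at which the efficiency side of the trade-off dominates.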

Optimization algorithms (46/51)

Trade-offs between Exact and Approximate Approaches (3/8)


Solution quality:
Exact: Exact methods provide a strong guarantee of solution quality, often
the globally optimal solution. This is crucial for critical applications where even
a slight deviation from optimality is unacceptable.
Approximate: Approximate methods provide solutions that are typically near-
optimal but not guaranteed to be the best. The quality of the solution depends
on the algorithm and its parameters. They are suitable when a reasonably good
solution is acceptable, and achieving optimality is too costly.

Optimization algorithms (47/51)

Trade-offs between Exact and Approximate Approaches (4/8)


Computational resources:
Exact: Exact algorithms can be computationally intensive, especially for NP-
hard problems. They may require significant time and memory resources to
solve large instances.
Approximate: Approximate methods are often more computationally efficient
and can handle larger problem sizes. They are useful when resources are limited,
and a quick solution is needed.

Optimization algorithms (48/51)

Trade-offs between Exact and Approximate Approaches (5/8)


Problem structure:
Exact: Exact methods are well-suited for problems with specific structures or
mathematical formulations that allow for efficient exact solutions. For example,
linear programming is effective for certain optimization problems.
Approximate: Approximate methods are versatile and can handle a wide range
of problem types, including those with complex, irregular structures where exact
methods may not apply.

Optimization algorithms (49/51)

Trade-offs between Exact and Approximate Approaches (6/8)


Deterministic vs. stochastic:
Exact: Exact algorithms are deterministic and produce the same solution every
time when applied to the same problem instance.
Approximate: Approximate methods often incorporate stochastic elements
(e.g., randomization) and may produce different solutions in different runs.
This stochasticity can be useful for exploring diverse solutions or escaping
local optima.

Optimization algorithms (50/51)

Trade-offs between Exact and Approximate Approaches (7/8)


Sensitivity to parameters:
Exact: Exact algorithms are typically less sensitive to algorithmic parameters
because they are designed to guarantee optimality.
Approximate: Approximate methods may require careful parameter tuning
to balance exploration and exploitation or control the quality of the solutions
produced.

Optimization algorithms (51/51)

Trade-offs between Exact and Approximate Approaches (8/8)


Ultimately, the choice between an exact and an approximate approach depends on
the specific problem, the available computational resources, the required solution
quality, and the trade-offs that the problem solver is willing to make. In practice,
hybrid approaches that combine both exact and approximate methods are sometimes
used to harness the strengths of each approach.

Thank you for your attention!
[email protected]

