
Master's program: Data Science and Artificial Intelligence. Semester: S1

Course title: Advanced Operation Research. Prof. LAYEB A.

Chapter 2 : Algorithm design techniques

I. Techniques for designing algorithms.

Algorithm design techniques refer to the approaches and methodologies used to develop efficient and optimal solutions to problems in computing, economics, industry, and more. The design of algorithms plays a crucial role in software development and in solving complex problems. Developing an algorithm involves formulating a well-defined, structured sequence of instructions that resolves a specific problem. Algorithm design techniques aim to produce solutions that are correct, efficient in terms of execution time and resource usage, and easy to understand and maintain. Among the most widely used design methods, we can mention:
 Brute force: This technique involves exhaustively testing all possibilities until the solution is found. Although often inefficient for complex problems, it can be useful for simple, small ones.
 Divide and conquer: This concept involves breaking a problem down into
smaller sub-problems, solving these sub-problems recursively, and then
combining their solutions to obtain the solution to the original problem. The
classic example is the merge sort algorithm.
 Dynamic programming: This technique involves solving a problem by
breaking it down into smaller subproblems and storing the solutions to
these subproblems to avoid recalculating. This significantly reduces the
number of repeated calculations. The algorithm for calculating the
Fibonacci sequence is a commonly used example to illustrate this
technique.
 Greedy algorithms: Unlike dynamic programming, greedy algorithms make
locally optimal choices at each step in the hope that these choices will lead
to a globally optimal solution. The change-making problem and the
fractional knapsack problem are examples of this.
 Search and exploration: This technique is used to explore a solution space in search of an optimal solution. Depth-first search and breadth-first search are classic examples of this approach.
 Binary search: When a list is sorted, binary search is an efficient technique for finding a particular element by halving the search interval at each step.
 Backtracking Algorithm: It is a technique that involves trying all
possibilities, backtracking as soon as a possibility turns out to be a dead
end. Sudoku problems and traveling salesman problems can be solved using
this technique.
 Linear and integer programming: These techniques are used to solve
optimization problems under linear constraints. Linear programming solves
continuous problems, while integer programming addresses problems
where the variables must take integer values.
 Heuristic and metaheuristic methods: These techniques are used to solve difficult optimization problems for which no efficient exact algorithm exists. Genetic algorithms, simulated annealing, and ant colony algorithms are examples of metaheuristics.
 Structural decomposition: Analyze the structure and properties of the
problem to break it down into simpler sub-structures.
 Simulate the problem: Imitating the behavior of the problem through a
program to find a solution.

These algorithm design techniques are powerful tools for solving different
types of problems, but it is important to choose the most suitable technique
based on the specific characteristics and constraints of the problem you are
trying to solve.

II. How to choose the best algorithm design technique for a given
problem?
Sometimes, it is very difficult to choose the right solving algorithm for various
reasons. Here are some guidelines for choosing the best algorithm design
technique for a given problem:

 Analyze the structure and properties of the problem: Is it divisible into sub-parts? Are there recurring substructures?
 Identify whether certain subproblems repeat: dynamic programming helps avoid repeated calculations.
 Does the problem involve many combinations to explore? Then backtracking is suitable.
 Can we find an optimal solution by making locally optimal choices? The greedy approach has a chance of succeeding.
 Does the problem have an obvious recursive structure? Recursion is then natural.
 Can the problem be expressed as a mathematical model? Mathematical tools can directly solve the problem.
 Is the space of solutions gigantic? Brute force is not realistic; the problem must be broken down.
 Is the allowed computation time short? Favor the fastest techniques, even if they are suboptimal.
 Is the problem's data suitable for division into sub-parts? The divide-and-conquer approach is suitable.

It is about identifying the structure of the problem and choosing the most
suitable algorithm design technique. There can be several possible good
answers.

1. Greedy algorithms

Greedy algorithms, also known as voracious algorithms, are a category of design techniques that solve problems by making locally optimal choices at each step, hoping that these choices will lead to a globally optimal solution. In other words, at each step, the algorithm makes the best possible decision without worrying about the long-term consequences. Although greedy algorithms do not always guarantee the optimal solution, they are often quick and simple to implement, making them useful for many problems. The generic schema for constructing a greedy algorithm is as follows:

function Greedy(C : set) : set
    S ← ∅
    while ¬Solution(S) ∧ C ≠ ∅ do
        x ← the element of C that maximizes Select(x)
        C ← C − {x}
        if Feasible(S ∪ {x}) then
            S ← S ∪ {x}
    if Solution(S) then return S
    else return "No solution!"

The following components should be present:

a) A set of available candidates: C
b) A set of candidates already selected: S
c) A solution test: Solution(.)
d) A feasibility test: Feasible(.)
e) A selection function: Select(.)
f) An objective function (the function to optimize)
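The schema above can be sketched in Python. This is a minimal sketch: the `is_solution`, `is_feasible`, and `select` callables are illustrative placeholders standing in for components c), d), and e), not names fixed by the pseudocode.

```python
def greedy(candidates, is_solution, is_feasible, select):
    """Generic greedy schema: repeatedly take the most promising
    candidate and keep it only if the partial solution stays feasible."""
    solution = set()
    candidates = set(candidates)
    while not is_solution(solution) and candidates:
        x = max(candidates, key=select)   # locally optimal choice
        candidates.discard(x)             # the candidate is consumed either way
        if is_feasible(solution | {x}):
            solution.add(x)
    return solution if is_solution(solution) else None

# Toy use: pick a subset of {1, 2, 5} summing exactly to 8.
result = greedy({1, 2, 5},
                is_solution=lambda s: sum(s) == 8,
                is_feasible=lambda s: sum(s) <= 8,
                select=lambda x: x)
print(result)  # {1, 2, 5}
```

Note that the candidate is removed from C whether or not it is kept; otherwise an infeasible candidate would be selected again forever.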

1.1 Characteristics of Greedy Algorithms:

 Local Choice Principle: Greedy algorithms make decisions based only on the
information available locally at each step. They do not consider the long-term
consequences of their choices.
 Locally Optimal Solutions: At each step, a greedy algorithm selects the best
local option, meaning the one that appears to be the best among the
immediately available choices. This leads to locally optimal solutions.
 Incremental Construction Strategy: Solutions are built progressively, step
by step, by adding or selecting elements incrementally. Each step is guided by
the pursuit of the best local choice.
 No Backtracking: Greedy algorithms generally do not backtrack to reevaluate
or change a previously made decision. Once a choice is made, it is final.

1.2 Properties of Problems Solvable by a Greedy Algorithm

 The problem must be an optimization problem.
 Greedy Choice Property: The problem can be broken down into a series of local choices.
 Optimal Substructure: The problem has optimal substructure, meaning that an optimal solution to the problem contains optimal solutions to its subproblems. This property allows the algorithm to build the final solution incrementally.
 Local Optimal Choice Sometimes Leads to Global Optimal Solution:
Making the best local choice can sometimes lead to an overall optimal
solution.


 Local Decisions Are Independent: The local decisions do not affect each
other.

1.3 The Main Advantages of Greedy Algorithms:

 Simplicity of Design: Greedy algorithms are often simpler to design than other techniques because they directly construct the solution without relying on complex calculations.
 Efficiency: Many greedy algorithms have low time complexity (linear, logarithmic, etc.) and low space complexity, so they scale well to large datasets.
 Result Guarantee: If the problem satisfies the properties of local optimality, a
greedy algorithm is guaranteed to provide a feasible solution, even if it may
not always be the optimal one.
 Simple Implementation: Their incremental design makes them easier to code
compared to divide-and-conquer or brute force approaches.

1.4 Limitations of Greedy Algorithms


Here are some potential drawbacks of greedy algorithms:
 Non-optimality of the Solution: A greedy algorithm does not guarantee
finding the globally optimal solution to the problem, only a feasible one.
 Sensitivity to Initial Choices: Greedy algorithms are sensitive to initial
choices or the order in which decisions are made. A different order of initial
choices may lead to a different, possibly suboptimal, solution.
 Problems with Local Independence: If local choices are not completely
independent, the greedy solution can be far from optimal.
 Inability to Revisit Decisions: Unlike other techniques like backtracking, a
greedy algorithm cannot go back and revise its previous choices to improve
the solution.
 Difficulty in Proving Optimality: It is often difficult to formally prove that a
greedy algorithm will give the optimal solution for a given problem.
 Inability to Handle Complex Constraints: Problems with intricate
constraints or interdependencies between choices are often difficult to solve
with greedy algorithms.
 Increased Complexity for Some Problems: Greediness is not always the
most efficient technique in terms of algorithmic complexity.

In summary, the main disadvantages are the lack of guaranteed optimality and the
inability to reconsider local decisions. However, for many real-world problems, the
simplicity-to-efficiency ratio of greedy algorithms is very attractive compared to more
complex techniques.

1.5 Variants and Possible Improvements

 Greedy Algorithms with Lookahead: These algorithms consider information in advance about the potential consequences of each decision. This approach tries to anticipate future outcomes of current choices, which can lead to better-quality solutions.
 Randomized Greedy Algorithm: This variant introduces an element of
randomness into the greedy algorithm. At each step, multiple choices are
generated randomly, and the best among them is selected. This can help avoid
being trapped in a local optimum.
 Two-Phase Greedy Algorithms: These algorithms are a method for solving
combinatorial optimization problems, combining two distinct steps: a
construction phase and an improvement phase. This approach aims to obtain
an initial solution through a greedy strategy, then further improve it using a
local optimization method or a metaheuristic.
 Greedy Algorithms with Machine Learning: These use machine learning techniques to learn from past problem instances and make better decisions.
 Hybrid Greedy Algorithms: Combine greedy approaches with other
algorithmic paradigms. The advantage is to leverage strengths of multiple
approaches. Example: Greedy randomized adaptive search procedure
(GRASP)

III. Examples of Greedy Algorithms

1. Coin Change Problem

Suppose you need to give change for a sum of money using the fewest possible coins and bills.

We assume that customers only give you amounts in whole euros (no cents for
simplicity);
The available coin and bill values are: 1, 2, 5, 10, 20, 50, 100, 200, and 500. We
assume you have as many of each coin or bill as needed. For simplicity, we will refer
to both coins and bills as "coins."

A greedy algorithm consists of always choosing the highest value coin or bill that
does not exceed the remaining amount.

Example:
A customer buys an item that costs 53 euros and pays with a 200-euro bill. You need
to give them 147 euros in change. One way to do this is by giving them a 100-euro
bill, two 20-euro bills, one 5-euro bill, and a 2-euro coin.

To minimize the number of coins to return, the following strategy emerges:

 Start by giving the largest possible coin.
 Subtract this value from the remaining amount.
 Repeat until the remaining amount is zero.
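The three steps above can be sketched in Python. The coin list is the euro system given earlier; the function name is illustrative.

```python
def make_change(amount, coins=(500, 200, 100, 50, 20, 10, 5, 2, 1)):
    """Greedy change-making: always hand out the largest coin or bill
    that does not exceed the remaining amount."""
    change = []
    for coin in coins:                # coins are sorted in decreasing order
        while amount >= coin:
            change.append(coin)
            amount -= coin
    return change

print(make_change(147))  # [100, 20, 20, 5, 2]
```

For the euro coin system this greedy strategy is optimal; for arbitrary coin systems (e.g. coins 1, 3, 4 and amount 6) it can return more coins than necessary.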

Exercise: Write a Python program to solve the change-making problem using a greedy algorithm.

2. Knapsack Problem

The Knapsack Problem is a classic optimization problem in computer science and mathematics. It is a good example to illustrate both the strengths and limitations of greedy algorithms.

In this problem, you have a knapsack with capacity C and a list of objects with
weights and values. The goal is to fill the knapsack to maximize the total value of the
objects while respecting the knapsack's capacity. This problem is very important and
has many variations and applications in different fields.

There are two main variants of the Knapsack Problem:

 0/1 Knapsack Problem: Each item can either be included (1) or not included
(0). This version is NP-hard and cannot be solved optimally with a simple
greedy approach.
 Fractional Knapsack Problem: Items can be broken into smaller pieces, so
the thief can take fractions of items. This version can be solved optimally
using a greedy algorithm.

Here are the main greedy algorithms used to solve the knapsack problem:

1. Value/Weight Ratio Algorithm:

 Calculate the value/weight ratio for each item.
 Sort the items in decreasing order of their ratios.
 Fill the knapsack by taking items in this order.
 Stop when the capacity is reached.

This algorithm provides a near-optimal solution for the knapsack in terms of value.

2. Smallest Remaining Weight Algorithm:

 Sort the items by increasing order of weight.
 Fill the knapsack by taking items in this order.
 Stop when the capacity is reached.

This algorithm maximizes the number of items in the knapsack.

3. Largest Profit Algorithm:

 Sort the items in decreasing order of value.
 Fill the knapsack by taking items in this order.
 Stop when the capacity is reached.

This algorithm seeks to maximize total value, even if the number of items is not
maximal.

Example of the Value/Weight Algorithm:

Consider a knapsack with a capacity of 10 kg and the following items:

 Item 1: Value 4, Weight 5


 Item 2: Value 3, Weight 4
 Item 3: Value 2, Weight 3
 Item 4: Value 1, Weight 1

The greedy algorithm works as follows:

1. Calculate the value/weight ratio for each item:

o Item 1: 4/5 = 0.8
o Item 2: 3/4 = 0.75
o Item 3: 2/3 ≈ 0.67
o Item 4: 1/1 = 1
2. Sort in decreasing order of ratios:
o Item 4, Item 1, Item 2, Item 3
3. Fill the knapsack by adding items one by one in this order until the capacity
limit is reached:
o Add Item 4 (1 kg), then Item 1 (5 kg), then Item 2 (4 kg). The
knapsack is now full at 10 kg.

The greedy solution is {Item 4, Item 1, Item 2} for a total value of 1 + 4 + 3 = 8. This is the optimal solution for this knapsack instance.
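One possible sketch of the value/weight ratio algorithm on this example. The `(name, value, weight)` tuple layout and the function name are illustrative choices, not fixed by the text.

```python
def knapsack_ratio(items, capacity):
    """Greedy 0/1 heuristic: take whole items in decreasing
    value/weight order while they still fit (not guaranteed optimal)."""
    order = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
    taken, total_value, load = [], 0, 0
    for name, value, weight in order:
        if load + weight <= capacity:
            taken.append(name)
            total_value += value
            load += weight
    return taken, total_value

items = [("Item 1", 4, 5), ("Item 2", 3, 4),
         ("Item 3", 2, 3), ("Item 4", 1, 1)]   # (name, value, weight)
print(knapsack_ratio(items, 10))  # (['Item 4', 'Item 1', 'Item 2'], 8)
```

The other two greedy variants differ only in the sort key (increasing weight, or decreasing value).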

Exercise: Write the three greedy algorithms in Python.

3. Prim's Algorithm for Minimum Spanning Tree:

The Minimum Spanning Tree (MST) is a fundamental concept in graph theory and is widely used in network design, clustering, and other optimization problems. It is an excellent example of where greedy algorithms shine.

Prim's algorithm solves the problem of finding a minimum spanning tree (MST) in an
undirected, weighted graph. It starts from an arbitrary node and iteratively adds the
closest node (or the one with the smallest weight) to the partially constructed subtree.

 Initialization:

1. Choose an arbitrary starting vertex to begin the construction of the minimum spanning tree (MST).
2. Create an empty set to store the vertices included in the spanning tree.

 Loop Step: Repeat the following steps until all vertices are included in the spanning tree:

1. Find the lowest-weight edge: Among the edges connecting a vertex already included in the spanning tree to a vertex not yet included, select the edge with the minimum weight.
2. Add the vertex and edge:

a) Add the vertex connected by the selected edge to the set of vertices included in the spanning tree.
b) Add the selected edge to the spanning tree as well.

 Result: The minimum spanning tree of the initial graph.

Correctness:

Prim's algorithm always produces the correct MST because of the cut property:
For any cut in a graph, the minimum weight edge crossing the cut is in the
MST.
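The steps above can be sketched with a priority queue of frontier edges. The adjacency-dict graph representation is an assumption for illustration, not fixed by the text.

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm on an undirected weighted graph given as
    {u: {v: weight, ...}, ...}; returns the MST edges and total weight."""
    visited = {start}
    frontier = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(frontier)
    mst, total = [], 0
    while frontier and len(visited) < len(graph):
        w, u, v = heapq.heappop(frontier)
        if v in visited:
            continue                      # edge no longer crosses the cut
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for nxt, w2 in graph[v].items():
            if nxt not in visited:
                heapq.heappush(frontier, (w2, v, nxt))
    return mst, total

graph = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2}, "C": {"A": 4, "B": 2}}
print(prim_mst(graph, "A"))  # ([('A', 'B', 1), ('B', 'C', 2)], 3)
```

At each pop, the heap delivers exactly the minimum-weight edge crossing the cut between the tree and the rest of the graph, which is what the cut property requires.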


4. The Travelling Salesman Problem (TSP)

The Travelling Salesman Problem (TSP) involves finding the shortest path that
visits each city exactly once and returns to the starting city. This problem is important
due to its numerous applications in transportation, networks, and other fields.

Figure: an example of a TSP problem

Greedy Algorithms for Solving the TSP

A. Nearest Neighbor Algorithm

The Nearest Neighbor Algorithm is a greedy technique that solves the TSP by
choosing the nearest unvisited city at each step. Its principle is simple: at each step,
the algorithm selects the nearest unvisited city from the current city and adds it to the
ongoing tour. This process continues until all cities have been visited, with the final
step being to return to the starting city to complete the cycle. Here’s a summary of the
Nearest Neighbor Algorithm in key steps:

1. Initialization: Choose an arbitrary starting city to begin the tour.
2. Selecting the Next City: At each step, from the current city, select the nearest
unvisited city based on distance. This involves calculating the distance
between the current city and all other unvisited cities, then choosing the one
with the smallest distance.
3. Updating the Tour: Add the chosen city to the ongoing tour and mark it as
visited.
4. Repeating: Repeat steps 2 and 3 until all cities have been visited.
5. Returning to the Starting City: Once all cities have been visited, add the
final step to return to the starting city and close the cycle.
6. Calculating the Total Distance: Calculate the total distance traveled by
summing the distances between the cities in the order of the tour.


Although the Nearest Neighbor Algorithm is easy to implement and provides a quick solution for small instances of the TSP, it doesn't always guarantee the optimal solution. In some cases, it may produce tours that are locally optimal but not globally optimal, resulting in significantly longer routes than the optimal one.

Example: Consider a set of cities and the distance between each pair:

    A  B  C  D  E
A   0  8  6  5  9
B   8  0  4  7  3
C   6  4  0  5  2
D   5  7  5  0  6
E   9  3  2  6  0

Suppose we start at city A (0). Here’s how the Nearest Neighbor Algorithm would
be applied:

1. Start at city A.
2. The nearest city to A is city D (distance = 5).
3. From city D, the nearest city is city C (distance = 5).
4. From city C, the nearest city is city E (distance =2).
5. From city E, the nearest city is city B (distance = 3).
6. Finally, return to city A from city B (distance = 8).

The resulting path is A → D → C → E → B → A. The total distance traveled is 5 + 5 + 2 + 3 + 8 = 23. However, this solution may not be optimal.
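The worked example can be reproduced with a short sketch; the dict-of-dicts distance matrix mirrors the table above, and the function name is illustrative.

```python
def nearest_neighbor_tour(dist, start):
    """Nearest Neighbor heuristic for the TSP: repeatedly hop to the
    closest unvisited city, then return to the start."""
    tour = [start]
    unvisited = set(dist) - {start}
    while unvisited:
        current = tour[-1]
        nearest = min(unvisited, key=lambda c: dist[current][c])
        tour.append(nearest)
        unvisited.remove(nearest)
    tour.append(start)  # close the cycle
    length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return tour, length

dist = {
    "A": {"B": 8, "C": 6, "D": 5, "E": 9},
    "B": {"A": 8, "C": 4, "D": 7, "E": 3},
    "C": {"A": 6, "B": 4, "D": 5, "E": 2},
    "D": {"A": 5, "B": 7, "C": 5, "E": 6},
    "E": {"A": 9, "B": 3, "C": 2, "D": 6},
}
print(nearest_neighbor_tour(dist, "A"))  # (['A', 'D', 'C', 'E', 'B', 'A'], 23)
```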

Task: Write the Nearest Neighbor Algorithm in Python.

B. Cheapest Insertion Algorithm

 Start with a cycle containing a single city (alternatively, pick two cities and create a tour between them).
 At each step, choose the unvisited city that minimizes the total distance if
inserted into the existing cycle.
 Insert the selected city at the appropriate position to form a longer cycle.
 Repeat these steps until all cities are visited.

This algorithm is slightly more complex than the Nearest Neighbor Algorithm, but it
can produce better solutions in some cases.
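The steps above can be sketched as follows, under the assumption of a symmetric dict-of-dicts distance matrix like the one used for the Nearest Neighbor example; the function name is illustrative.

```python
def cheapest_insertion_tour(dist, start):
    """Cheapest Insertion heuristic: grow a cycle by inserting, at each
    step, the city whose best insertion position lengthens the tour least."""
    first = min((c for c in dist if c != start), key=lambda c: dist[start][c])
    tour = [start, first, start]          # initial two-city cycle
    unvisited = set(dist) - {start, first}
    while unvisited:
        best = None                       # (extra cost, city, position)
        for city in unvisited:
            for i in range(len(tour) - 1):
                a, b = tour[i], tour[i + 1]
                # Extra length caused by inserting `city` between a and b.
                cost = dist[a][city] + dist[city][b] - dist[a][b]
                if best is None or cost < best[0]:
                    best = (cost, city, i + 1)
        _, city, pos = best
        tour.insert(pos, city)
        unvisited.remove(city)
    length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return tour, length
```

On the five-city matrix above this heuristic also produces a tour of length 23, though in general the two heuristics can return different tours.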

5. Greedy Algorithms for the Resource Allocation Problem

The Resource Allocation Problem is a type of bin packing or memory allocation problem where a set of resources (like memory blocks, processors, or storage) must be assigned to tasks as efficiently as possible. The problem can be approached using different strategies such as First Fit, Best Fit, and Worst Fit.


1. First Fit (FF):

 Go through the items one by one in the order they are given.
 For each item, find the first available container that can accommodate it
without exceeding its capacity.
 Place the item in that container and mark it as used. If no container can fit the
item, create a new container.

2. Best Fit (BF):

 Go through the items one by one in the order they are given.
 For each item, find the available container with the smallest available space
that can accommodate the item.
 Place the item in that container. If no container fits, create a new one.

3. Worst Fit (WF):

 Go through the items one by one in the order they are given.
 For each item, find the container with the largest available space and place the
item there. If no container fits, create a new one.

4. Next Fit (NF):

 Start with an empty container.
 For each item, try placing it in the most recently used container. If it fits, place it there; otherwise, create a new container and place the item in it.
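First Fit and Best Fit can be sketched as follows; representing items as plain weights and containers as lists with a shared capacity is an illustrative assumption.

```python
def first_fit(items, capacity):
    """First Fit: put each item in the first bin with enough room,
    opening a new bin when none fits."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:                      # no existing bin fits: open a new one
            bins.append([item])
    return bins

def best_fit(items, capacity):
    """Best Fit: put each item in the feasible bin with the least
    remaining space."""
    bins = []
    for item in items:
        feasible = [b for b in bins if sum(b) + item <= capacity]
        if feasible:
            min(feasible, key=lambda b: capacity - sum(b)).append(item)
        else:
            bins.append([item])
    return bins

print(first_fit([4, 8, 1, 4, 2, 1], 10))  # [[4, 1, 4, 1], [8, 2]]
```

Worst Fit is the same loop as Best Fit with the `min` replaced by a `max`, and Next Fit only ever examines the most recently opened bin.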

Conclusion

 Greedy algorithms provide a simple and efficient way to solve many combinatorial problems by making locally optimal choices at each step.


 They don’t always guarantee a globally optimal solution but often give good
approximations.
 Problems like shortest paths, assignment problems, and knapsack problems are
well-suited to greedy approaches.
 Classic algorithms like Prim, Kruskal, and Huffman demonstrate the
efficiency of the greedy technique for these problems.
 Although greedy design is simple, proving the optimality for a given problem
can be challenging.
 Dynamic programming is more reliable for ensuring optimal solutions.

Greedy construction algorithms are intuitive, efficient, and widely used for solving combinatorial optimization problems.

References

 Cormen, Thomas H., Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. MIT Press, 2009.
 Jeff Erickson. Algorithms. University of Illinois at Urbana-Champaign, 2022. https://jeffe.cs.illinois.edu/teaching/algorithms/
