Dynamic Programming vs Greedy Method

This presentation introduces the dynamic programming and greedy approaches to designing an efficient algorithm.

Uploaded by SIDDHARTH NAHAR

Dynamic Programming
Vs
Greedy Method

Group 5
28 - Vishwesh Meher
32 - Mohommad Daanish Shaikh
35 - Siddhi Mundada
36 - Tejas Murkya
39 - Siddharth Nahar

Supervised By: Prof. Milind Kamble
Contents of this Presentation
1. Introduction.
2. Feasible and Optimal Solution.
3. Introduction to Greedy Method.
4. Introduction to Dynamic Programming.
5. Knapsack Problem.
6. Shortest Path Algorithms.
7. Final Comparison between Greedy and Dynamic Algorithms.
Introduction
Dynamic programming and the greedy method are two approaches to designing a feasible and optimal algorithm for a given problem.
Feasible and Optimal Solution

“A feasible solution to a problem is a set of answers that satisfies all the given constraints.”

Example: students have marks in the range 0-100, and we ask which students scored 80-100. The students who fall in this category form our set of feasible solutions.

“Optimization problems usually involve finding a maximum result or a minimum result. You can also say that an optimal solution is a subset of your feasible solutions.”
Greedy Method
This method involves choosing the best immediate solution to a sub-problem, expecting to find a global optimum.
Characteristics of Greedy Algorithm
● It is a method that builds a solution piece by piece.
● It uses the piece that offers the most obvious and immediate benefit.
● This method is used to solve problems where choosing the local optimum leads to the global optimum.
Dynamic Programming
Dynamic Programming is a method of writing algorithms that reuses the results of previous sub-problems within the main problem.
Characteristics of Dynamic Programming

● Overlapping sub-problems.
● Keeps a record of the results of sub-problems through memoization or tabulation.
● Uses the recorded results to calculate the final result.
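These characteristics can be sketched with a classic memoization example (illustrative, not from the slides): `fib(n-1)` and `fib(n-2)` overlap heavily, so caching each result turns exponential recursion into linear work.

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # memoization: record the result of each sub-problem
def fib(n):
    # The sub-problems overlap: fib(n-1) and fib(n-2) share most of their
    # work. The cache ensures each value is computed only once (O(n) time).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the cache, `fib(30)` would recompute the same sub-problems millions of times; with it, each of the 31 values is computed once.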
01
Knapsack Problem
Fractional & 0-1
Problem Statement
1. The knapsack problem is also called the rucksack problem.

2. It is a problem in combinatorial optimization.

3. The knapsack problem states: given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.
There are two versions of the problem:

a. 0/1 Knapsack Problem: Items are indivisible; you either take an item or not. Can
be solved with dynamic programming.

b. Fractional knapsack problem: Items are divisible; you can take any fraction of an
item. Solved using greedy method.
[Figure: a knapsack of capacity 15 kg and items of 1 kg (P:10), 4 kg (P:5), 2 kg (P:7), 3 kg (P:3), 12 kg (P:6).]
Fractional Knapsack
1. Compute the value per kg for each item.

2. Following the greedy strategy, we take as much as possible of the item with the highest value per kg.

3. If the supply of that item is exhausted and we can still carry more, we take as much as possible of the item with the next-highest value per kg.

4. After sorting the items by value per kg, the greedy algorithm runs in O(n log n) time.
1. Let us consider that the capacity of the knapsack is W = 60 and the list of provided items is shown in the following table.

2. The provided items are not sorted by pi/wi; after sorting, the items are as shown in the following table.

3. Solution: (1, 1, ½, 0)

4. The total weight of the selected items is 10 + 40 + 20 × (10/20) = 60, and the total profit is 100 + 280 + 120 × (10/20) = 380 + 60 = 440.
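The greedy strategy can be sketched in Python. The item list mirrors the worked example where the numbers are recoverable (profits 100, 280, 120 with weights 10, 40, 20); the fourth, unselected item is an assumed placeholder with a lower value-per-kg ratio.

```python
def fractional_knapsack(items, capacity):
    """items: list of (profit, weight); returns the maximum total profit."""
    # Greedy step: sort by value per kg, highest first.
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    total = 0.0
    for profit, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)      # take all of it, or the fraction that fits
        total += profit * (take / weight)
        capacity -= take
    return total

# Items from the worked example; the last item (profit 120, weight 30)
# is an assumed placeholder for the item with solution fraction 0.
items = [(100, 10), (280, 40), (120, 20), (120, 30)]
print(fractional_knapsack(items, 60))  # prints 440.0
```

The sort dominates the running time, giving the O(n log n) bound stated above.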
0-1 Knapsack Problem
1. An item cannot be broken, which means we either take the item as a whole or leave it.
2. We want to pack n items into the knapsack (bag).
Input:
● Knapsack of capacity W
● List (array) of weights and their corresponding values: w(i) and v(i)
Output: maximize the profit

Conditions:
1. Value ← max
2. Total weight ≤ capacity
Steps to follow(Tabulation)
Consider-
● Knapsack weight capacity = w
● Number of items each having some weight and value = n

Step-01:
● Draw a table, say ‘T’, with (n+1) rows and (w+1) columns.
● Fill all the boxes of the 0th row and 0th column with zero.
Step-02:

● Start filling the table row wise, top to bottom from left to right.
● Use the following formula:

T(i, j) = max { T(i-1, j) , value_i + T(i-1, j - weight_i) }
Example : For the given set of items and knapsack capacity = 5 kg, find the optimal
solution for the 0/1 knapsack problem making use of dynamic programming
approach.

Given-
● Knapsack capacity (w) = 5
kg
● Number of items (n) = 4
Step-01:
● Draw a table, say ‘T’, with (n+1) = 4 + 1 = 5 rows and
(w+1) = 5 + 1 = 6 columns.
● Fill all the boxes of the 0th row and 0th column with 0.
Step-02:
● Start filling the table row wise, top to bottom from left to right using the formula-
● T (i , j) = max { T ( i-1 , j ) , valuei + T( i-1 , j – weighti ) }

Finding T(1,1) -
We have,
● i=1, j=1
● (value)i = (value)1 = 3 & (weight)i =
(weight)1 = 2
Substituting the values, we get-
T(1,1) = max { T(1-1 , 1) , 3 + T(1-1 , 1-2) }
T(1,1) = max { T(0,1) , 3 + T(0,-1) }
T(1,1) = T(0,1) { Ignore T(0,-1) }
T(1,1) = 0
Step-03: Similarly, compute all the entries.

Time Complexity : O(n*W)
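The tabulation above can be sketched in Python. Only item 1 (value 3, weight 2) is given explicitly in the slides; the remaining values (4, 5, 6) and weights (3, 4, 5) are an assumption based on the common textbook version of this example.

```python
def knapsack_01(values, weights, W):
    """0/1 knapsack by tabulation; returns the maximum achievable value."""
    n = len(values)
    # T[i][j]: best value using the first i items with capacity j.
    # Row 0 and column 0 stay 0, matching Step-01.
    T = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):          # rows: top to bottom
        for j in range(1, W + 1):      # columns: left to right
            T[i][j] = T[i - 1][j]      # option 1: skip item i
            if weights[i - 1] <= j:    # option 2: take item i, if it fits
                T[i][j] = max(T[i][j],
                              values[i - 1] + T[i - 1][j - weights[i - 1]])
    return T[n][W]

# Item 1 (value 3, weight 2) is from the slides; the rest are assumed.
print(knapsack_01([3, 4, 5, 6], [2, 3, 4, 5], 5))  # prints 7
```

The two nested loops fill n·W cells with O(1) work each, which is where the O(n·W) time complexity comes from.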


02
Shortest Path Algorithms
Dijkstra & Bellman-Ford
Dijkstra's
Algorithm
It is an algorithm for finding the
shortest path from a starting
node to a target node.
Dijkstra's Algorithm

1. Create a set that keeps track of the vertices included in the shortest-path tree. Initially, this set is empty.
2. Assign a distance value to all vertices in the input graph. Initialize all distance values as
INFINITE. Assign distance value as 0 for the source vertex so that it is picked first.
3. While set doesn’t include all vertices
a. Pick a vertex u which is not there in Set and has a minimum distance value.
b. Include u to Set.
c. Update distance value of all adjacent vertices of u. To update the distance values,
iterate through all adjacent vertices. For every adjacent vertex v, if the sum of
distance value of u (from source) and weight of edge u-v, is less than the distance
value of v, then update the distance value of v.
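The steps above can be sketched in Python, using the standard-library heapq module as the priority queue (the graph representation and vertex names are illustrative):

```python
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, weight), ...]}; returns shortest distances from source."""
    # Step 2: all distances start as infinity, except the source at 0.
    dist = {u: float('inf') for u in graph}
    dist[source] = 0
    pq = [(0, source)]  # min-heap of (distance, vertex); source is picked first
    while pq:
        # Step 3a: pick the unfinished vertex with minimum distance value.
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale heap entry; u was already finalized with a smaller d
        # Step 3c: update the distance values of all adjacent vertices of u.
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

g = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('C', 5)],
     'B': [('C', 1)], 'C': []}
print(dijkstra(g, 'S'))  # prints {'S': 0, 'A': 1, 'B': 3, 'C': 4}
```

Instead of deleting entries from the heap when a distance improves, this sketch pushes a new entry and skips stale ones on pop, a standard idiom with binary heaps.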
[Figure: example graph traced step by step; vertices are added to the set in the order S, B, D, A, C, E, F until the destination is reached.]
Analysis of Dijkstra’s Algorithm
Case-01:

This case is valid when:
● The given graph G is represented as an adjacency matrix.
● The priority queue Q is represented as an unordered list.

Here, E = Edges, V = Vertices.
● Thus, the total time complexity becomes O(V^2).
Analysis of Dijkstra’s Algorithm
Case-02:
This case is valid when:
● The given graph G is represented as an adjacency list.
● The priority queue Q is represented as a binary heap.

Here, E = Edges, V = Vertices.
● The time complexity is O((E + V) log V), often written as O(E log V). (The tighter bound O(E + V log V) requires a Fibonacci heap.)
Single Source Shortest
Path
(Bellman Ford Algorithm)
Bellman - Ford Algorithm
It is an algorithm for finding the shortest path from a starting node to all other nodes.

1. Initialize the distance of the source node as 0 and of all other vertices as infinity.
2. Relax all the edges (|V| - 1) times:
a. For every edge (u, v) with weight w(u, v), if the distance value of u (from the source) plus w(u, v) is less than the distance value of v, then update the distance value of v.
3. The advantage of this algorithm over Dijkstra's algorithm is that it still produces correct results when some edge weights are negative.
Bellman - Ford Algorithm

[Figure: example graph with vertices A, B, C, D.]

Relaxing condition:
if (d[u] + w(u, v) < d[v]) {
    d[v] = d[u] + w(u, v)
}
Some important points in
Bellman - Ford Algorithm

1. The optimal solution should be attained within (|V| - 1) rounds of relaxation.
2. This can be checked by relaxing all the edges one more time.
3. If any distance still decreases in that extra round, the graph contains a cycle whose total edge weight is negative; in that case no optimal solution exists, since the relaxation process would keep improving forever.
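These points translate directly into code: relax every edge (|V| - 1) times, then relax once more to detect a negative-weight cycle (the edge list and vertex names below are illustrative):

```python
def bellman_ford(edges, vertices, source):
    """edges: list of (u, v, weight); returns shortest distances from source."""
    dist = {u: float('inf') for u in vertices}
    dist[source] = 0
    # Relax all edges |V| - 1 times (point 1).
    for _ in range(len(dist) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra round (point 2): any further improvement means a
    # negative-weight cycle is reachable (point 3).
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle detected")
    return dist

edges = [('A', 'B', 4), ('A', 'C', 2), ('C', 'B', -1)]
print(bellman_ford(edges, 'ABC', 'A'))  # prints {'A': 0, 'B': 1, 'C': 2}
```

Note how the negative edge C→B improves the path to B from 4 down to 1, something Dijkstra's algorithm cannot handle.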
Final Comparison

GREEDY METHOD
1. Looks at the best immediate solution.
2. The greedy method gives an optimal solution only when the problem has a greedy-choice property; it may not give a correct solution to all problems.
3. More efficient in terms of memory.
4. Greedy algorithms are faster, as we do not have to compute multiple sub-problems.

DYNAMIC PROGRAMMING
1. Develops a global optimum solution.
2. Gives a guaranteed optimum solution, as we consider all possibilities and then choose the best.
3. Memory complexity increases due to storing solutions at each step.
4. Dynamic programming algorithms are generally slower due to the computation of all possible cases.
Thank You !
CREDITS: This presentation template was
created by Slidesgo, including icons by
Flaticon and infographics & images by
Freepik
