
MCSL 216 Journal Section 1 Complete

The document outlines a lab session focused on the design and analysis of algorithms, specifically implementing simple algorithms such as Euclid's algorithm for GCD, Horner's method for polynomial evaluation, and matrix multiplication. It also introduces the Fractional Knapsack problem, explaining its greedy algorithm approach to maximize profit while adhering to weight constraints. The document provides detailed examples, objectives, and coding implementations for the algorithms discussed.

Uploaded by

Amit Gaurav
Copyright © All Rights Reserved

Created by: VAIBHAV PANCHMATIYA – Mo. 9016334301

MCSL – 216

SECTION 1

DESIGN AND ANALYSIS OF ALGORITHMS LAB

Session 1: Implementation of Simple Algorithms


Introduction
 The focus of this section is to implement small problems such as Euclid’s algorithm for GCD,
polynomial evaluation through Horner’s method, algorithms for exponentiation,
and simple sorting algorithms.
 Selection sort begins by finding the least element in the array, which is moved to the first position. Then
the second least element is searched for and moved to the second position in the array. This process
continues until the entire array is sorted.
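The selection sort described above can be sketched in Python as follows (an illustrative sketch; the journal gives no code here, so the function name is an assumption):

```python
def selection_sort(a):
    """Sort the list a in place by repeatedly selecting the least remaining element."""
    n = len(a)
    for i in range(n - 1):
        # Find the index of the least element in the unsorted part a[i:].
        min_index = i
        for j in range(i + 1, n):
            if a[j] < a[min_index]:
                min_index = j
        # Move it to position i by swapping.
        a[i], a[min_index] = a[min_index], a[i]
    return a

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```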

Objectives
The main objectives of the algorithms are to:
 Implement Euclid’s algorithm to find GCD
 Implement Horner’s method to evaluate polynomial expression
 Compare the performance of Horner’s method with brute force method
 Implement simple sorting algorithms
 Implement multiplication of two matrices

Problems for Implementation

Q.1. Implement Euclid’s algorithm to find GCD (15265, 15) and calculate the number of
times the mod and assignment operations will be required.
Ans.
 Euclid’s algorithm is a method for finding the greatest common divisor (GCD) of two numbers.
 The algorithm works by repeatedly applying the division algorithm to find the remainder when one
number is divided by the other, and then replacing the larger number with the smaller number and the
smaller number with the remainder.

 To find the GCD of 15265 and 15 using Euclid’s algorithm, we start by dividing the larger number
by the smaller number and then replacing the larger number with the smaller number and the smaller
number with the remainder.
 We continue this process until the smaller number becomes 0, at which point the GCD is the
remaining non-zero number.

 Here’s how the algorithm works step by step, using the loop “while b ≠ 0: r = a mod b; a = b; b = r” starting with a = 15265, b = 15:

 Step 1:
 15265 ÷ 15 = 1017 with a remainder of 10
 Mod operations: 1 (finding the remainder when 15265 is divided by 15)
 Assignments: 2 (a ← 15, b ← 10)

 Step 2:
 15 ÷ 10 = 1 with a remainder of 5
 Mod operations: 1 (finding the remainder when 15 is divided by 10)
 Assignments: 2 (a ← 10, b ← 5)

 Step 3:
 10 ÷ 5 = 2 with a remainder of 0
 Mod operations: 1 (finding the remainder when 10 is divided by 5)
 Assignments: 2 (a ← 5, b ← 0)

 After Step 3, the smaller number b is 0, so the algorithm stops; no mod operation with a zero divisor is ever attempted. The GCD is the remaining non-zero number, which is 5.

 So, the GCD of 15265 and 15 is 5.

 In total, 3 mod operations and 6 assignments (two per iteration) were required to find the GCD using Euclid’s
algorithm.
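The trace can be checked with a short Python sketch that counts the operations as it runs (illustrative; the exact totals depend on which operations one chooses to count — here, one mod per iteration and the two updates of a and b as assignments):

```python
def gcd_with_counts(a, b):
    """Euclid's algorithm, counting mod operations and the assignments to a and b."""
    mods = 0
    assignments = 0
    while b != 0:
        r = a % b        # one mod operation per iteration
        mods += 1
        a, b = b, r      # two assignments per iteration (a <- b, b <- r)
        assignments += 2
    return a, mods, assignments

g, mods, assignments = gcd_with_counts(15265, 15)
print("GCD:", g)                    # GCD: 5
print("Mod operations:", mods)      # 3
print("Assignments:", assignments)  # 6
```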

Q.3. Implement multiplication of two matrices A [4, 4] and B [4, 4] and calculate (i) how
many times the innermost and the outermost loops will run (ii) total number of
multiplications and additions in computing the multiplication.
Ans.
Let’s go through the algorithm for matrix multiplication and analyse the number of times the
innermost and outermost loops will run, as well as the total number of multiplications and additions.

 Matrix Multiplication Algorithm:


 Suppose you have two matrices A [4x4] and B [4x4].
 The resulting matrix C [4x4] is obtained by multiplying the elements of each row of A with the
corresponding elements of each column of B and summing the products.

Step 1:
Start with the outermost loop for rows of matrix A.
Step 2:
Inside the outer loop, have a nested loop for columns of matrix B.
Step 3:
Inside the nested loop, have another loop for iterating through the common dimension (columns of A
or rows of B).
Step 4:
Multiply the corresponding elements of A and B and accumulate the result in the corresponding
element of C.

 Example:

o This gives an overview of the matrix multiplication algorithm and its analysis using a small
example.
o Let’s consider small values for matrices A and B for simplicity:
Matrix A: Matrix B:
|2 3 1 4| |3 1 4 2|
|5 2 3 7| |5 6 2 1|
|1 6 2 3| |3 4 7 2|
|4 5 6 1| |1 2 3 5|

Matrix C (Result):
| 28, 32, 33, 29 |
| 41, 43, 66, 53 |
| 42, 51, 39, 27 |
| 56, 60, 71, 30 |

 Analysis:

 Outermost Loop Iterations:


For a matrix of size NxN, the outermost loop will run N times. In this example, it will run 4
times.
 Innermost Loop Iterations:
The innermost loop, responsible for the element-wise multiplication and addition, runs N
times for each of the N² result elements. So, for a 4x4 matrix, it runs 4 × 4 × 4 = 64 times in total.
 Total Number of Multiplications:
Each innermost iteration performs one multiplication. In this example, there are 4 × 4 × 4 = 64
multiplications.
 Total Number of Additions:
For each result element, N products are summed, which requires N − 1 additions. In this example, there
are 4 × 4 × 3 = 48 additions.

 Here’s a Python code that implements the multiplication of two matrices A and B
and calculates the number of times the innermost and the outermost loops will
run, as well as the total number of multiplications and additions:

 Coding:

N = 4

def multiply_matrices(A, B):
    result = [[0 for _ in range(N)] for _ in range(N)]
    for i in range(N):          # Outermost loop (rows of A)
        for j in range(N):      # Inner loop for columns of B
            for k in range(N):  # Innermost loop (common dimension)
                result[i][j] += A[i][k] * B[k][j]  # Multiply and accumulate
    return result

# Example matrices A and B
A = [
    [2, 3, 1, 4],
    [5, 2, 3, 7],
    [1, 6, 2, 3],
    [4, 5, 6, 1]
]
B = [
    [3, 1, 4, 2],
    [5, 6, 2, 1],
    [3, 4, 7, 2],
    [1, 2, 3, 5]
]

# Multiply matrices A and B
result_matrix = multiply_matrices(A, B)

# Output the result matrix
print("Result Matrix:")
for row in result_matrix:
    print(row)

# Analysis
outermost_loops = N
innermost_loops = N * N * N        # N rows * N columns * N iterations
total_multiplications = N * N * N  # one multiplication per innermost iteration
total_additions = N * N * (N - 1)  # N - 1 additions per result element
print("\n(i) Number of times outermost loop runs:", outermost_loops)
print("(ii) Number of times innermost loop runs:", innermost_loops)
print("Total number of multiplications:", total_multiplications)
print("Total number of additions:", total_additions)

 Result Matrix:
[28, 32, 33, 29]
[41, 43, 66, 53]
[42, 51, 39, 27]
[56, 60, 71, 30]

(i) Number of times outermost loop runs: 4


(ii) Number of times innermost loop runs: 64

Total number of multiplications: 64


Total number of additions: 48

Session 2: Fractional Knapsack Problem

Introduction
 In a greedy algorithm, the choice that looks best at the moment is selected, with the hope that it will
lead to the optimal solution.
 This is one of the approaches to solving optimization problems.
 The Fractional Knapsack problem is an optimization problem that we want to solve through the greedy technique.
 In this problem, a Knapsack (or bag) of some capacity is considered which is to be filled with
objects.
 We are given some objects with their weights and associated profits.
 The problem is to fill the given Knapsack with objects such that the sum of the profit associated with
the objects that are included in the Knapsack is maximum.
 The constraint in this problem is that the sum of the weight of the objects that are included in the
Knapsack should be less than or equal to the capacity of the Knapsack.
 However, the objects can be included in the Knapsack in fractions, which is why this problem is
termed the Fractional Knapsack problem.

Objectives
The main objectives of the session are to:

o Implement Fractional Knapsack Problem


o Test the implementation on different problem instances
o Implement greedy technique in general.

Problems for Implementation

Implement Fractional Knapsack algorithm and find out optimal result for the following problem instances:

Q.1. (P1, P2, P3, P4, P5, P6, P7) = (15, 5, 20, 8, 7, 20, 6)
(W1, W2, W3, W4,W5,W6,W7) = (3, 4, 6, 8, 2, 2, 3)
Maximum Knapsack Capacity = 18

Ans.

 Definition:

 The Knapsack problem is an optimization problem in which a set of items, each with a specific
weight and value, must be selected to maximize the total value within a given weight capacity
constraint.
 The problem is widely applied in various fields, such as resource allocation, finance, and
logistics, where efficient decision-making is crucial in selecting items with the best overall value
while adhering to resource limitations.
 The Knapsack problem is categorized into two main types: 0/1 Knapsack, where items cannot be
divided (typically solved with dynamic programming), and Fractional Knapsack, where items can be
broken into fractions and a greedy strategy yields the optimal solution.

 Basic Formula:

 The problem is to select fractions of items with specific weights and profits so as to maximize the
total profit within a given weight capacity. The basic formulation is:

Constraint: Σ Xi × Wi ≤ M
Objective: maximize Σ Xi × Pi
where each fraction satisfies 0 ≤ Xi ≤ 1

 Example (finding the optimal solution that gives maximum profit for the given values):
N = number of objects, M = capacity, P = profit values, W = weights

Object O = 1,2,3,4,5,6,7
Profit P = 15,5,20,8,7,20,6
Weight W = 3,4,6,8,2,2,3

Step 1:
(Find the profit/weight ratio of each object)
P1/W1 = 15/3 = 5
P2/W2 = 5/4 = 1.25
P3/W3 = 20/6 = 3.33
P4/W4 = 8/8 = 1
P5/W5 = 7/2 = 3.5
P6/W6 = 20/2 = 10
P7/W7 = 6/3 = 2

Step 2:
(Arrange the profit/weight ratios in non-increasing order.) The highest
profit/weight ratio is 10, which is P6/W6, so the 1st value is object 6. The second highest ratio is
5, which is P1/W1, so the 2nd value is object 1. Similarly, calculate all n ratios and arrange them in non-
increasing order.
P Order = (P6, P1, P5, P3, P7, P2, P4)
P Ratio = (10, 5, 3.5, 3.33, 2, 1.25, 1)

Step 3:
Select objects in this order. A fraction Xi = 1 means object i is fully included in the bag,
0 < Xi < 1 means it is partly included, and Xi = 0 means it is excluded, subject to the
weight capacity constraint:
P = P6, P1, P5, P3, P7, P2, P4
Xi Fraction = 1, 1, 1, 1, 1, 2/4 (0.5), 0

Step 4:
The total capacity is 18. Process the objects step by step, subtracting the weight of each selected
object (Xi = 1) from the remaining capacity:
The 1st (highest-ratio) object has weight 2, so 18 − 2 = 16 is the remaining
capacity of the bag.
The 2nd object has weight 3, so 16 − 3 = 13 is now the remaining capacity of the bag.
Continue this process until the remaining capacity of the bag is zero; the last object that does not fit completely is taken as a fraction.

Step 5:
Now, multiply each Xi fraction by the corresponding Wi:
Σ XiWi = 1×2 + 1×3 + 1×2 + 1×6 + 1×3 + 0.5×4 + 0×8
Σ XiWi = 2 + 3 + 2 + 6 + 3 + 2 + 0
Σ XiWi = 18

Step 6:
Now, multiply each Xi fraction by the corresponding Pi:
Σ XiPi = 1×20 + 1×15 + 1×7 + 1×20 + 1×6 + 0.5×5 + 0×8
Σ XiPi = 20 + 15 + 7 + 20 + 6 + 2.5 + 0
Σ XiPi = 70.5

Final Optimal Value is 70.5 with


Optimal Knapsack Weight 18
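The greedy procedure above can be sketched in Python as follows (illustrative; the function and variable names are assumptions, not part of the journal):

```python
def fractional_knapsack(profits, weights, capacity):
    """Greedy fractional knapsack: take objects in non-increasing profit/weight order."""
    # Pair each object with its profit/weight ratio and sort in non-increasing order.
    items = sorted(zip(profits, weights), key=lambda pw: pw[0] / pw[1], reverse=True)
    total_profit = 0.0
    remaining = capacity
    for p, w in items:
        if remaining <= 0:
            break
        fraction = min(1.0, remaining / w)  # whole object, or the part that still fits
        total_profit += fraction * p
        remaining -= fraction * w
    return total_profit

profits = [15, 5, 20, 8, 7, 20, 6]
weights = [3, 4, 6, 8, 2, 2, 3]
print(fractional_knapsack(profits, weights, 18))  # 70.5
```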

Session 3 : Task Scheduling Algorithm

Introduction
 A task scheduling problem is formulated as an optimization problem in which we need to
determine the set of tasks from the given tasks that can be accomplished within their deadlines
along with their order of scheduling such that the profit is maximum.
 So, this is a maximization optimization problem with a constraint that tasks must be completed
within their specified deadlines.

Objectives
The main objectives of this session are to:
 Implement a task scheduling algorithm
 Test the algorithm on different problem instances
 Differentiate between a brute force approach and an efficient task scheduling algorithm

Problems for Implementation

Q.1. Apply a brute force approach to schedule three jobs J1, J2 and J3 with service
times 5, 8 and 12 respectively. The actual service time units are not relevant to the
problem. Make all possible job schedules, calculate the total time spent in the system by the
jobs, and find the optimal schedule (minimum total time). If there are N jobs, what would be the
total number of job schedules?

Ans.

 A brute force approach is a straightforward, exhaustive search algorithm that systematically


explores all possible solutions to a problem.
 It involves systematically checking all possible options and choosing the one that meets the
criteria or solves the problem.
 In the context of scheduling jobs, a brute force approach would mean considering all possible
permutations or combinations of job orders and evaluating each one to find the optimal schedule.
 This method guarantees finding the best solution but may become impractical for large problem
instances due to the sheer number of combinations to check.
 Essentially, it’s like trying every possible combination without employing any specific
optimization or heuristic techniques.
 While effective for small-scale problems, it may not be the most efficient solution for larger,
more complex scenarios.

 Let’s approach this step by step.


 First, let’s list all possible job schedules for the three jobs (J1, J2, J3).
 The total number of schedules for N jobs can be calculated using the factorial function, denoted
as N! (N factorial).
 For three jobs (J1, J2, J3), the schedules are as follows:
J1, J2, J3
J1, J3, J2
J2, J1, J3
J2, J3, J1
J3, J1, J2
J3, J2, J1

 Now, let’s calculate the total time spent in the system for each schedule.
 The time spent by a job in the system is its waiting time plus its service time, i.e., its completion time; the total for a schedule is the sum over all jobs. Assuming that the jobs are executed sequentially:
1) J1, J2, J3: Total Time = 5 + (5+8) + (5+8+12) = 5 + 13 + 25 = 43
2) J1, J3, J2: Total Time = 5 + (5+12) + (5+12+8) = 5 + 17 + 25 = 47
3) J2, J1, J3: Total Time = 8 + (8+5) + (8+5+12) = 8 + 13 + 25 = 46
4) J2, J3, J1: Total Time = 8 + (8+12) + (8+12+5) = 8 + 20 + 25 = 53
5) J3, J1, J2: Total Time = 12 + (12+5) + (12+5+8) = 12 + 17 + 25 = 54
6) J3, J2, J1: Total Time = 12 + (12+8) + (12+8+5) = 12 + 20 + 25 = 57

 Now, the optimal schedule is the one with the minimum total time, which is J1, J2, J3 with a
total of 43 units. Note that it runs the jobs in non-decreasing order of service time (shortest job first).
 The total number of job schedules for N jobs can be calculated using N! (N factorial). For three
jobs, it’s 3! = 3 x 2 x 1 = 6.
 So, there are 6 possible job schedules for three jobs. If there are N jobs, the total number of job
schedules would be N!
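The brute force enumeration can be sketched in Python with itertools.permutations (illustrative; the quantity computed is the sum of job completion times, i.e., the time each job spends in the system):

```python
from itertools import permutations

def total_time_in_system(schedule):
    """Sum of completion times when the jobs run sequentially in the given order."""
    total, elapsed = 0, 0
    for service_time in schedule:
        elapsed += service_time   # completion time of this job
        total += elapsed          # add this job's time spent in the system
    return total

jobs = [5, 8, 12]  # service times of J1, J2, J3
for order in permutations(jobs):
    print(order, "->", total_time_in_system(order))
best = min(permutations(jobs), key=total_time_in_system)
print("Optimal schedule:", best, "with total time", total_time_in_system(best))
```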

Session 4: Huffman’s Coding Algorithm


Introduction
 Huffman coding is a greedy algorithm that is used to compress data.
 Data can be a sequence of characters, and the number of times a character occurs in the data is called the
frequency of that character.
 Basically, compression is a technique to reduce the size of the data.
 Depending on the characteristics of the data, Huffman coding typically saves 20–90% of the space.
 The Huffman algorithm checks the frequency of each character, represents the characters in the form of a Huffman
tree, and builds an optimal prefix code representing each character as a binary string.
 Huffman coding is also known as variable-length coding.

Objectives
The main objectives of this session are to:
 Implement Huffman’s Coding algorithm.
 Test the implementation on different problem instances
 Construct the optimal binary prefix code

Problems for Implementation

Q.1. Implement Huffman’s coding algorithm and run on the problem instance of

Letters:   A  B  I  M  S  X  Z
Frequency: 10 7  15 8  10 5  2

Show the complete steps.

Ans.
 Huffman’s Coding Algorithm is a greedy algorithm used for lossless data compression.
 Here are the steps to implement Huffman’s Coding Algorithm for the given problem instance:

 Step 1:
Create Nodes for each letter and their respective frequencies:
A: 10
B: 7
I: 15
M: 8
S: 10
X: 5
Z: 2

 Step 2:
Create initial priority queue (min-heap) with the nodes:
Z: 2
X: 5
B: 7
M: 8
A: 10
S: 10
I: 15
 Step 3:
Build the Huffman Tree (repeatedly merge the two nodes with the lowest frequencies):
 Merge the two nodes with the lowest frequencies:
o Combine Z (2) and X (5): (ZX) - frequency 7
o Priority queue: (B: 7), (ZX: 7), (M: 8), (A: 10), (S: 10), (I: 15)
 Merge the two nodes with the lowest frequencies:
o Combine B (7) and ZX (7): (BZX) - frequency 14
o Priority queue: (M: 8), (A: 10), (S: 10), (BZX: 14), (I: 15)
 Merge the two nodes with the lowest frequencies:
o Combine M (8) and A (10): (MA) - frequency 18
o Priority queue: (S: 10), (BZX: 14), (I: 15), (MA: 18)
 Merge the two nodes with the lowest frequencies:
o Combine S (10) and BZX (14): (SBZX) - frequency 24
o Priority queue: (I: 15), (MA: 18), (SBZX: 24)
 Merge the two nodes with the lowest frequencies:
o Combine I (15) and MA (18): (IMA) - frequency 33
o Priority queue: (SBZX: 24), (IMA: 33)
 Merge the two nodes with the lowest frequencies:
o Combine SBZX (24) and IMA (33): root - frequency 57
o Priority queue: (root: 57)
 Step 4:
Generate Huffman Codes:
• Traverse the Huffman Tree:
o A left edge represents 0, a right edge represents 1.
o Taking the lower-frequency subtree as the left child at each merge, the codes are:
S: 00
I: 10
B: 010
M: 110
A: 111
Z: 0110
X: 0111
 Step 5:
So, the Huffman codes for the given problem instance are:
S: 00, I: 10, B: 010, M: 110, A: 111, Z: 0110, X: 0111
(The exact bit patterns depend on how ties are broken and which child is labelled 0, but every
optimal code has the same code lengths: 2 bits for S and I, 3 bits for B, M and A, and 4 bits for
Z and X, giving 153 bits in total.)
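The construction can be reproduced with a min-heap in Python (illustrative; the exact bit patterns depend on how frequency ties are broken and which child is labelled 0, but the total encoded length is the same for every optimal code):

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """Build Huffman codes for {symbol: frequency} using a min-heap."""
    tiebreak = count()  # makes heap entries comparable when frequencies tie
    heap = [(f, next(tiebreak), sym) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two lowest-frequency nodes
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (left, right)))
    _, _, tree = heap[0]

    codes = {}
    def assign(node, prefix):
        if isinstance(node, tuple):          # internal node: recurse into children
            assign(node[0], prefix + "0")    # left edge = 0
            assign(node[1], prefix + "1")    # right edge = 1
        else:
            codes[node] = prefix or "0"      # leaf ("0" covers a single-symbol input)
    assign(tree, "")
    return codes

freqs = {"A": 10, "B": 7, "I": 15, "M": 8, "S": 10, "X": 5, "Z": 2}
codes = huffman_codes(freqs)
print(codes)
print("Total bits:", sum(freqs[s] * len(codes[s]) for s in freqs))  # 153
```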

Session 5: Divide and Conquer Technique


Introduction
 In Divide and conquer approach, the original problem is divided into two or more sub-problems
recursively, till it is small enough to be solved easily.
 Each sub-problem is some fraction of the original problem.
 Next, the solutions of the sub-problems are combined together to generate the solution of the
original problem.

Objectives
The main objectives of this session are to:
 Write the recurrence relation of a problem
 Implement the Binary Search, Merge Sort and Quick Sort algorithms
 Draw a tree of recursive calls

Problems for Implementation


Q.3. Implement the Merge Sort algorithm to sort the following list and show the process step by
step: 200 150 50 100 75 25 10 5. Draw a tree of recursive calls for this problem.
Ans.
Let's implement the Merge Sort algorithm step by step for the given list:
200, 150, 50, 100, 75, 25, 10, 5.

 Step 1: Initial List


Given list: [200, 150, 50, 100, 75, 25, 10, 5]

 Step 2: Divide
Divide the list into two halves: [200, 150, 50, 100] and [75, 25, 10, 5]
 Step 3: Recursively Sort
Recursively sort each half:
1. Sorting [200,150,50,100]:

 Divide: [200,150] and [50,100]


 Recursively sort:
[200] and [150] (for the first half)
No further division needed as both sub lists have only one element.
 Recursively sort:
[50] and [100] (for the second half)
No further division needed as both sub lists have only one element.
 Merge the sorted halves:
[50,100,150,200]
2. Sorting [75,25,10,5]:

 Divide: [75,25]and[10,5]
 Recursively sort:
[75] and [25] (for the first half)
No further division needed as both sub lists have only one element.
 Recursively sort:
[10] and [5] (for the second half)
No further division needed as both sub lists have only one element.
 Merge the sorted halves:
[5,10,25,75]

 Step 4: Merge

Merge the two sorted halves from Step 3:


[50, 100, 150, 200] and [5, 10, 25, 75]

 Step 5: Final Sorted List

The final sorted list is [5, 10, 25, 50, 75, 100, 150, 200]

 Recursive Calls Tree:

[200, 150, 50, 100, 75, 25, 10, 5]

[200, 150, 50, 100] [75, 25, 10, 5]

[200, 150] [50, 100] [75, 25] [10, 5]

[200] [150] [50] [100] [75] [25] [10] [5]
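The recursive process traced above can be implemented as follows (an illustrative sketch):

```python
def merge_sort(a):
    """Recursively split the list, sort each half, and merge the sorted halves."""
    if len(a) <= 1:
        return a                      # base case: zero or one element is sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])        # sort the left half
    right = merge_sort(a[mid:])       # sort the right half
    # Merge step: repeatedly take the smaller front element of the two halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([200, 150, 50, 100, 75, 25, 10, 5]))
# [5, 10, 25, 50, 75, 100, 150, 200]
```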



Q.4. Implement Quick Sort’s algorithm on your machine to do sorting of the following
list of elements 12 20 22 16 25 18 8 10 6 15 Show step by step processes.
Ans.
 Here are the steps of the Quick Sort algorithm for the given list [12, 20, 22, 16, 25, 18, 8, 10, 6, 15].
 The basic idea of Quick Sort is to partition the array around a pivot and recursively sort each partition.

 Step 1: Initial List:


[12, 20, 22, 16, 25, 18, 8, 10, 6, 15]
 Step 2: Choose Pivot:
(Select a pivot element from the array. Common choices include the first, last, or a random element.)
Let's choose the last element, 15, as the pivot.
 Step 3: Partitioning:
Rearrange the elements in the array so that all elements less than the pivot are on the left, and all
elements greater than the pivot are on the right. The pivot itself is now in its final sorted position.
[12, 10, 6, 8, 15, 18, 25, 16, 22, 20]
 Step 4: Recursive Steps:
(Apply the Quick Sort algorithm recursively to the left and right sub arrays created by the
partitioning step.)
• Apply Quick Sort to the left partition ([12, 10, 6, 8]); the pivot 15 is already in its final position.
• Apply Quick Sort to the right partition ([18, 25, 16, 22, 20]).
 Step 5: Repeat:
Continue this process recursively until the base case is reached (when the sub arrays have
only one element).
 Step 6: Sorted List:
Concatenate the sorted left partition, the pivot, and the sorted right partition.
 Final Sorted List: [6, 8, 10, 12, 15, 16, 18, 20, 22, 25]
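A runnable sketch of the procedure (illustrative; this uses the Lomuto partition scheme with the last element as pivot, so the intermediate arrangements may differ slightly from the trace shown, but the final result is the same):

```python
def quick_sort(a, low=0, high=None):
    """In-place Quick Sort using Lomuto partitioning with the last element as pivot."""
    if high is None:
        high = len(a) - 1
    if low < high:
        p = partition(a, low, high)        # pivot lands in its final position
        quick_sort(a, low, p - 1)          # sort elements left of the pivot
        quick_sort(a, p + 1, high)         # sort elements right of the pivot
    return a

def partition(a, low, high):
    pivot = a[high]                        # choose the last element as pivot
    i = low - 1
    for j in range(low, high):
        if a[j] < pivot:                   # move smaller elements to the left side
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[high] = a[high], a[i + 1]  # place the pivot between the two sides
    return i + 1

print(quick_sort([12, 20, 22, 16, 25, 18, 8, 10, 6, 15]))
# [6, 8, 10, 12, 15, 16, 18, 20, 22, 25]
```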

Session 6: Single Source Shortest Path Algorithm


Introduction
 Dijkstra’s algorithm solves the single-source shortest path problem when all edges have non-negative
weights.
 It is very similar to Prim’s algorithm: at each step it greedily chooses the closest unvisited vertex. With
non-negative edge weights, these locally optimal choices also yield the globally shortest paths.

Objectives
Main objectives are to:
 Implement Dijkstra’s algorithm to implement a single source shortest path using greedy technique
 Apply the implemented algorithm on different problem instances
 Represent a graph
 Find out the similarities in implementations of Prim’s algorithm and Dijkstra’s algorithm

Problems for Implementation


Implement Dijkstra’s algorithm to find the single source shortest path algorithm from different
sources to the rest of nodes in the following graph and show all the intermediate processes:

[Figure: a weighted graph on vertices A, B, C, D, E, F and G; the edge weights in the original figure include 1, 2, 3, 4, 6, 7 and 8 — layout not reproduced]

Q.1. Find the shortest path from A to the rest of vertices.


Ans.
Let’s consider using Dijkstra’s algorithm for finding the shortest path from vertex A to the rest of the
vertices.
 Dijkstra’s Algorithm :
 1. Initialization :
 Set the distance to A as 0 and distances to all others vertices as infinity.
 Create a priority queue to store vertices and their distances.
 2. Processing :
 Extract the vertex u with the minimum distance from the priority queue.
 For each neighbour v of u: if the distance to u plus the weight of the edge (u, v) is shorter than the current known
distance to v, update the distance to v.
 3. Result :
 The final distances represent the shortest paths from A to all other vertices.

 Analysis :
 Dijkstra’s algorithm has a time complexity of O((V + E) log V), where V is the number
of vertices and E is the number of edges.
 It guarantees the shortest path in a graph with non-negative edge weights.
 It is particularly efficient when the graph is sparse.

 Here is the trace table of Dijkstra’s algorithm finding the single-source shortest paths in the graph above,
starting from node A:

Selected A B C D E F G
Node
G 0 6 8 8 8 8 4
B 0 6 8 12 8 8 4
C 0 6 8 12 8 8 4
D 0 6 8 10 11 12 4
E 0 6 8 10 11 12 4
F 0 6 8 10 11 12 4
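A heap-based sketch of the algorithm (illustrative). The original figure is not reproduced cleanly, so the adjacency list below is a hypothetical graph, chosen only so that its shortest distances from A match the final row of the table above; treat the edges and weights as assumptions:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for a graph given as {u: [(v, weight), ...]}."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]                   # priority queue of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                     # stale queue entry, skip it
        for v, w in graph[u]:
            if d + w < dist[v]:          # relaxation step
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Hypothetical undirected graph (the edge weights are assumptions, not the original figure).
graph = {
    'A': [('B', 6), ('C', 8), ('G', 4)],
    'B': [('A', 6), ('C', 2)],
    'C': [('A', 8), ('B', 2), ('D', 2)],
    'D': [('C', 2), ('E', 1)],
    'E': [('D', 1), ('F', 1)],
    'F': [('E', 1)],
    'G': [('A', 4)],
}
print(dijkstra(graph, 'A'))
```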

Session 7: Minimum Cost Spanning Tree


Introduction
 A connected subgraph S of a graph G(V, E) is said to be a spanning tree if and only if it contains all
the vertices of the graph G.
 A spanning tree must be acyclic.
 A spanning tree whose total edge weight is minimum over all spanning trees is called a minimum spanning tree,
or MST.

Objectives
The main objectives of the session are to:
 Implement Prim’s algorithm to find a minimum cost spanning tree
 Implement Kruskal’s algorithm to find a minimum cost spanning tree
 Differentiate between the implementations of two algorithms
 Implementation of disjoint data structure

Problems for Implementation


Q.1. Implement Prim’s algorithm to find a minimum cost spanning tree (MCST) in the
following graph. Show all the processes.

[Figure: a weighted graph on vertices V1–V10; the edge weights in the original figure include 35, 20, 10, 50, 22, 12, 30, 8, 5, 9, 60 and 16 — layout not reproduced]

Ans.
It is a greedy algorithm that starts from a vertex and continues adding the smallest-weight edge
that connects a new vertex, until all vertices are reached.

 Steps to Implement Prim’s Algorithm :

 First, initialize the minimum spanning tree with a randomly chosen vertex.
 Then, among all the edges that connect the tree built so far to a new
vertex, select the minimum-weight edge and add it to the tree.
 Repeat the previous step until the minimum spanning tree is formed.
 Application:
 The edges are selected in the following order:
 V1-V4, V4-V8, V4-V2, V4-V5, V8-V9, V9-V10, V10-V6, V8-V4, V3-V7
 with weights 20, 5, 10, 12, 8, 16, 9, 22, 8 respectively.
 The minimum cost of the tree is 20 + 5 + 10 + 12 + 8 + 16 + 9 + 22 + 8 = 110.
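A heap-based sketch of Prim's algorithm (illustrative). Since the figure is not reproduced cleanly, the demonstration graph below is a small made-up example; the function itself works for any adjacency-list graph:

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm for an undirected graph given as {u: [(v, weight), ...]}."""
    visited = {start}
    # Heap of candidate edges (weight, u, v) leaving the current tree.
    edges = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(edges)
    mst, total = [], 0
    while edges and len(visited) < len(graph):
        w, u, v = heapq.heappop(edges)
        if v in visited:
            continue                      # this edge would create a cycle
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for x, wx in graph[v]:
            if x not in visited:
                heapq.heappush(edges, (wx, v, x))
    return mst, total

# Hypothetical small graph (edge weights are assumptions, not the original figure).
graph = {
    'V1': [('V2', 35), ('V3', 20), ('V4', 10)],
    'V2': [('V1', 35), ('V4', 50)],
    'V3': [('V1', 20), ('V4', 22)],
    'V4': [('V1', 10), ('V2', 50), ('V3', 22)],
}
mst, total = prim_mst(graph, 'V1')
print(mst, "total cost:", total)  # total cost: 65
```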

Session 8: Implementation of Binomial Coefficient Problem


Introduction
 The binomial coefficient is a part of permutations and combinations.
 Computing binomial coefficients is not an optimization problem, but it can be solved using dynamic
programming.
 The binomial coefficient is the coefficient in the Binomial Theorem, which is an arithmetic expansion.
 It is denoted C(n, k) and is equal to n! / (k! × (n − k)!), where ! denotes factorial.

C(n, k) gives the number of combinations for choosing k elements from an n-element set. We can compute C(n, k) for any n
and k using the following recurrence relation:

C(n, k) = 1, if k = 0 or k = n
C(n, k) = C(n − 1, k − 1) + C(n − 1, k), for n > k > 0

 It is possible to compute the binomial coefficient in O(n × k) time using dynamic programming.

Objectives
The main objectives of this section are to:
 understand the application of binomial coefficient
 define binomial coefficient recursively
 solve the binomial coefficient problem through divide and conquer and dynamic programming
techniques
 do performance analysis of both techniques on different problem instances for large and small values
of n and k

Problems for Implementation


Q.1. Implement a binomial coefficient problem using Divide and Conquer technique.
Ans.
 The binomial coefficient problem can be efficiently solved using a divide-and-conquer approach.
 The binomial coefficient, often denoted as C(n, k), represents the number of ways to choose k
elements from a set of n elements without regard to the order.

 Here are the algorithmic steps for computing the binomial coefficient using the divide-and-conquer
technique:

 Step 1: Base Case:


 If k=0 or k=n, return 1 since there is only one way to choose 0 or n elements from a set.

 Step 2: Recursive Case:


 If k is neither 0 nor n, recursively compute the binomial coefficient using the following formula:
C(n, k) = C(n − 1, k − 1) + C(n − 1, k)

 Step 3: Combine:
 Sum the results obtained from the two recursive calls in the recursive case.

 Step 4 : Return:
 Return the final result as the binomial coefficient C(n,k).

 Divide and conquer splits the problem C(n, k) into two subproblems, C(n − 1, k − 1) and C(n − 1, k).
 The solution of the larger problem is built by adding the solutions of these two subproblems.
 The structure of the binomial coefficient problem using the divide and conquer approach is described as:

C(n, k) = C(n − 1, k − 1) + C(n − 1, k), for n > k > 0

C(n, 0) = C(n, n) = 1

 Recursive algorithm of solving binomial coefficient problem using divide and conquer approach is
described below :

Algorithm BINOMIAL_DC (n, k)

// n is the total number of items
// k is the number of items to be selected from n

If k == 0 or k == n then
    Return 1
Else
    Return BINOMIAL_DC(n – 1, k – 1) + BINOMIAL_DC(n – 1, k)
End

 A recursion tree would show how the problem is subdivided and how the solutions of the
subproblems are merged to build the final solution.

 [Recursion tree figure not reproduced]

 As the tree would show, the divide and conquer version does a lot of rework, recomputing the same
subproblems because the recursive calls are treated as independent.
 In dynamic programming, by contrast, the subproblems are recognized as overlapping, and previously
calculated results are stored and reused.
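Both approaches can be compared with a short sketch (illustrative; the function names are assumptions):

```python
def binomial_dc(n, k):
    """Plain divide and conquer: exponential time, since subproblems are recomputed."""
    if k == 0 or k == n:
        return 1
    return binomial_dc(n - 1, k - 1) + binomial_dc(n - 1, k)

def binomial_dp(n, k):
    """Dynamic programming (Pascal's triangle): O(n * k) time, results are reused."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                              # base cases
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]  # reuse stored subresults
    return C[n][k]

print(binomial_dc(10, 5), binomial_dp(10, 5))  # 252 252
```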

Session 9: Floyd and Warshall’s Algorithm for All Pair Shortest Path
Problems
Introduction
 Unlike Dijkstra’s single-source shortest path algorithm, the Floyd-Warshall all-pairs shortest
path algorithm computes the shortest paths between every pair of vertices of a graph in one go.
 The Floyd-Warshall algorithm uses the Dynamic Programming (DP) methodology.
 Unlike greedy algorithms, which always look for local optimization, DP strives for global
optimization, which means DP does not rely only on intermediate (local) best results.

Objectives
Main objectives of this session are to:
 Implement the Floyd-Warshall algorithm using dynamic programming
 Analyse the performance of the algorithm using different graph instances (large graphs, small
graphs)

Problems for Implementation


Q.1. Apply the Floyd and Warshall’s algorithm for the following graph. Show the
matrix D5 of the graph and find out the shortest path.

[Figure: a weighted directed graph on vertices 1–5; the edge weights in the original figure include 2, 3, 4, 5, 6, 8, −5 and −6 — layout not reproduced]

Ans.
 The Floyd-Warshall algorithm is commonly used for solving the All Pairs Shortest Path (APSP)
problem.
 Here is the step-by-step procedure of the Floyd-Warshall algorithm:

 Step 1: Initialization :
 Create a matrix ‘dist’ of size V x V, where V is the number of vertices.
 Initialize it with the weights of the edges between the vertices, using ∞ where there is no direct edge.
 Set the diagonal elements to 0.
 Step 2 : Main Loop :
 For each intermediate vertex k, iterate through all pairs of source vertex i and destination vertex j.
 Step 3 : Update Distance :
 For each pair (i, j), check whether the distance from i to j through k is shorter than the current distance.
 If so, update the distance.
 Step 4 : Result :
 The final matrix ‘dist’ will contain the shortest distances between all pairs of vertices.
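The four steps above can be sketched in Python as follows; the small 4-vertex example graph is hypothetical, with `float('inf')` standing for "no direct edge":

```python
INF = float('inf')

def floyd_warshall(graph):
    n = len(graph)
    dist = [row[:] for row in graph]   # Step 1: start from the weight matrix
    for k in range(n):                 # Step 2: consider each intermediate vertex
        for i in range(n):
            for j in range(n):
                # Step 3: relax i -> k -> j if it beats the current distance
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist                        # Step 4: all-pairs shortest distances

graph = [
    [0,   5,   INF, 10],
    [INF, 0,   3,   INF],
    [INF, INF, 0,   1],
    [INF, INF, INF, 0],
]
print(floyd_warshall(graph))
```

For this instance the path 0 → 1 → 2 → 3 shortens dist[0][3] from the direct weight 10 down to 9.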

 Applying the algorithm to the above graph, the initial distance matrix D0 (with ∞ where no direct edge exists) is:

      1    2    3    4    5
  1   0    4    2    ∞   -5
  2   ∞    0    ∞    2    8
  3   ∞    5    0    ∞    ∞
  4  -3    ∞   -6    0    ∞
  5   ∞    ∞    ∞    6    0

 Updating this matrix for k = 1, 2, 3, 4, 5 in turn produces D1 through D5, the final all-pairs distance matrix.

Session 10: Chained Matrix Multiplication


Introduction
 The matrix-chain multiplication problem is to select, at every step, the pair of matrices to multiply in such a way that the overall cost of computing the product is minimal.
 If there are n matrices in the sequence, then the total number of different parenthesizations is the (n−1)th Catalan number, C(2(n−1), n−1)/n; for example, a chain of 5 matrices admits 14 orders.
 We need to find the optimal order for multiplying the n matrices.
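A sketch of the standard O(n³) dynamic program for this problem, where `p` is a hypothetical dimensions array (matrix i has dimensions p[i-1] × p[i]):

```python
def matrix_chain_order(p):
    # m[i][j] = minimum scalar multiplications needed for the product of
    # matrices i..j (1-indexed), where matrix i is p[i-1] x p[i].
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):              # increasing chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                for k in range(i, j)            # try every split point
            )
    return m[1][n]

print(matrix_chain_order([1, 2, 3, 4]))  # 18: ((A1 A2) A3) costs 6 + 12
```

For p = [1, 2, 3, 4], the order ((A1 A2) A3) needs 1·2·3 + 1·3·4 = 18 scalar multiplications, versus 32 for (A1 (A2 A3)).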

Objectives
The main objectives of the sessions are to:
 List all the different orders in which we can multiply matrices
 Implement the chained matrix multiplications using dynamic programming technique

Problems for Implementation


Q.1. List different orders for evaluating the product of A, B, C, D, E matrices.
Ans.
 Matrix multiplication is associative but not commutative, so the matrices must stay in the order A, B, C, D, E; only the parenthesization, i.e. which pairwise product is formed first, can vary.
 Every parenthesization yields the same final product, but the number of scalar multiplications required can differ greatly, which is why choosing the order matters.
 For 5 matrices there are C4 = 14 distinct parenthesizations:

 1. (((AB)C)D)E
 2. ((A(BC))D)E
 3. ((AB)(CD))E
 4. (A((BC)D))E
 5. (A(B(CD)))E
 6. ((AB)C)(DE)
 7. (A(BC))(DE)
 8. (AB)((CD)E)
 9. (AB)(C(DE))
 10. A(((BC)D)E)
 11. A((B(CD))E)
 12. A((BC)(DE))
 13. A(B((CD)E))
 14. A(B(C(DE)))
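The count of 14 orders for five matrices (the 4th Catalan number) can be confirmed with a small recursive enumerator; a sketch:

```python
def parenthesizations(chain):
    # All full parenthesizations of the chain, keeping the matrix order fixed.
    if len(chain) == 1:
        return [chain[0]]
    result = []
    for i in range(1, len(chain)):            # split into left and right parts
        for left in parenthesizations(chain[:i]):
            for right in parenthesizations(chain[i:]):
                result.append("(" + left + right + ")")
    return result

orders = parenthesizations(list("ABCDE"))
print(len(orders))  # 14
```

Each recursive call splits the chain at every possible point, which is exactly the Catalan-number recurrence.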

Session 11: Optimal Binary Search Tree


Introduction
 The optimal binary search tree problem is: given a set of keys and their search probabilities, we have to build a binary search tree such that the expected searching cost is minimum.
 Using dynamic programming we can find the optimal binary search tree without drawing each candidate tree and calculating its cost.
 Thus, dynamic programming gives a better, easier and faster method of effectively trying out all possible binary search trees and picking the best one, without drawing each subtree.

Objectives
The main objectives of this session are to:
 Determine the cost and structure of an optimal binary search tree
 Implement an optimal binary search tree and study the performance of the algorithm using different
problem instances

Problems for Implementation


Determine the cost and structure of an optimal binary search tree for a set of n = 7 keys with the following properties. Show the step-by-step process.

Q.1. Implement the optimal binary search tree algorithm on your system and study the
performance of the algorithm on different problem instances.
Ans.
 Optimal Binary Search Tree Algorithm :
1. Define the problem :
 Define the set of keys and their probabilities or frequencies.
 Define cost functions for searches and unsuccessful searches.
2. Initialize Tables :
 Create tables to store optimal cost values and root indices.
3. Fill Tables (Bottom-Up Approach) :
 Use dynamic programming to fill the tables from the solutions of smaller subproblems.
4. Reconstruct Tree :
 Based on the information stored in the tables, reconstruct the optimal binary search
tree.
5. Analysis :
 Time Complexity.
 Space Complexity.
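Steps 1 to 4 can be sketched as below, assuming successful-search frequencies only (no dummy keys for unsuccessful searches); the example frequency values are hypothetical:

```python
def optimal_bst_cost(freq):
    # cost[i][j] = minimum weighted search cost of a BST built on keys i..j
    n = len(freq)
    cost = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = freq[i]                       # single-key subtree
    for length in range(2, n + 1):                 # Step 3: bottom-up fill
        for i in range(n - length + 1):
            j = i + length - 1
            total = sum(freq[i:j + 1])             # every key moves one level deeper
            cost[i][j] = total + min(
                (cost[i][r - 1] if r > i else 0) +
                (cost[r + 1][j] if r < j else 0)
                for r in range(i, j + 1)           # try each key r as the root
            )
    return cost[0][n - 1]                          # Step 4: cost of the full tree

print(optimal_bst_cost([34, 8, 50]))  # 142
```

With these frequencies the best choice of root gives a total cost of 142. The triple nesting matches the analysis in step 5: O(n³) time and O(n²) space.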

 Performance Study :

1. Varying Key Set :


 Test the algorithm with different sets of keys to observe how it scales.
2. Varying Probabilities :
 Explore scenarios with different key probabilities to understand the impact on the tree
structure.
3. Cost Function Sensitivity :
 Modify cost functions for searches and unsuccessful searches to observe their
influence on the tree.
4. Large Data Sets :
 Evaluate performance on larger data sets to assess stability and scalability.

 Tools for Analysis :

1. Profiling :
 Use profiling tools to analyse time and space complexity.
2. Visualizations :
 Create visualizations of the constructed trees and cost tables for better understanding.
3. Benchmarking :
 Benchmark the algorithm against different problem instances and compare results.
