
Design & Analysis of Algorithm - 1 & 2

The document outlines the course on Design & Analysis of Algorithms at Parul University for the July-Dec. 2024 session, detailing prerequisites, objectives, textbooks, and reference materials. It covers fundamental concepts of algorithms, their characteristics, types, and the importance of analyzing their efficiency through time and space complexity. Additionally, it explains various methods to express algorithms and the significance of asymptotic notations in understanding algorithm performance.

Parul University

203105374: Design & Analysis of Algorithm
Session: July-Dec. 2024


Prerequisites
● Before proceeding with the subject, you should have a basic
understanding of the C programming language, a text editor,
and how to execute programs.



Course Objectives
● To understand the basic idea of a problem and find an approach to solve it.
● To analyse and improve the efficiency of existing techniques.
● To understand the basic principles of designing algorithms and to compare the
performance of an algorithm with respect to other techniques.
● An algorithm is the best method of description without describing the implementation
detail.
● An algorithm gives the designer a clear description of the requirements and the goal of
the problem. A good design can produce a good solution.



Text Books
1. Thomas H. Cormen, Charles E. Leiserson and Ronald L. Rivest,
“Introduction to Algorithms”, Prentice Hall of India.
2. E. Horowitz & S. Sahni, “Fundamentals of Computer Algorithms”.
3. Aho, Hopcroft, Ullman, “The Design and Analysis of Computer
Algorithms”, Pearson Education, 2008.



Reference Books
1. Aho, Hopcroft, Ullman, “Data Structures and Algorithms”,
Pearson Education.
2. N. Wirth, “Algorithms + Data Structures = Programs”, Prentice
Hall.
3. Jean-Paul Tremblay, Paul Sorenson, “An Introduction to Data
Structures with Applications”, TMH.
4. Richard Gilberg, Behrouz Forouzan, “Data Structures – A
Pseudocode Approach with C”, Thomson Press.



Unit-1 Introduction
● Introduction: Algorithms

● Analyzing algorithms

● Complexity of algorithms

● Growth of functions

● Performance measurements

● Sorting and order statistics

● Shell sort

● Quick sort

● Merge sort

● Heap sort

● Comparison of sorting algorithm



What is an Algorithm?

The word Algorithm means “a set of finite rules or instructions to be followed in
calculations or other problem-solving operations”,
or
“a procedure for solving a mathematical problem in a finite number of
steps that frequently involves recursive operations”.
Ex. An algorithm to add two numbers:
1. Take two number inputs
2. Add the numbers using the + operator
3. Display the result
Use of Algorithms:
1) Computer Science: from simple sorting and searching to complex tasks.
2) Mathematics: finding the optimal solution to a system of linear equations or the
shortest path in a graph.
3) Operations Research: transportation, logistics, and resource allocation.
4) Artificial Intelligence: image recognition, natural language processing, and
decision-making.
5) Data Science: extracting insights from large amounts of data in fields such as
marketing, finance, and healthcare.



What is the need for algorithms?
1. Algorithms are necessary for solving complex problems efficiently and
effectively.
2. They help to automate processes and make them more reliable, faster, and
easier to perform.
3. Algorithms also enable computers to perform tasks that would be difficult or
impossible for humans to do manually.
4. They are used in various fields such as mathematics, computer science,
engineering, finance, and many others to optimize processes, analyze data,
make predictions, and provide solutions to problems.



What are the Characteristics of an Algorithm?



Properties of an Algorithm:
1. It should terminate after a finite time.
2. It should produce at least one output.
3. It should take zero or more inputs.
4. It should be deterministic, i.e., it gives the same output for the same input.
5. Every step in the algorithm must be effective, i.e., every step should do some
work.



Types of Algorithms:
● Brute Force Algorithm
● Recursive Algorithm
● Backtracking Algorithm
● Searching Algorithm
● Sorting Algorithm
● Hashing Algorithm
● Divide and Conquer Algorithm
● Greedy Algorithm
● Dynamic Programming Algorithm
● Randomized Algorithm



Algorithm 1: Add two numbers entered by the user
● Step 1: Start
● Step 2: Declare variables num1, num2 and sum.
● Step 3: Read values num1 and num2.
● Step 4: Add num1 and num2 and assign the result to sum.
sum←num1+num2
● Step 5: Display sum
● Step 6: Stop



Python program to add two numbers

def addNum(a, b):
    return a + b

a = int(input("Enter number one "))
b = int(input("Enter number two "))
print(addNum(a, b))
Algorithm 2: Find the largest number among three numbers
● Step 1: Start
● Step 2: Declare variables a, b and c.
● Step 3: Read variables a, b and c.
● Step 4: If a > b
              If a > c
                  Display a is the largest number.
              Else
                  Display c is the largest number.
          Else
              If b > c
                  Display b is the largest number.
              Else
                  Display c is the largest number.
● Step 5: Stop
Python program to find the largest number among three numbers

def maximum(a, b, c):
    if (a >= b) and (a >= c):
        largest = a
    elif (b >= a) and (b >= c):
        largest = b
    else:
        largest = c
    return largest

a = int(input("Enter number one "))
b = int(input("Enter number two "))
c = int(input("Enter number three "))

print(maximum(a, b, c))
Analyzing algorithms
● Analysis of algorithms is the determination of the amount of time and space
resources required to execute them.
● Usually, the efficiency or running time of an algorithm is stated as a function
relating the input length to the number of steps, known as time complexity,
or to the volume of memory, known as space complexity.



The Need for Analysis
● Algorithms are often quite different from one another, though the objectives
of these algorithms are the same.
● For example, we know that a set of numbers can be sorted using different
algorithms.
● Number of comparisons performed by one algorithm may vary with others
for the same input.
● Hence, time complexity of those algorithms may differ. At the same time, we
need to calculate the memory space required by each algorithm.



How to analyze an Algorithm?
1) A Priori Analysis: “Priori” means “before”. Hence a priori analysis means checking the
algorithm before its implementation. In this, the algorithm is checked when it is
written in the form of theoretical steps. The efficiency of the algorithm is measured
by assuming that all other factors, for example processor speed, are constant and
have no effect on the implementation. It gives approximate answers for the
complexity of the program.
2) A Posteriori Analysis: “Posteriori” means “after”. Hence a posteriori analysis means
checking the algorithm after its implementation. In this, the algorithm is checked by
implementing it in a programming language and executing it. This analysis gives an
actual and real analysis report about correctness (whether for every possible input it
returns the correct output or not), space required, time consumed, etc. That is, it is
dependent on the language, the compiler and the type of hardware used.



Complexity of Algorithms
● An algorithm is defined as complex based on the amount of Space and Time it
consumes.
● The Complexity of an algorithm refers to the measure of the Time that it will need to
execute and get the expected output, and the Space it will need to store all the data
(input, temporary data and output).
● Hence these two factors define the efficiency of an algorithm.
● Time Factor: Time is measured by counting the number of key operations such as
comparisons in the sorting algorithm.
● Space Factor: Space is measured by counting the maximum memory space required
by the algorithm to run/execute.



Complexity of an algorithm can be divided into two types:
● Time Complexity: The time complexity of an algorithm refers
to the amount of time that is required by the algorithm to
execute and get the result. This can be for normal operations,
conditional if-else statements, loop statements, etc.

● Space Complexity: The space complexity of an algorithm


refers to the amount of memory required by the algorithm to
store the variables and get the result. This can be for inputs,
temporary operations, or outputs.



How to calculate Time Complexity?
● The time complexity of an algorithm is also calculated by determining the
following 2 components:
● Constant time part: Any instruction that is executed just once comes in this
part. For example, input, output, if-else, switch, arithmetic operations etc.
● Variable Time Part: Any instruction that is executed more than once, say n
times, comes in this part. For example, loops, recursion, etc.
● Therefore the time complexity of any algorithm P is T(P) = C + T_P(I), where C
is the constant time part and T_P(I) is the variable part of the algorithm, which
depends on the instance characteristic I.



Calculating Time Complexity
● Example: Consider the algorithm below for Linear Search:
Step 1: START
Step 2: Get n elements of the array in arr and the number to be searched in x
Step 3: Start from the leftmost element of arr[] and one by one compare x with
each element of arr[]
Step 4: If x matches with an element, print True.
Step 5: If x doesn’t match with any of the elements, print False.
Step 6: END
● Example: In the Linear Search algorithm above, the time complexity is
calculated as follows:
Step 1: Constant time, 1
Step 2: Constant time (taking x as input), 1
Step 3: Variable time (till the length of the array (n) or the index of the
found element), n
Step 4: Constant time, 1
Step 5: Constant time, 1
Step 6: Constant time, 1
Hence, T(P) = 5 + n, which can be written as T(n).
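As a concrete sketch (added here; the data values are illustrative, not from the slides), the linear search above can be written in C as:

#include <stdio.h>

/* returns the index of x in arr[0..n-1], or -1 if x is not present */
int linearSearch(int arr[], int n, int x)
{
    for (int i = 0; i < n; i++)      /* variable part: runs up to n times */
        if (arr[i] == x)
            return i;                /* constant-time compare and return */
    return -1;
}

int main()
{
    int arr[] = {12, 17, 25, 42};    /* illustrative data */
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("%d\n", linearSearch(arr, n, 25));   /* prints 2 */
    return 0;
}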



How to calculate Space Complexity?
● The space complexity of an algorithm is calculated by determining the following 2
components:
● Fixed Part: This refers to the space that is definitely required by the algorithm. For
example, input variables, output variables, program size, etc.
● Variable Part: This refers to the space that can be different based on the
implementation of the algorithm. For example, temporary variables, dynamic memory
allocation, recursion stack space, etc.
Therefore the space complexity S(P) of any algorithm P is S(P) = C + S_P(I), where C is
the fixed part and S_P(I) is the variable part of the algorithm, which depends on instance
characteristic I.
● Example: in the Linear Search algorithm above there are 2 variables, arr[] and x, where
arr[] is the variable part of n elements and x is the fixed part. Hence S(P) = 1 + n. So the
space complexity depends on n (the number of elements). The space also depends on the
data types of the given variables and constants, and it will be multiplied accordingly.



How to express an Algorithm?
1. Natural Language: Here we express the algorithm in natural English. It is
often hard to understand the algorithm from it.
2. Flow Chart: Here we express the algorithm as a graphical/pictorial
representation. It is easier to understand than natural language.
3. Pseudo Code: Here we express the algorithm as annotations and informative text
written in plain English, which is very similar to real code, but as it has no syntax of
any programming language it can’t be compiled or interpreted by the computer. It is
the best way to express an algorithm because it can be understood even by a layman
with some school-level programming knowledge.



Cases of Analysis
1. Worst case − Define the input for which the algorithm takes maximum time. In the
worst case we calculate the upper bound of an algorithm. Example: in linear
search, the worst case occurs when the search data is not present at all.
2. Best case − Define the input for which the algorithm takes minimum time.
In the best case we calculate the lower bound of an algorithm. Example: in linear
search, the best case occurs when the search data is present at the first location.
3. Average case − In the average case we take all random inputs, calculate the
computation time for all of them,
and then divide it by the total number of inputs:
Average case = sum of the running times over all random cases / total number of cases



Asymptotic Analysis of algorithms (Growth of function)

● Asymptotic notations are used to write the fastest and slowest possible running times
of an algorithm. These are also referred to as the 'best case' and 'worst case' scenarios
respectively.
● In asymptotic notations, we derive the complexity with respect to the size of the input
(for example, in terms of n).
● These notations are important because, without actually running the
algorithm, we can estimate its complexity.
● Asymptotic notation is a way of comparing functions that ignores constant factors and
small input sizes.

Let f(n) = an² + bn + c



Types of asymptotic notations
● There are mainly three asymptotic notations:
1) Big-O notation
2) Omega notation
3) Theta notation



Big-O Notation (O-notation)
● Big-O notation represents the upper bound of the running time of an algorithm. Thus,
it gives the worst-case complexity of an algorithm.

○ O(g(n)) = { f(n) : there exist positive constants c and n0 such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }

● The above expression can be described as a function f(n) belongs to the set O(g(n)) if
there exists a positive constant c such that it lies between 0 and cg(n), for sufficiently
large n.
● For any value of n, the running time of an algorithm does not cross the time provided
by O(g(n)).
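● As a quick illustration (an added example, not from the original slides): take f(n) = 3n + 2. Then 0 ≤ 3n + 2 ≤ 4n for all n ≥ 2, so the definition is satisfied with c = 4 and n0 = 2, and hence f(n) = O(n).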



Omega Notation (Ω-notation)
● Omega notation represents the lower bound of the running time of an algorithm. Thus,
it provides the best case complexity of an algorithm.

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that
0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }
● The above expression can be described as a function f(n) belongs to the set Ω(g(n)) if
there exists a positive constant c such that it lies above cg(n), for sufficiently large n.
● For any value of n, the minimum time required by the algorithm is given by
Omega Ω(g(n)).
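● For instance (an added example): for f(n) = 3n + 2 we have f(n) ≥ 3n ≥ 0 for all n ≥ 1, so with c = 3 and n0 = 1 the definition gives f(n) = Ω(n).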



Theta Notation (Θ-notation)
● Theta notation encloses the function from above and below. Since it represents the
upper and the lower bound of the running time of an algorithm, it is used for
analyzing the average-case complexity of an algorithm.
● For a function g(n), Θ(g(n)) is given by the relation:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that
0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }
● The above expression can be described as: a function f(n) belongs to the set Θ(g(n)) if
there exist positive constants c1 and c2 such that it can be sandwiched
between c1·g(n) and c2·g(n), for sufficiently large n.
● If a function f(n) lies anywhere in between c1·g(n) and c2·g(n) for all n ≥ n0, then f(n)
is said to be asymptotically tightly bounded.
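● Combining the two previous examples (added here): for f(n) = 3n + 2 we have 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, so with c1 = 3, c2 = 4 and n0 = 2 we get f(n) = Θ(n), an asymptotically tight bound.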



O(1): constant time
● This means that the algorithm requires the same fixed number of
steps regardless of the size of the task.
● Example:
1) A statement involving basic operations.
Here are some examples of basic operations:
• One arithmetic operation (e.g., +, *)
• One assignment (x = 5)
• One test (e.g., x == 0)
• One read (accessing an element from an array)



Example of O(1) constant time complexity

public static int sumN(int n){
    int ans = n*(n+1)/2;
    return ans;
}
CONT…
2) Sequence of statements involving basic operations.
statement 1;
statement 2;
..........
statement k;
● Time for each statement is constant and the total time is also
constant: O(1)



O(n): linear time
● This means that the algorithm requires a number of steps
proportional to the size of the task.
Examples:
1. Traversing an array.
2. Sequential/Linear search in an array.
3. Best case time complexity of Bubble sort (i.e when the elements
of array are in sorted order).



CONT…
● Basic structure is:
for (i = 0; i < N; i++)
{
    sequence of statements of O(1)
}
● The loop executes N times, so the total time is N*O(1), which is
O(N).



Example of O(n) time complexity

public static int sumN(int n){
    int count = 0;
    for (int i = 1; i <= n; i++){
        count += i;
    }
    return count;
}

Trace for n = 5:
i = 1 || count = 1
i = 2 || count = 1+2
i = 3 || count = 1+2+3
i = 4 || count = 1+2+3+4
i = 5 || count = 1+2+3+4+5
Total count = 15
O(N²): quadratic time
● The number of operations is proportional to the size of the task
squared.
Examples:
1) Worst case time complexity of Bubble, Selection and Insertion sort.
Nested loops:
for (i = 0; i < N; i++) {
    for (j = 0; j < M; j++) {
        sequence of statements of O(1)
    }
}
CONT...
The outer loop executes N times and the inner loop executes M times, so
the time complexity is O(N*M).

2) for (i = 0; i < N; i++) {
       for (j = 0; j < N; j++) {
           sequence of statements of O(1)
       }
   }
Now the time complexity is O(N^2).



CONT…
3) Let's consider nested loops where the number of iterations of the
inner loop depends on the value of the outer loop's index.
for (i = 0; i < N; i++) {
    for (j = i+1; j < N; j++) {
        sequence of statements of O(1)
    }
}



CONT…
Let us see how many iterations the inner loop has:

Value of i    Number of iterations of inner loop
0             N-1
1             N-2
...           ...
N-3           2
N-2           1
N-1           0



CONT…
● So the total number of times the “sequence of statements” within
the two loops executes is:
(N-1) + (N-2) + ... + 2 + 1 + 0
which is N*(N-1)/2,
or
(1/2)·N² - (1/2)·N
● and we can say that it is O(N²) (we can ignore the multiplicative
constant, and for large problem sizes the dominant term determines
the time complexity).
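A small C check (added here as an illustration) that counts how often the inner statements run and compares the count against the N*(N-1)/2 formula:

#include <stdio.h>

int main()
{
    int N = 10;                     /* illustrative size */
    long count = 0;
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++)
            count++;                /* stands in for the O(1) statements */
    printf("measured: %ld\n", count);           /* prints 45 for N = 10 */
    printf("formula : %d\n", N * (N - 1) / 2);  /* also 45 */
    return 0;
}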



O(log n): logarithmic time
Examples:
1. Binary search in a sorted array of n elements.

O(n log n): "n log n" time
Examples:
1. Merge Sort, Quick Sort, etc.



Time complexity when the loop variable is multiplied by 2 in each
iteration

public static int countSum(int N){
    int count = 0;
    for(int i = 1; i < N; i *= 2){
        count++;
    }
    return count;
}

Value of i at each iteration:
Iteration 1: i = 1  = 2^0 = 2^(1-1)
Iteration 2: i = 2  = 2^1 = 2^(2-1)
Iteration 3: i = 4  = 2^2 = 2^(3-1)
Iteration 4: i = 8  = 2^3 = 2^(4-1)
Iteration 5: i = 16 = 2^4 = 2^(5-1)
...
Iteration k:     i = 2^(k-1)
Iteration (k+1): i = 2^((k+1)-1) = 2^k

If the loop runs k times:
Step 1 -> loop condition: i < N
Step 2 -> after k iterations: 2^k < N
Step 3 -> take log2 on both sides: log2(2^k) < log2(N)
Step 4 -> k·log2(2) < log2(N)
Step 5 -> since log2(2) = 1, the final expression is k < log2(N)

Time Complexity = O(k) = O(log n)
● Using the "<" sign informally, we can say that the order of growth
is
O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(a^n)
where a > 1



Example:
{perform any statement S1}                    O(1)
for (i=0; i < n; i++)
{
    {perform any statement(s) S2}             O(n)
    {run through another loop n times}        O(n²)
}
● Total Execution Time: O(1) + O(n) + O(n²), therefore O(n²)



Master Method
● The Master Method is used for solving the following type of recurrence:
● T(n) = a·T(n/b) + f(n), with a ≥ 1 and b > 1 constant and f(n) a function, which can
be interpreted as follows.
● Let T(n) be defined on the non-negative integers by the recurrence
■ T(n) = a·T(n/b) + f(n)
● In the analysis of a recursive algorithm, the constants and function
take on the following significance:
● n is the size of the problem.
● a is the number of subproblems in the recursion.
● n/b is the size of each subproblem. (Here it is assumed that all subproblems are
essentially the same size.)



Cont..
● f (n) is the sum of the work done outside the recursive calls, which includes the sum
of dividing the problem and the sum of combining the solutions to the subproblems.
● It is not always possible to bound the function according to the requirement, so we
make three cases, which tell us what kind of bound we can apply to the function.



Example
● Case 1: If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then it follows that:

T(n) = Θ(n^(log_b a))
● Example:
T(n) = 8·T(n/2) + 1000·n². Apply the master theorem to it.

● Solution:
Compare T(n) = 8·T(n/2) + 1000·n² with

T(n) = a·T(n/b) + f(n)
a = 8, b = 2, f(n) = 1000·n², log_b a = log2 8 = 3

Cont…
● Put all the values in f(n) = O(n^(log_b a − ε)):
● 1000·n² = O(n^(3−ε))
● If we choose ε = 1, we get: 1000·n² = O(n^(3−1)) = O(n²)
● Since this equation holds, the first case of the master theorem applies to the given
recurrence relation, thus resulting in the conclusion:
● T(n) = Θ(n^(log_b a)). Therefore: T(n) = Θ(n³)



● Case 2: If it is true, for some constant k ≥ 0, that:
f(n) = Θ(n^(log_b a) · (log n)^k), then it follows that:
T(n) = Θ(n^(log_b a) · (log n)^(k+1))

T(n) = 2·T(n/2) + 10n. Solve the recurrence by using the master method.

Compare the given problem with T(n) = a·T(n/b) + f(n):
a = 2, b = 2, k = 0, f(n) = 10n, log_b a = log2 2 = 1
Put all the values in f(n) = Θ(n^(log_b a) · (log n)^k); we will get
10n = Θ(n¹) = Θ(n), which is true for k = 0.
Therefore: T(n) = Θ(n^(log_b a) · (log n)^(k+1))
= Θ(n log n)



Example:
● Case 3: If it is true that f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and it is
also true that a·f(n/b) ≤ c·f(n) for some constant c < 1 and large values of n, then:
T(n) = Θ(f(n))
● Example: Solve the recurrence relation:

T(n) = 2·T(n/2) + n²

● Solution: Compare the given problem with T(n) = a·T(n/b) + f(n):

a = 2, b = 2, f(n) = n², log_b a = log2 2 = 1
Put all the values in f(n) = Ω(n^(log_b a + ε)) ………. (Eq. 1)


If we insert all the values in (Eq. 1), we will get
n² = Ω(n^(1+ε)); put ε = 1, then the equality will hold:
n² = Ω(n^(1+1)) = Ω(n²)
● Now we will also check the second condition:
a·f(n/b) = 2·(n/2)² = n²/2
● If we choose c = 1/2, it is true that:
n²/2 ≤ (1/2)·n² ∀ n ≥ 1

● So it follows: T(n) = Θ(f(n))

T(n) = Θ(n²)





Shell Sort Algorithm
● It is a sorting algorithm that is an extended version of insertion sort. Shell sort has
improved the average time complexity of insertion sort.
● As similar to insertion sort, it is a comparison-based and in-place sorting algorithm.
Shell sort is efficient for medium-sized data sets.
● In insertion sort, at a time, elements can be moved ahead by one position only. To
move an element to a far-away position, many movements are required that increase
the algorithm's execution time.
● But shell sort overcomes this drawback of insertion sort. It allows the movement and
swapping of far-away elements as well.
● This algorithm first sorts the elements that are far away from each other, then it
subsequently reduces the gap between them. This gap is called the interval.



Examples
● Let the elements of the array be: {33, 31, 40, 8, 12, 17, 25, 42}
● We will use the original sequence of shell sort, i.e., N/2, N/4,....,1 as the intervals.
● In the first loop, n is equal to 8 (size of the array), so the elements are lying at the
interval of 4 (n/2 = 4). Elements will be compared and swapped if they are not in
order.
● Here, in the first loop, the element at the 0th position will be compared with the
element at the 4th position. If the 0th element is greater, it will be swapped with the
element at the 4th position. Otherwise, it remains the same. This process will continue
for the remaining elements.



● At the interval of 4, the sublists are {33, 12}, {31, 17}, {40,
25}, {8, 42}.



Now, we have to compare the values in every sub-list. After comparing, we have to swap
them if required in the original array. After comparing and swapping, the updated array
will look as follows: {12, 17, 25, 8, 33, 31, 40, 42}

● In the second loop, elements are lying at the interval of 2 (n/4 = 2), where n = 8.
● Now, we are taking the interval of 2 to sort the rest of the array. With an interval of 2,
two sublists will be generated - {12, 25, 33, 40} and {17, 8, 31, 42}.



● Now, we again have to compare the values in every sub-list. After comparing, we have
to swap them if required in the original array. After comparing and swapping, the
updated array will look as follows: {12, 8, 25, 17, 33, 31, 40, 42}

● In the third loop, elements are lying at the interval of 1 (n/8 = 1), where n = 8. At last,
we use the interval of value 1 to sort the rest of the array elements. In this step, shell
sort uses insertion sort to sort the array elements.



Insertion Sort applied: after this final pass, the array is fully sorted: {8, 12, 17, 25, 31, 33, 40, 42}



Shell Sort Algorithm
#include <stdio.h>

/* helper to swap two array elements */
void swap(int *x, int *y)
{
    int t = *x;
    *x = *y;
    *y = t;
}

/* function to implement shellSort */
void shell(int a[], int n)
{
    /* Rearrange the array elements at n/2, n/4, ..., 1 intervals */
    for (int interval = n/2; interval > 0; interval /= 2)
    {
        for (int j = interval; j < n; j++)
        {
            for (int i = j - interval; i >= 0; i -= interval)
            {
                /* stop when the pair at this gap is already in order */
                if (a[i + interval] >= a[i])
                    break;
                else
                    swap(&a[i + interval], &a[i]);
            }
        }
    }
}
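A short driver (added here for completeness; not on the original slide) using the example array from the earlier slides:

int main()
{
    int a[] = {33, 31, 40, 8, 12, 17, 25, 42};
    int n = sizeof(a) / sizeof(a[0]);
    shell(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);   /* prints 8 12 17 25 31 33 40 42 */
    return 0;
}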



Time Complexity:
● The time complexity of the above implementation of Shell sort is O(n²). In the above
implementation, the gap is reduced by half in every iteration. There are many other
ways to reduce the gap which lead to better time complexity.
● Worst Case Complexity
The worst-case complexity for shell sort is O(n²).
● Best Case Complexity
When the given array list is already sorted, the total count of comparisons for each
interval is equal to the size of the given array.
So the best case complexity is Ω(n log(n)).
● Average Case Complexity
The shell sort average case complexity depends on the interval selected by the
programmer: θ(n (log n)²).
● Average Case Complexity: O(n log n) ~ O(n^1.25)



Divide and Conquer
● Divide and Conquer is an algorithmic paradigm. A typical Divide
and Conquer algorithm solves a problem using following three
steps.
1. Divide: Break the given problem into sub-problems of the same type.
2. Conquer: Recursively solve these sub-problems
3. Combine: Appropriately combine the answers



Divide And Conquer algorithm:
DAC(a, i, j)
{
    if (small(a, i, j))
        return (Solution(a, i, j))
    else
        mid = divide(a, i, j)        // f1(n)
        b = DAC(a, i, mid)           // T(n/2)
        c = DAC(a, mid+1, j)         // T(n/2)
        d = combine(b, c)            // f2(n)
        return (d)
}
Recurrence Relation for the DAC algorithm:

T(n) = O(1)                        if n is small
T(n) = f1(n) + 2T(n/2) + f2(n)     otherwise



Cont…
● A classic example of Divide and Conquer is Merge
Sort demonstrated below. In Merge Sort, we divide array into
two halves, sort the two halves recursively, and then merge
the sorted halves.
● The following are some standard algorithms that follows
Divide and Conquer algorithm.
○ Binary Search
○ Quick sort
○ Merge Sort
○ Closest Pair of Points
● Merge Sort



Program to implement divide and conquer
#include <stdio.h>

/* Function to calculate x raised to the power y */
int power(int x, unsigned int y)
{
    if (y == 0)
        return 1;
    else if (y % 2 == 0)
        return power(x, y/2) * power(x, y/2);
    else
        return x * power(x, y/2) * power(x, y/2);
}

/* Program to test function power */
int main()
{
    int x = 2;
    unsigned int y = 3;
    printf("%d", power(x, y));
    return 0;
}
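A side note (added here): because power() above calls itself twice on the same subproblem, it still performs O(y) multiplications. Storing the half-power once brings this down to O(log y), the usual divide-and-conquer payoff; a sketch:

/* O(log y) variant: compute the half-power once and reuse it */
int powerFast(int x, unsigned int y)
{
    if (y == 0)
        return 1;
    int half = powerFast(x, y / 2);
    if (y % 2 == 0)
        return half * half;
    else
        return x * half * half;
}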



Quick Sort
● Quick Sort is a Divide and Conquer algorithm. It picks an element
as pivot and partitions the given array around the picked pivot.
● It is also called partition-exchange sort. This algorithm divides
the list into three main parts:
○ Elements less than the Pivot element
○ Pivot element(Central element)
○ Elements greater than the pivot element



Cont…
● There are many different versions of quick Sort that pick pivot in
different ways.
○ Always pick first element as pivot.
○ Always pick last element as pivot (implemented below)
○ Pick a random element as pivot.
○ Pick median as pivot.



Cont…
● Pivot element can be any element from the array, it can be the first
element, the last element or any random element. In this tutorial,
we will take the rightmost element or the last element as pivot.
● For example: In the array {52, 37, 63, 14, 17, 8, 6, 25}, we
take 25 as pivot. So after the first pass, the list will be changed like
this.
● {6 8 17 14 25 63 37 52}



How quick sort works
Following are the steps involved in quick sort algorithm:
1) After selecting an element as pivot, which is the last index of the array in
our case, we divide the array for the first time.
2) In quick sort, we call this partitioning. It is not simple breaking down of
array into 2 subarrays, but in case of partitioning, the array elements are so
positioned that all the elements smaller than the pivot will be on the left
side of the pivot and all the elements greater than the pivot will be on the
right side of it.
3) And the pivot element will be at its final sorted position.
4) The elements to the left and right of the pivot may not yet be sorted.



5) Then we pick subarrays, elements on the left of pivot and elements on the
right of pivot, and we perform partitioning on them by choosing
a pivot in the subarrays.



How does QuickSort work?



Quick Sort Algorithm
function partitionFunc(left, right, pivot)
    leftPointer = left
    rightPointer = right - 1

    while True do
        while A[++leftPointer] < pivot do
            // do nothing
        end while

        while rightPointer > 0 && A[--rightPointer] > pivot do
            // do nothing
        end while

        if (leftPointer >= rightPointer)
            break
        else
            swap(leftPointer, rightPointer)
        end if
    end while

    swap(leftPointer, right)
    return leftPointer
end function



Function to implement quick sort
/* low --> Starting index, high --> Ending index */
quickSort(arr[], low, high)
{
    if (low < high)
    {
        /* pi is partitioning index, arr[pi] is now at right place */
        pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);   // Before pi
        quickSort(arr, pi + 1, high);  // After pi
    }
}
Pseudo code for partition()
/* This function takes the last element as pivot, places the pivot element at its correct
position in the sorted array, and places all smaller (smaller than pivot) elements to the
left of the pivot and all greater elements to the right of the pivot */
partition (arr[], low, high)
{
    // pivot (element to be placed at right position)
    pivot = arr[high];
    i = (low - 1)   // Index of smaller element
    for (j = low; j <= high - 1; j++)
    {
        // If current element is smaller than the pivot
        if (arr[j] < pivot)
        {
            i++;    // increment index of smaller element
            swap arr[i] and arr[j]
        }
    }
    swap arr[i + 1] and arr[high]
    return (i + 1)
}
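A runnable C rendering of the two routines above (a sketch assembled from the pseudocode; the swap helper and driver data are added for completeness):

#include <stdio.h>

void swapInt(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* places arr[high] at its sorted position and returns that index */
int partition(int arr[], int low, int high)
{
    int pivot = arr[high];
    int i = low - 1;                 /* index of smaller element */
    for (int j = low; j <= high - 1; j++)
        if (arr[j] < pivot)
            swapInt(&arr[++i], &arr[j]);
    swapInt(&arr[i + 1], &arr[high]);
    return i + 1;
}

void quickSort(int arr[], int low, int high)
{
    if (low < high)
    {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

int main()
{
    int arr[] = {10, 80, 30, 90, 40, 50, 70};   /* array from the illustration below */
    int n = sizeof(arr) / sizeof(arr[0]);
    quickSort(arr, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);                  /* prints 10 30 40 50 70 80 90 */
    return 0;
}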
Illustration of partition() :
● arr[] = {10, 80, 30, 90, 40, 50, 70}
● Indexes: 0 1 2 3 4 5 6
● low = 0, high = 6, pivot = arr[h] = 70
● Initialize index of smaller element, i = -1
● Traverse elements from j = low to high-1
● j = 0 : Since arr[j] <= pivot, do i++ and swap(arr[i], arr[j])
● i=0
● arr[] = {10, 80, 30, 90, 40, 50, 70}
● //No change as i and j are same
● j = 1 : Since arr[j] > pivot, do nothing
// No change in i and arr[]
● j = 2 : Since arr[j] <= pivot, do i++ and swap(arr[i], arr[j])
i=1
● arr[] = {10, 30, 80, 90, 40, 50, 70} // We swap 80 and 30
● j = 3 : Since arr[j] > pivot, do nothing // No change in i and arr[]
● j = 4 : Since arr[j] <= pivot, do i++ and swap(arr[i], arr[j])
i=2
● arr[] = {10, 30, 40, 90, 80, 50, 70} // 80 and 40 Swapped
● j = 5 : Since arr[j] <= pivot, do i++ and swap arr[i] with arr[j]



● i=3
● arr[] = {10, 30, 40, 50, 80, 90, 70} // 90 and 50 Swapped
● We come out of loop because j is now equal to high-1. Finally we place pivot at
correct position by swapping arr[i+1] and arr[high] (or pivot)
● arr[] = {10, 30, 40, 50, 70, 90, 80} // 80 and 70 Swapped
● Now 70 is at its correct place. All elements smaller than 70 are before it and all
elements greater than 70 are after it.





Analysis of QuickSort
● The time taken by Quick Sort in general can be written as
follows:
T(n) = T(k) + T(n-k-1) + Θ(n)
● The first two terms are for the two recursive calls; the last term is
for the partition process. k is the number of elements which
are smaller than the pivot.



Worst Case:
The worst case occurs when the partition process always picks the
greatest or smallest element as pivot. If we consider the above partition
strategy where the last element is always picked as pivot, the worst
case would occur when the array is already sorted in increasing or
decreasing order. Following is the recurrence for the worst case:
T(n) = T(0) + T(n-1) + Θ(n)
which is equivalent to T(n) = T(n-1) + Θ(n)

The solution of the above recurrence is Θ(n²).



Best Case:
● The best case occurs when the partition process always picks the
middle element as pivot. Following is the recurrence for the best case:
T(n) = 2T(n/2) + Θ(n)

The solution of the above recurrence is Θ(n log n).



Average Case:
● To do average case analysis, we need to consider all possible
permutations of the array and calculate the time taken by every
permutation, which doesn’t look easy.
● We can get an idea of the average case by considering the case when
partition puts O(n/9) elements in one set and O(9n/10) elements in the
other set. Following is the recurrence for this case:
T(n) = T(n/9) + T(9n/10) + Θ(n)
● The solution of the above recurrence is also O(n log n).



● Is QuickSort stable?
The default implementation is not stable. However any sorting
algorithm can be made stable by considering indexes as comparison
parameter.

● Is QuickSort In-place?
As per the broad definition of in-place algorithm it qualifies as an
in-place sorting algorithm as it uses extra space only for storing
recursive function calls but not for manipulating the input.



Merge Sort
● Merge Sort is a Divide and Conquer algorithm. It divides the input array into two
halves, calls itself for the two halves and then merges the two sorted halves.
● The merge() function is used for merging two halves. merge(arr, l, m, r) is the
key process that assumes that arr[l..m] and arr[m+1..r] are sorted and merges the
two sorted sub-arrays into one.



How Merge Sort Works?
● As we have already discussed, merge sort utilizes the divide-and-conquer rule to
break the problem into sub-problems; the problem in this case being sorting a
given array.
● In merge sort, we break the given array midway, for example if the original array
had 6 elements, then merge sort will break it down into two sub-arrays
with 3 elements each.
● But breaking the original array into 2 smaller sub-arrays is not helping us in
sorting the array.
● So we will break these sub-arrays into even smaller sub-arrays, until we have
multiple sub-arrays with single element in them.



Cont…
● Now, the idea here is that an array with a single element is
already sorted, so once we break the original array into sub-
arrays which has only a single element, we have successfully
broken down our problem into base problems.
● And then we have to merge all these sorted sub-arrays, step
by step to form one single sorted array.



In merge sort we follow the following steps:
● We take a variable p and store the starting index of our array in this. And
we take another variable r and store the last index of array in it.
● Then we find the middle of the array using the formula (p + r)/2 and mark
the middle index as q, and break the array into two sub-arrays,
from p to q and from q + 1 to r index.
● Then we divide these 2 sub-arrays again, just like we divided our main
array and this continues.
● Once we have divided the main array into sub-arrays with single
elements, then we start merging the sub-arrays.



Algorithm
MERGE_SORT(arr, beg, end)

1. if beg < end


2. set mid = (beg + end)/2
3. MERGE_SORT(arr, beg, mid)
4. MERGE_SORT(arr, mid + 1, end)
5. MERGE (arr, beg, mid, end)
6. end of if
7. END MERGE_SORT



/* Function to merge the subarrays of a[] */
void merge(int a[], int beg, int mid, int end)
{
    int i, j, k;
    int n1 = mid - beg + 1;
    int n2 = end - mid;
    int LeftArray[n1], RightArray[n2];  // temporary arrays

    /* copy data to temp arrays */
    for (int i = 0; i < n1; i++)
        LeftArray[i] = a[beg + i];
    for (int j = 0; j < n2; j++)
        RightArray[j] = a[mid + 1 + j];

    i = 0;      /* initial index of first sub-array */
    j = 0;      /* initial index of second sub-array */
    k = beg;    /* initial index of merged sub-array */
    while (i < n1 && j < n2)
    {
        if (LeftArray[i] <= RightArray[j])
        {
            a[k] = LeftArray[i];
            i++;
        }
        else
        {
            a[k] = RightArray[j];
            j++;
        }
        k++;
    }
    while (i < n1)
    {
        a[k] = LeftArray[i];
        i++;
        k++;
    }
    while (j < n2)
    {
        a[k] = RightArray[j];
        j++;
        k++;
    }
}
Example
● The following diagram shows the complete merge sort
process for an example array {38, 27, 43, 3, 9, 82, 10}.
● If we take a closer look at the diagram, we can see that the
array is recursively divided in two halves till the size becomes
1.
● Once the size becomes 1, the merge processes comes into
action and starts merging arrays back till the complete array
is merged.



Implementation
void mergeSort(int a[], int p, int r)
{
    int q;
    if (p < r)
    {
        q = (p + r) / 2;
        mergeSort(a, p, q);
        mergeSort(a, q+1, r);
        merge(a, p, q, r);
    }
}
// function to merge the subarrays
void merge(int a[], int p, int q, int r)
{
    int b[5];   // temporary buffer; must be the same size as a[] (5 here)
    int i, j, k;
    k = 0;
    i = p;
    j = q + 1;
    while (i <= q && j <= r)
    {
        if (a[i] < a[j])
        {
            b[k++] = a[i++];   // same as b[k]=a[i]; k++; i++;
        }
        else
        {
            b[k++] = a[j++];
        }
    }
    while (i <= q)
    {
        b[k++] = a[i++];
    }
    while (j <= r)
    {
        b[k++] = a[j++];
    }
    for (i = r; i >= p; i--)
    {
        a[i] = b[--k];   // copying back the sorted list to a[]
    }
}
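A small driver (added here; it sticks to 5 elements to match the fixed buffer b[5] above):

#include <stdio.h>

int main()
{
    int a[] = {38, 27, 43, 3, 9};   /* first five elements of the example array */
    mergeSort(a, 0, 4);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);        /* prints 3 9 27 38 43 */
    return 0;
}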
Complexity Analysis of Merge Sort
● Merge Sort is quite fast, and has a time complexity of O(n*log n).
It is also a stable sort, which means the "equal" elements are
ordered in the same order in the sorted list.
● As we have already learned in Binary Search, whenever we
divide a number into half in every step, it can be represented using
a logarithmic function, which is log n, and the number of steps can
be represented by log n + 1 (at most).
● Also, we perform a single-step operation to find out the middle of
any sub-array, i.e. O(1).



Cont…
● And to merge the subarrays, made by dividing the original array
of n elements, a running time of O(n) will be required.
● Hence the total time for mergeSort function will become n(log n +
1), which gives us a time complexity of O(n*log n).

● Worst Case Time Complexity [ Big-O ]: O(n*log n)


● Best Case Time Complexity [Big-omega]: O(n*log n)
● Average Time Complexity [Big-theta]: O(n*log n)
● Space Complexity: O(n)



Heap Sort
● Heap sort is a comparison based sorting technique based on Binary
Heap data structure.
● It is similar to selection sort where we first find the maximum element and
place the maximum element at the end. We repeat the same process for
the remaining elements.



What is Binary Heap?
● Let us first define a Complete Binary Tree. A complete binary tree is a
binary tree in which every level, except possibly the last, is completely
filled, and all nodes are as far left as possible
● A Binary Heap is a Complete Binary Tree where items are stored in a
special order such that value in a parent node is greater(or smaller) than
the values in its two children nodes. The former is called as max heap
and the latter is called min-heap. The heap can be represented by a
binary tree or array.



● Shape Property: Heap data structure is always a Complete Binary Tree,
which means all levels of the tree are fully filled.



● Heap Property:
● All nodes are either greater than or equal to or less than or equal to each of its
children. If the parent nodes are greater than their child nodes, heap is called
a Max-Heap, and if the parent nodes are smaller than their child nodes, heap is
called Min-Heap.



Why array based representation for Binary Heap?
● Since a Binary Heap is a Complete Binary Tree, it can be easily
represented as an array and the array-based representation is space-
efficient.
● If the parent node is stored at index I, the left child can be calculated by 2
* I + 1 and right child by 2 * I + 2 (assuming the indexing starts at 0).



Heap Sort Algorithm for sorting in increasing order:
1. Build a max heap from the input data.
2. At this point, the largest item is stored at the root of the heap. Replace it with the
last item of the heap followed by reducing the size of heap by 1. Finally, heapify
the root of the tree.
3. Repeat step 2 while size of heap is greater than 1.

● How to build the heap?


Heapify procedure can be applied to a node only if its children nodes are
heapified. So the heapification must be performed in the bottom-up order.
● Let's understand with the help of an example. Input data: 4, 10, 3, 5, 1



How to "heapify" a tree
● Starting from a complete binary tree, we can modify it to become a Max-
Heap by running a function called heapify on all the non-leaf elements
of the heap.
● Since heapify uses recursion, it can be difficult to grasp. So let's first
think about how you would heapify a tree with just three elements.
heapify(array)
    Root = array[0]
    Largest = largest(array[0], array[2*0 + 1], array[2*0 + 2])
    if (Root != Largest)
        Swap(Root, Largest)



heapify



Scenario-3
● The top element isn't a max-heap but
all the sub-trees are max-heaps.
● To maintain the max-heap property
for the entire tree, we will have to
keep pushing 2 downwards until it
reaches its correct position.



Thus, to maintain the max-heap property
in a tree where both sub-trees are max-heaps,
we need to run heapify on the root
element repeatedly until it is larger than
its children or it becomes a leaf node.



Heapify Function
● void heapify(int arr[], int n, int i) { ● if (right < n && arr[right] >
● // Find largest among root, left child arr[largest])
and right child ● largest = right;
● int largest = i;
● int left = 2 * i + 1; ● // Swap and continue heapifying if
● root is not largest
int right = 2 * i + 2;
● if (largest != i) {
● swap(&arr[i], &arr[largest]);
● if (left < n && arr[left] >
arr[largest]) ● heapify(arr, n, largest);
● largest = left; ● }
● }



Build max-heap
● To build a max-heap from any tree, we can thus start heapifying each sub-tree
from the bottom up and end up with a max-heap after the function is applied to
all the elements including the root element.
● In the case of a complete tree, the first index of a non-leaf node is given by n/2 -
1. All other nodes after that are leaf-nodes and thus don't need to be heapified.
● So, we can build a maximum heap as:
// Build heap (rearrange array)
for (int i = n / 2 - 1; i >= 0; i--)
    heapify(arr, n, i);



Create array and calculate i



Steps to build max heap for heap sort



Working of Heap Sort
1. Since the tree satisfies Max-Heap property, then the largest item
is stored at the root node.
2. Swap: Remove the root element and put it at the end of the array
(the nth position). Put the last item of the tree (heap) at the vacant
place.
3. Remove: Reduce the size of the heap by 1.
4. Heapify: Heapify the root element again so that we have the
highest element at root.
5. The process is repeated until all the items of the list are sorted.



Heap Sort
// Heap sort
for (int i = n - 1; i >= 0; i--) {
    swap(&arr[0], &arr[i]);

    // Heapify root element to get highest element at root again
    heapify(arr, i, 0);
}
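Putting the fragments together, a complete runnable sketch (assembled here; the input array follows the earlier example 4, 10, 3, 5, 1):

#include <stdio.h>

void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

void heapify(int arr[], int n, int i) {
    int largest = i, left = 2 * i + 1, right = 2 * i + 2;
    if (left < n && arr[left] > arr[largest]) largest = left;
    if (right < n && arr[right] > arr[largest]) largest = right;
    if (largest != i) {
        swap(&arr[i], &arr[largest]);
        heapify(arr, n, largest);
    }
}

void heapSort(int arr[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)   /* build max heap */
        heapify(arr, n, i);
    for (int i = n - 1; i > 0; i--) {      /* extract max one by one */
        swap(&arr[0], &arr[i]);
        heapify(arr, i, 0);
    }
}

int main() {
    int arr[] = {4, 10, 3, 5, 1};
    int n = sizeof(arr) / sizeof(arr[0]);
    heapSort(arr, n);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);             /* prints 1 3 4 5 10 */
    return 0;
}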



Complexity Analysis of Heap Sort
● Worst Case Time Complexity: O(n*log n)
● Best Case Time Complexity: O(n*log n)
● Average Time Complexity: O(n*log n)
● Space Complexity : O(1)
● Heap sort is not a Stable sort, and requires a constant space for sorting a
list.
● Heap Sort is very fast and is widely used for sorting.



● Best Case Complexity - It occurs when there is no sorting required, i.e.
the array is already sorted. The best-case time complexity of heap sort
is O(n log n).

● Average Case Complexity - It occurs when the array elements are in
jumbled order, that is, not properly ascending and not properly descending.
The average case time complexity of heap sort is O(n log n).

● Worst Case Complexity - It occurs when the array elements are
required to be sorted in reverse order. That means suppose you have to
sort the array elements in ascending order, but its elements are in
descending order. The worst-case time complexity of heap sort is O(n log n).
Binary Search:
● Search a sorted array by repeatedly dividing the search interval in half.
● Begin with an interval covering the whole array.
● If the value of the search key is less than the item in the middle of the
interval, narrow the interval to the lower half. Otherwise narrow it to the
upper half.
● Repeatedly check until the value is found or the interval is empty.
● The idea of binary search is to use the information that the array is sorted
and reduce the time complexity to O(Log n).





Implementing Binary Search Algorithm
Following are the steps of implementation that we will be following:
● Step-1: Start with the middle element:
○ If the target value is equal to the middle element of the array, then return the index of the middle
element.

○ If not, then compare the middle element with the target value,

■ If the target value is greater than the number in the middle index, then pick the elements to
the right of the middle index, and start with Step 1.

■ If the target value is less than the number in the middle index, then pick the elements to
the left of the middle index, and start with Step 1.
● Step-2: When a match is found, return the index of the element matched.
● Step-3: If no match is found, then return -1



Pseudo Code
Procedure binary_search
    A ← sorted array
    n ← size of array
    x ← value to be searched

    Set lowerBound = 1
    Set upperBound = n

    while x not found
        if upperBound < lowerBound
            EXIT: x does not exist.

        set midPoint = lowerBound + (upperBound - lowerBound) / 2

        if A[midPoint] < x
            set lowerBound = midPoint + 1

        if A[midPoint] > x
            set upperBound = midPoint - 1

        if A[midPoint] = x
            EXIT: x found at location midPoint
    end while
end procedure
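An iterative C version of the pseudocode above (a sketch; it uses 0-based indexing instead of the pseudocode's 1-based bounds, and the driver data comes from the iteration example that follows):

#include <stdio.h>

/* returns the index of x in the sorted array A[0..n-1], or -1 if absent */
int binarySearch(int A[], int n, int x)
{
    int lowerBound = 0, upperBound = n - 1;
    while (lowerBound <= upperBound)
    {
        int midPoint = lowerBound + (upperBound - lowerBound) / 2;
        if (A[midPoint] < x)
            lowerBound = midPoint + 1;
        else if (A[midPoint] > x)
            upperBound = midPoint - 1;
        else
            return midPoint;            /* x found at midPoint */
    }
    return -1;                          /* x does not exist */
}

int main()
{
    int A[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
    printf("%d\n", binarySearch(A, 10, 23));   /* prints 5 */
    return 0;
}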



Time Complexity of Binary Search: O(log n)
● Let's first understand what log2(n) means.
Expression: log2(n)
● For n = 2:   log2(2^1) = 1, output = 1
● For n = 4:   log2(2^2) = 2, output = 2
● For n = 8:   log2(2^3) = 3, output = 3
● For n = 256: log2(2^8) = 8, output = 8



Complexity Analysis of Binary Search
● Finding the given element:
Now to find 23, there will be many iterations with each having steps as mentioned in the
figure above:
● Iteration 1: Array: 2, 5, 8, 12, 16, 23, 38, 56, 72, 91
○ Select the middle element. (here 16)
○ Since 23 is greater than 16, so we divide the array into two halves and consider the
sub-array after element 16.
○ Now this sub-array with the elements after 16 will be taken into next iteration.



CONT…
● Iteration 2: Array: 23, 38, 56, 72, 91
○ Select the middle element. (now 56)
○ Since 23 is smaller than 56, so we divide the array into two halves and consider the
sub-array before element 56.
○ Now this subarray with the elements before 56 will be taken into next iteration.
● Iteration 3: Array: 23, 38
○ Select the middle element. (now 23)
○ Since 23 is the middle element. So the iterations will now stop.



Calculating Time complexity:
● Let's say the iteration in Binary Search terminates after k iterations. In the above
example, it terminates after 3 iterations, so here k = 3.
● At each iteration, the array is divided by half. So let’s say the length of array at any
iteration is n
● At Iteration 1,
Length of array = n
● At Iteration 2,
Length of array = n⁄2



CONT…
● At Iteration 3,
Length of array = (n/2)/2 = n/2²
● Therefore, after Iteration k, Length of array = n/2^k
● Also, we know that after k divisions, the length of the array becomes 1.
Therefore, Length of array = n/2^k = 1 => n = 2^k
● Applying the log function on both sides:
=> log2(n) = log2(2^k)
=> log2(n) = k·log2(2)



CONT…
● As log_a(a) = 1,
therefore
=> k = log2(n)
● Hence, the time complexity of Binary Search is
O(log2(n)).



Strassen’s Matrix Multiplication
● Given two square matrices A and B of size n x n each, find their multiplication
matrix.

void multiply(int A[][N], int B[][N], int C[][N])
{
    for (int i = 0; i < N; i++)
    {
        for (int j = 0; j < N; j++)
        {
            C[i][j] = 0;
            for (int k = 0; k < N; k++)
            {
                C[i][j] += A[i][k]*B[k][j];
            }
        }
    }
}



Divide and Conquer:
Following is a simple Divide and Conquer method to multiply two square matrices:
● Divide matrices A and B into 4 sub-matrices of size N/2 x N/2 each, so that
A = [[a, b], [c, d]] and B = [[e, f], [g, h]].
● Calculate the following values recursively:

● ae + bg, af + bh, ce + dg and cf + dh.



Example
● Array A =>
  1 1 1 1
  2 2 2 2
  3 3 3 3
  2 2 2 2

● Array B =>
  1 1 1 1
  2 2 2 2
  3 3 3 3
  2 2 2 2

● Result Array =>
  8 8 8 8
  16 16 16 16
  24 24 24 24
  16 16 16 16
Complexity Analysis
● In the above method, we do 8 multiplications for matrices of size
N/2 x N/2 and 4 additions.
● Addition of two matrices takes O(N^2) time. So the time
complexity can be written as

● T(N) = 8T(N/2) + O(N^2)


● From the Master Theorem, the time complexity of the above method is
O(N^3),
● which is unfortunately the same as the naive method above.



Strassen’s Formula
● P1 = (a11 + a22) * (b11 + b22)
● P2 = (a21 + a22) * b11
● P3 = (b12 – b22) * a11
● P4 = (b21 – b11) * a22
● P5 = (a11 + a12) * b22
● P6 = (a21 – a11) * (b11 + b12)
● P7 = (a12 – a22) * (b21 + b22)

● C00 = P1 + P4 – P5 + P7
● C01 = P3 + P5
● C10 = P2 + P4
● C11 = P1 + P3 – P2 + P6
● Here, C00, C01, C10, and C11 are the elements of the 2*2 result matrix.
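A minimal C sketch (added here) of one Strassen step on 2x2 matrices, transcribing the formulas above directly; the driver values are illustrative:

#include <stdio.h>

/* one Strassen step on 2x2 matrices a[][] and b[][], result in c[][] */
void strassen2x2(int a[2][2], int b[2][2], int c[2][2])
{
    int p1 = (a[0][0] + a[1][1]) * (b[0][0] + b[1][1]);
    int p2 = (a[1][0] + a[1][1]) * b[0][0];
    int p3 = (b[0][1] - b[1][1]) * a[0][0];
    int p4 = (b[1][0] - b[0][0]) * a[1][1];
    int p5 = (a[0][0] + a[0][1]) * b[1][1];
    int p6 = (a[1][0] - a[0][0]) * (b[0][0] + b[0][1]);
    int p7 = (a[0][1] - a[1][1]) * (b[1][0] + b[1][1]);

    c[0][0] = p1 + p4 - p5 + p7;
    c[0][1] = p3 + p5;
    c[1][0] = p2 + p4;
    c[1][1] = p1 + p3 - p2 + p6;
}

int main()
{
    int a[2][2] = {{1, 2}, {3, 4}}, b[2][2] = {{5, 6}, {7, 8}}, c[2][2];
    strassen2x2(a, b, c);
    printf("%d %d\n%d %d\n", c[0][0], c[0][1], c[1][0], c[1][1]);
    /* prints 19 22 / 43 50, matching the ordinary matrix product */
    return 0;
}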



Strassen’s Formula for Multiplication



● p5+p4-p2+p6 = (a+d)*(e+h) + d*(g-e) - (a+b)*h + (b-d)*(g+h)
= (ae+de+ah+dh) + (dg-de) - (ah+bh) + (bg-dg+bh-dh)
= ae+bg
● p1+p2 = a*(f-h) + (a+b)*h
= (af-ah) + (ah+bh)
= af+bh
● p3+p4 = (c+d)*e + d*(g-e)
= (ce+de) + (dg-de)
= ce+dg
● p1+p5-p3-p7 = a*(f-h) + (a+d)*(e+h) - (c+d)*e - (a-c)*(e+f)
= (af-ah) + (ae+de+ah+dh) -(ce+de) - (ae-ce+af-cf)
= cf+dh



Time Complexity of Strassen’s Method

● Addition and subtraction of two matrices takes O(N²) time. So the time complexity
can be written as

● T(N) = 7T(N/2) + O(N²)

● From the Master Theorem, the time complexity of the above method is

● O(N^log2 7), which is approximately O(N^2.8074)



Algorithm Strass(n, x, y, z)
begin
    If n = threshold then
        compute C = x * y by conventional matrix multiplication.
    Else
        Partition x into four sub-matrices a00, a01, a10, a11.
        Partition y into four sub-matrices b00, b01, b10, b11.
        Strass(n/2, a00 + a11, b00 + b11, d1)
        Strass(n/2, a10 + a11, b00, d2)
        Strass(n/2, a00, b01 - b11, d3)
        Strass(n/2, a11, b10 - b00, d4)
        Strass(n/2, a00 + a01, b11, d5)
        Strass(n/2, a10 - a00, b00 + b01, d6)
        Strass(n/2, a01 - a11, b10 + b11, d7)

        C = | d1+d4-d5+d7    d3+d5       |
            | d2+d4          d1+d3-d2+d6 |
    end if
    return (C)
end.

Source: https://www.geeksforgeeks.org/



Time for a Break !



Any Doubts/Questions



Thank You
