Design & Analysis of algorithm- 1 & 2
● Analyzing algorithms
● Complexity of algorithms
● Growth of functions
● Shell sort
● Quick sort
● Merge sort
● Heap sort
An algorithm is a procedure for solving a mathematical problem in a finite number of
steps that frequently involves recursive operations.
Ex. An algorithm to add two numbers:
1. Take two number inputs
2. Add numbers using the + operator
3. Display the result
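The three steps above can be sketched in Python (the function name and inputs are illustrative):

```python
def add(a, b):
    """Step 2: add the numbers using the + operator."""
    return a + b

# Step 1: take two number inputs (hard-coded here instead of input() for clarity)
x, y = 7, 5

# Step 3: display the result
print(add(x, y))  # → 12
```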
05/28/2025 Slide No.
Uses of Algorithms:
1) Computer Science: from simple sorting and searching to complex tasks.
2) Mathematics: finding the optimal solution to a system of linear equations or the
shortest path in a graph.
3) Operations Research: Transportation, logistics, and resource allocation.
4) Artificial Intelligence: image recognition, natural language processing, and
decision-making.
5) Data Science: Extract insights from large amounts of data in fields such as
marketing, finance, and healthcare.
Analyzing algorithms
● Analysis of algorithms is the determination of the amount of time and space
resources required to execute an algorithm.
● Usually, the efficiency or running time of an algorithm is stated as a function
relating the input length to the number of steps, known as time complexity,
or volume of memory, known as space complexity.
● Asymptotic notations are used to express the fastest and slowest possible running
times of an algorithm. These are also referred to as the 'best case' and 'worst case'
scenarios, respectively.
● In asymptotic notation, we derive the complexity with respect to the size of the input
(for example, in terms of n).
● These notations are important because we can estimate the complexity of an
algorithm without incurring the cost of actually running it.
● Asymptotic notation is a way of comparing functions that ignores constant factors and
small input sizes.
● Formally, a function f(n) belongs to the set O(g(n)) if there exist positive constants c
and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all sufficiently large n, i.e. for all n ≥ n₀.
● For any value of n, the running time of an algorithm does not cross the time provided
by O(g(n)).
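A hedged numeric illustration of the definition in Python; the functions f, g and the witnesses c, n0 below are invented for this example, not taken from the slides:

```python
def f(n):
    return 3 * n * n + 10 * n   # example running-time function

def g(n):
    return n * n                # candidate bound: claim f(n) ∈ O(g(n))

c, n0 = 4, 10                   # witnesses for the Big-O definition

# Verify 0 <= f(n) <= c*g(n) for a range of n >= n0
assert all(0 <= f(n) <= c * g(n) for n in range(n0, 10_000))
print("f(n) = O(g(n)) holds for the tested range")
```

The constants are not unique: any larger c (or n₀) also works, which is why Big-O ignores constant factors.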
Step 8 -> final expression => k < log₂ N (kth step: 2^(k−1))
● Master theorem, case 1: for T(n) = aT(n/b) + f(n), if f(n) = O(n^(log_b a − ε)) for
some constant ε > 0, then T(n) = Θ(n^(log_b a)).
● Example:
T(n) = 8T(n/2) + 1000n²; apply the master theorem on it.
● Solution:
Compare T(n) = 8T(n/2) + 1000n² with
T(n) = aT(n/b) + f(n):
a = 8, b = 2, f(n) = 1000n², log_b a = log₂ 8 = 3
Cont…
● Put all the values into the case-1 condition: f(n) = O(n^(log_b a − ε)) = O(n^(3−ε))
● 1000n² = O(n^(3−ε))
● If we choose ε = 1, we get: 1000n² = O(n^(3−1)) = O(n²)
● Since this equation holds, the first case of the master theorem applies to the given
recurrence relation, thus resulting in the conclusion:
● T(n) = Θ(n^(log_b a)); therefore: T(n) = Θ(n³)
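As a sanity check (not part of the slides), the recurrence can also be evaluated numerically: if T(n) = Θ(n³), then doubling n should multiply T(n) by roughly 2³ = 8. A Python sketch, assuming a base case T(1) = 1:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 8*T(n/2) + 1000*n^2, with T(1) = 1 (base case assumed)."""
    if n <= 1:
        return 1
    return 8 * T(n // 2) + 1000 * n * n

# If T(n) = Theta(n^3), the ratio T(2n)/T(n) should approach 8 as n grows.
for n in (2**10, 2**12, 2**14):
    print(n, round(T(2 * n) / T(n), 4))
```

The printed ratios converge toward 8, consistent with the Θ(n³) conclusion above.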
● We will use the original sequence of shell sort, i.e., N/2, N/4, ..., 1, as the intervals.
● In the first loop, n is equal to 8 (size of the array), so the elements are lying at the
interval of 4 (n/2 = 4). Elements will be compared and swapped if they are not in
order.
● Here, in the first loop, the element at the 0th position will be compared with the
element at the 4th position. If the 0th element is greater, it will be swapped with the
element at the 4th position; otherwise, it remains the same. This process continues for
the remaining elements.
● In the second loop, elements are lying at the interval of 2 (n/4 = 2), where n = 8.
● Now, we are taking the interval of 2 to sort the rest of the array. With an interval of 2,
two sublists will be generated - {12, 25, 33, 40}, and {17, 8, 31, 42}.
● In the third loop, elements are lying at the interval of 1 (n/8 = 1), where n = 8. At last,
we use the interval of value 1 to sort the rest of the array elements. In this step, shell
sort uses insertion sort to sort the array elements.
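The passes described above can be sketched as a Python shell sort using the gap sequence N/2, N/4, ..., 1. The input below is the 8-element state the slide shows at the start of the gap-2 pass (the original unsorted array is not given):

```python
def shell_sort(arr):
    """Shell sort with the original gap sequence N/2, N/4, ..., 1."""
    n = len(arr)
    gap = n // 2
    while gap > 0:
        # Gapped insertion sort: each sublist of elements `gap` apart is sorted.
        for i in range(gap, n):
            temp = arr[i]
            j = i
            while j >= gap and arr[j - gap] > temp:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = temp
        gap //= 2
    return arr

print(shell_sort([12, 17, 25, 8, 33, 31, 40, 42]))
# → [8, 12, 17, 25, 31, 33, 40, 42]
```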
● T(n) = O(1), if n is small
● T(n) = f1(n) + 2T(n/2) + f2(n), otherwise, where f1(n) is the time to divide the array
into two halves and f2(n) is the time to merge the two sorted halves.
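A minimal Python merge sort annotated with the terms of this recurrence (a sketch; note that the divide step here uses slicing, which is O(n) rather than O(1)):

```python
def merge_sort(a):
    """T(n) = f1(n) + 2*T(n/2) + f2(n): split, two recursive calls, merge."""
    if len(a) <= 1:          # O(1) when n is small
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # f1(n): splitting via slicing
    right = merge_sort(a[mid:])
    # f2(n): merging two sorted halves takes O(n) comparisons
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # → [3, 9, 10, 27, 38, 43, 82]
```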
function partition(A, left, right, pivot)
   leftPointer ← left − 1; rightPointer ← right
   while True do
      while A[++leftPointer] < pivot do
         // do nothing: scan for an element ≥ pivot
      end while
      while rightPointer > left and A[--rightPointer] > pivot do end while
      if leftPointer ≥ rightPointer then break end if
      swap A[leftPointer] and A[rightPointer]
   end while
   swap A[leftPointer] and A[right]; return leftPointer
end function
● Is QuickSort in-place?
As per the broad definition of an in-place algorithm, it qualifies as an
in-place sorting algorithm: it uses extra space only for storing
recursive function calls, not for manipulating the input.
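A hedged Python sketch of the full algorithm. For simplicity it uses the Lomuto partition (last element as pivot) rather than the two-pointer scheme of the pseudocode, and it sorts in place apart from the recursion stack:

```python
def partition(arr, low, high):
    """Lomuto partition: pivot is arr[high]; smaller elements move left."""
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort(arr, low=0, high=None):
    """In-place quicksort; extra space is only the recursion stack."""
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)
        quick_sort(arr, p + 1, high)
    return arr

print(quick_sort([10, 80, 30, 90, 40, 50, 70]))  # → [10, 30, 40, 50, 70, 80, 90]
```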
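Heap sort is built around a heapify routine that restores the max-heap property; a minimal Python sketch:

```python
def heapify(arr, n, i):
    """Sift arr[i] down so the subtree rooted at index i is a max-heap."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):   # build the max-heap
        heapify(arr, n, i)
    for i in range(n - 1, 0, -1):         # repeatedly extract the maximum
        arr[0], arr[i] = arr[i], arr[0]
        heapify(arr, i, 0)
    return arr

print(heap_sort([12, 11, 13, 5, 6, 7]))  # → [5, 6, 7, 11, 12, 13]
```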
● Step-1: Find the middle element of the sorted array and compare it with the target value.
○ If they do not match, then compare the middle element with the target value,
■ If the target value is greater than the number in the middle index, then pick the elements to
the right of the middle index, and start with Step 1.
■ If the target value is less than the number in the middle index, then pick the elements to
the left of the middle index, and start with Step 1.
● Step-2: When a match is found, return the index of the element matched.
● Step-3: If no match is found, then return -1
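The steps above can be sketched as an iterative binary search in Python (the sample data is illustrative):

```python
def binary_search(arr, target):
    """Binary search on a sorted list; returns the index of target or -1."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid            # Step-2: match found, return the index
        elif target > arr[mid]:
            low = mid + 1         # target is larger: search the right half
        else:
            high = mid - 1        # target is smaller: search the left half
    return -1                     # Step-3: no match found

data = [8, 12, 17, 25, 31, 33, 40, 42]
print(binary_search(data, 31))  # → 4
print(binary_search(data, 9))   # → -1
```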
● Array B =>
      1 1 1 1
      2 2 2 2
      3 3 3 3
      2 2 2 2
● Result Array =>
Complexity Analysis
● In the above method, we do 8 multiplications of matrices of size
N/2 x N/2 and 4 additions.
● Addition of two matrices takes O(N²) time, so the time complexity
can be written as T(N) = 8T(N/2) + O(N²), which solves to O(N³).
● In Strassen's method there are only 7 recursive multiplications; addition and
subtraction of two matrices takes O(N²) time, so the time complexity
can be written as T(N) = 7T(N/2) + O(N²), which solves to O(N^log₂7) ≈ O(N^2.81).
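A hedged Python sketch of the 8-multiplication divide-and-conquer scheme for power-of-two sizes. The matrix B is taken from the slide; multiplying by an identity matrix is an illustrative check (the slide's other operand is not shown):

```python
def mat_add(X, Y):
    """Entrywise addition of two equal-size matrices: O(N^2) work."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def split(M):
    """Split an N x N matrix (N a power of two) into four quadrants."""
    h = len(M) // 2
    return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
            [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

def dac_mul(A, B):
    """Divide-and-conquer multiply: 8 recursive multiplications and
    4 additions per level, i.e. T(N) = 8*T(N/2) + O(N^2) = O(N^3)."""
    if len(A) == 1:
        return [[A[0][0] * B[0][0]]]
    a, b, c, d = split(A)
    e, f, g, h = split(B)
    c11 = mat_add(dac_mul(a, e), dac_mul(b, g))
    c12 = mat_add(dac_mul(a, f), dac_mul(b, h))
    c21 = mat_add(dac_mul(c, e), dac_mul(d, g))
    c22 = mat_add(dac_mul(c, f), dac_mul(d, h))
    top = [r1 + r2 for r1, r2 in zip(c11, c12)]
    bottom = [r1 + r2 for r1, r2 in zip(c21, c22)]
    return top + bottom

B = [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [2, 2, 2, 2]]
I4 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(dac_mul(I4, B) == B)  # → True (identity times B returns B)
```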