DAA Lecture Handouts

Design & Analysis of Algorithms
Why We Study DAA
▪ An important, core subject of CS

▪ Underlies the systems of many business organizations

▪ Important for job placements

▪ Examples
▪ Google search engine
▪ Facebook
▪ YouTube
▪ Google Maps
Important Topics of DAA
• Asymptotic Notations
• Big Oh, Big Theta, Big Omega, Little oh, Little omega
• Time and Space complexity
• Divide and conquer Algorithms
• Binary search
• Quick sort
• Merge sort
• Sorting Algorithms
• Selection sort
• Insertion sort
• Bubble sort
Important Topics of DAA (cont..)
• Heap Trees
• Max heap
• Min heap

• Greedy Methods
• Knapsack
• Huffman encoding
• Job sequencing
• Dijkstra's algorithm
• Minimum spanning tree
• Prim's algorithm
• Kruskal's algorithm
Important Topics of DAA (cont..)
• Graph Traversal
• Depth First Search (DFS)
• Breadth First Search (BFS)
• Dynamic Programming
• Multistage graph
• Travelling Salesman Problem (TSP)
• Optimal binary search tree
• Bellman Ford Algorithm for shortest path
• Hashing
• Polynomial (P), Non-deterministic Polynomial (NP), NP-hard, NP-complete
Properties of Algorithm
• An algorithm is a finite set of precise instructions for performing a computation
or for solving a problem.

Properties of algorithms:

Input:
An algorithm has input values from a specified set.

Output:
From each set of input values an algorithm produces output values from a
specified set.
The output values are the solution to the problem.
Properties of Algorithm (cont...)
Definiteness:
The steps of an algorithm must be defined precisely.
Correctness:
An algorithm should produce the correct output values for each set of input values.
Finiteness:
An algorithm should produce the desired output after a finite number of steps for any input
in the set.
Effectiveness:
It must be possible to perform each step of an algorithm exactly and in a finite amount of
time.
Generality:
The procedure should be applicable for all problems of the desired form, not just for a
particular set of input values.
Algorithms
EXAMPLE :
Describe an algorithm for finding the maximum (largest) value in a finite
sequence of integers.

• Even though the problem of finding the maximum element in a sequence is
relatively trivial, it provides a good illustration of the concept of an algorithm.

• there are many instances where the largest integer in a finite sequence of
integers is required.
Algorithms
Solution of Example:
We perform the following steps.
1. Set the temporary maximum equal to the first integer in the sequence.

2. Compare the next integer in the sequence to the temporary maximum,


and if it is larger than the temporary maximum, set the temporary maximum
equal to this integer.

3. Repeat the previous step if there are more integers in the sequence.

4. Stop when there are no integers left in the sequence. The temporary maximum at this
point is the largest integer in the sequence.
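The four steps above can be sketched in C++ (the function name findMax is our own, not from the handout):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Scan the sequence once, keeping a temporary maximum.
int findMax(const vector<int>& a) {
    int maxSoFar = a[0];                   // step 1: temporary maximum = first integer
    for (size_t i = 1; i < a.size(); ++i)  // steps 2-3: compare each remaining integer
        if (a[i] > maxSoFar)
            maxSoFar = a[i];               // update the temporary maximum
    return maxSoFar;                       // step 4: the largest integer in the sequence
}
```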
Algorithms
Example:
• Show that Algorithm for finding the maximum element in a finite sequence of integers
has all the properties listed (properties of algorithm).
Solution:
• The input to Algorithm is a sequence of integers.

• The output is the largest integer in the sequence.

• Each step of the algorithm is precisely defined, because only assignments, a
finite loop, and conditional statements occur (definiteness).

• To show that the algorithm is correct, we must show that when the
algorithm terminates, the temporary maximum equals the largest integer in
the sequence (correctness).
Cont..
• The algorithm uses a finite number of steps, because it terminates after all the
integers in the sequence have been examined (finiteness).

• The algorithm can be carried out in a finite amount of time because each step is
either a comparison or an assignment (effectiveness).

• Defined algorithm is general, because it can be used to find the maximum of any finite
sequence of integers (Generality).
How to Analyze Algorithm
• An algorithm is a finite set of instructions to solve a problem.

• Analysis is the process of comparing two or more algorithms w.r.t. time
or space.

• Analysis can be performed in two ways:

• A priori
• A posteriori

• A priori analysis is analysis before execution of the algorithm.

• A posteriori analysis is analysis after the execution of the algorithm.
A Priori & A Posteriori Analysis
• A priori analysis is hardware independent.
• A posteriori analysis is hardware dependent.

A priori view (steps):
step 1 - Read A
step 2 - Read B
step 3 - sum = A + B
step 4 - Print(sum)

The same computation in C++:
#include <iostream>
using namespace std;

int main()
{
    int a, b, sum;
    cout << "Enter two integers: ";
    cin >> a >> b;
    sum = a + b;
    cout << a << " + " << b << " = " << sum;
    return 0;
}
Asymptotic Notations

• Generally Asymptotic notations are used for the analysis of algorithm


(analyze algorithm running time)

➢Asymptotic notations include:


▪ Big-θ (Big-Theta)
▪ Big-O (Big-oh)
▪ Big- Ω (Big- omega)
▪ little-o (small- oh)
▪ little- ω (small-omega)
Asymptotic Notations
Big-O: worst case running time of algorithm (gives upper bound)

O(g(n)) = { f(n) :
there exist positive constants c and n0, such that for all n ≥ n0,
we have 0 ≤ f(n) ≤ c·g(n) }
Asymptotic Notations
Big-Ω : best case running time of algorithm (gives lower bound)

Ω(g(n)) = { f(n) :
there exist positive constants c and n0, such that for all n ≥ n0,
we have 0 ≤ c·g(n) ≤ f(n) }
Asymptotic Notations
Big-θ (Big-Theta)

• The upper bound represents the worst case

• The lower bound represents the best case

• c1, c2, and n0 are some positive constants

• We can say that f(n) is greater than the lower bound and lower than the upper bound.
Mathematically we can write this as:

• Θ(g(n)) = { f(n) : 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }


Analysis of Algorithm
Worst case analysis:

➢In analysis of algorithm we usually concentrate on finding only worst case


running time for any input of size n because:

▪ worst case running time of an algorithm gives us an upper bound on


running time for any input.
▪ It provides a guarantee that algorithm will never take any longer.
little-o (small- oh) & little- ω (small-omega)
• Little – o is loose upper bound & little ω is loose lower bound.
Analysis of Algorithm
Time complexity graph: (figure comparing growth rates, not reproduced here)
Big O
O(g(n)) = { f(n) :
there exist positive constants c and n0, such that for all n ≥ n0,
we have 0 ≤ f(n) ≤ c·g(n) }

Example: f(n) = 2n² + n
We want f(n) ≤ c·g(n), i.e. f(n) = O(g(n)):
2n² + n ≤ c·g(?)
2n² + n ≤ c·n²  (e.g. c = 3 works for all n ≥ 1), so f(n) = O(n²)
Finding an upper bound
➢Find an upper bound of n² + 2n + 1

We know that for an upper bound, f(n) ≤ c·g(n)

n² + 2n + 1 ≤ n² + 2n² + n² = 4n²
c = 4
Note 2n ≤ 2n² and 1 ≤ n² for all n ≥ 1

Thus n² + 2n + 1 ≤ 4n² for all n ≥ 1,
so f(n) = O(n²)
Divide & Conquer

A strategy for solving problems. It has the following stages:

❑Divide: divide the problem into a small number of pieces
❑Conquer: solve each piece by applying divide and conquer
recursively
❑Combine: combine the solutions of the pieces into a solution of the
whole problem
Merge Sort

➢Merge sort is an example of the divide and conquer technique

• Divide: split A down the middle into two subsequences, each of size n/2
• Conquer: sort each subsequence by calling merge sort recursively
• Combine: merge the two sorted subsequences into a single sorted list
Merge Sort
• It uses a recursive divide and conquer approach
Merge Sort Example
7 5 2 4 1 8 3 0

7 5 2 4 | 1 8 3 0

7 5 | 2 4 | 1 8 | 3 0

7 | 5 | 2 | 4 | 1 | 8 | 3 | 0
Merge Sort Example (cont…)
7 | 5 | 2 | 4 | 1 | 8 | 3 | 0

5 7 | 2 4 | 1 8 | 0 3

2 4 5 7 | 0 1 3 8

0 1 2 3 4 5 7 8
Algorithm Merge-sort

• Merge-sort(array A, int p, int r)
•   if (p < r)
•   then
•     q ← (p + r)/2
•     Merge-sort(A, p, q)      // sort A[p…q]
•     Merge-sort(A, q+1, r)    // sort A[q+1…r]
•     Merge(A, p, q, r)
Merge Algorithm
• merge(array A, int p, int q, int r)
    int B[p…r]
    int i = p, k = p
    int j = q + 1
    while (i ≤ q) and (j ≤ r)
      do if (A[i] ≤ A[j])
        then B[k++] = A[i++]
        else B[k++] = A[j++]
    while (i ≤ q)
      do B[k++] = A[i++]
    while (j ≤ r)
      do B[k++] = A[j++]
    for i = p to r
      do A[i] ← B[i]
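The pseudocode above can be written as runnable C++ (0-based indices; names follow the slides, and the buffer B is a local vector rather than a raw array):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Combine step: merge the sorted runs A[p..q] and A[q+1..r].
void merge(vector<int>& A, int p, int q, int r) {
    vector<int> B(r - p + 1);
    int i = p, j = q + 1, k = 0;
    while (i <= q && j <= r)               // take the smaller head element
        B[k++] = (A[i] <= A[j]) ? A[i++] : A[j++];
    while (i <= q) B[k++] = A[i++];        // leftovers of the left run
    while (j <= r) B[k++] = A[j++];        // leftovers of the right run
    for (k = 0; k < (int)B.size(); ++k)    // copy back into A[p..r]
        A[p + k] = B[k];
}

void mergeSort(vector<int>& A, int p, int r) {
    if (p < r) {
        int q = (p + r) / 2;   // divide down the middle
        mergeSort(A, p, q);    // conquer the left half
        mergeSort(A, q + 1, r);// conquer the right half
        merge(A, p, q, r);     // combine the two sorted halves
    }
}
```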
Finding Merge Sort Time Complexity
• Assume we have an array of length n.
• The recursion splits it into two halves of size n/2, each costing T(n/2),
plus n work for the merge (as in the recursion-tree figure).

T(1) = 1 (base case)

If the value is greater than 1, we put it in the following equation:
T(n) = T(n/2) + T(n/2) + n
Finding Merge Sort Time Complexity (cont…)
Put the value of n in the equation T(n) = T(n/2) + T(n/2) + n:

For n = 2:
T(2) = T(1) + T(1) + 2 = 1 + 1 + 2 = 4

For n = 4:
T(4) = T(2) + T(2) + 4 = 4 + 4 + 4 = 12
Finding Merge Sort Time Complexity (cont…)
For n = 8:
T(8) = T(4) + T(4) + 8 = 12 + 12 + 8 = 32

For n = 16:
T(16) = T(8) + T(8) + 16 = 32 + 32 + 16 = 80
Finding Merge Sort Time Complexity (cont…)
T(1) = 1
T(2) = 4
T(4) = 12
T(8) = 32
T(16) = 80

➢There is no obvious input/output pattern in the above results. If we divide both sides by the input size:
T(1)/1 = 1
T(2)/2 = 2
T(4)/4 = 3
T(8)/8 = 4
T(16)/16 = 5
Finding Merge Sort Time Complexity (cont…)

If we take log2 of the input:

log2 1 = 0, and 0 + 1 = 1 = T(1)/1
log2 2 = 1, and 1 + 1 = 2 = T(2)/2
log2 4 = 2, and 2 + 1 = 3 = T(4)/4
log2 8 = 3, and 3 + 1 = 4 = T(8)/8
log2 16 = 4, and 4 + 1 = 5 = T(16)/16
.
.
In general: T(n)/n = log2 n + 1

We are only concerned with T(n), so multiply both sides by n:
n · T(n)/n = n(log2 n + 1)
T(n) = n·log2 n + n
T(n) = O(n·log2 n)
Quick Sort
• Quick sort is also a divide and conquer approach, like merge sort.
• We take the first element as the pivot, the second element as pointer "p",
and the last element as pointer "q".

40 55 20 30 80 25 95 50

• In the above case pivot = 40
• Pointer p = 55 (p moves toward the right-hand side)
• Pointer q = 50 (q moves toward the left-hand side)
40 55 20 30 80 25 95 50
pivot p                q
Quick Sort (cont…)
• p stops when it finds an element greater than the pivot.
• q stops when it finds an element less than the pivot.
40 55 20 30 80 25 95 50
pivot p q

40 55 20 30 80 25 95 50
pivot p q

40 55 20 30 80 25 95 50
pivot p q

Interchange the values at p and q:
40 25 20 30 80 55 95 50
Quick Sort (cont…)
40 25 20 30 80 55 95 50
pivot p q

40 25 20 30 80 55 95 50
pivot p q

40 25 20 30 80 55 95 50
pivot q p

When p and q cross each other, replace (swap) the pivot with the element at q:
30 25 20 40 80 55 95 50
(40 is now in its final sorted position)
Quick Sort (cont…)
After the first partition, quick sort is applied recursively to the left part
(30 25 20) and the right part (80 55 95 50), each with its own pivot, p, and q:

30 25 20 | 40 | 80 55 95 50
20 25 30 | 40 | 80 55 50 95
20 25 30 | 40 | 50 55 80 95

In this case our recurrence equation will be:

T(n) = T(n/2) + T(n/2) + n

If we solve this equation as we did for merge sort, the time
complexity will be O(n·log n) (the complexity for the best and average case).
Quick Sort (worst case complexity)
• Suppose we have a sorted array, but the algorithm does not know it.
• The algorithm has to carry out its complete procedure to check.

5 10 15 20 25 30 35
pivot p           q

Here p and q cross each other immediately at the pivot, the pivot (5) is
replaced with itself and stays in its final position, and the algorithm
recurses on the remaining elements.
Quick Sort (worst case complexity) cont…
• The same happens in each pass: after placing the pivot, n-1 elements remain
to partition, then n-2, and so on down to 1.

5 10 15 20 25 30 35   (n-1 elements left)
5 10 15 20 25 30 35   (n-2 elements left)
So, our equation for the worst case will be:

T(n) = T(n-1) + n …………eq.1

Substitute n with n-1 in eq.1:
T(n-1) = T(n-2) + (n-1) ……eq.2

Put the value of T(n-1) from eq.2 in eq.1:
T(n) = T(n-2) + (n-1) + n …...eq.3

Derive the value of T(n-2) from eq.1:
T(n-2) = T(n-3) + (n-2) ……..eq.4

Put the value of T(n-2) from eq.4 in eq.3:
T(n) = T(n-3) + (n-2) + (n-1) + n
.
.
Continuing in this way (for n = 7 the remaining sizes are 6, 5, 4, 3, 2, 1, 0):
T(n) = 1 + 2 + … + n = n(n+1)/2
     = n²/2 + n/2

T(n) = O(n²)
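The partition scheme described above (first element as pivot, p scanning right for an element greater than the pivot, q scanning left for a smaller one, swapping until they cross, then exchanging the pivot with the element at q) can be sketched in C++; the function names are our own:

```cpp
#include <cassert>
#include <vector>
#include <utility>
using namespace std;

// Returns the final index of the pivot (the first element of A[lo..hi]).
int partitionFirstPivot(vector<int>& A, int lo, int hi) {
    int pivot = A[lo];
    int p = lo + 1, q = hi;
    while (true) {
        while (p <= hi && A[p] <= pivot) ++p;  // p stops at an element > pivot
        while (A[q] > pivot) --q;              // q stops at an element <= pivot
        if (p >= q) break;                     // pointers crossed
        swap(A[p], A[q]);                      // interchange the values at p and q
    }
    swap(A[lo], A[q]);                         // pivot goes to its final slot
    return q;
}

void quickSort(vector<int>& A, int lo, int hi) {
    if (lo < hi) {
        int m = partitionFirstPivot(A, lo, hi);
        quickSort(A, lo, m - 1);   // left part: elements <= pivot
        quickSort(A, m + 1, hi);   // right part: elements > pivot
    }
}
```

On the slides' example {40, 55, 20, 30, 80, 25, 95, 50}, the first partition produces 30 25 20 | 40 | 80 55 95 50, matching the trace above.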
Binary Search Algorithm
• This algorithm can be used when the list has terms occurring in order of
increasing size.

I. Compare x with the middle element.

II. If x matches with middle element, we return the mid index.

III. Else If x is greater than the mid element, then x can only lie in right half
subarray after the mid element. So we recur for right half.

IV. Else (x is smaller) recur for the left half.


Binary Search (working with example)
• Suppose we have to search for 30.
• We have x = 30

5 10 15 20 25 30 35 40 45 50   (mid = 25; 30 > 25, so recur on the right half)

30 35 40 45 50   (mid = 40; 30 < 40, so recur on the left half)

30 35   (mid = 30; match found)
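Steps I-IV can be sketched iteratively in C++ (binarySearch is our own name; it returns the index of x in the sorted vector, or -1 if x is absent):

```cpp
#include <cassert>
#include <vector>
using namespace std;

int binarySearch(const vector<int>& a, int x) {
    int lo = 0, hi = (int)a.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   // step I: middle element (overflow-safe)
        if (a[mid] == x) return mid;    // step II: match at mid
        if (x > a[mid]) lo = mid + 1;   // step III: recur on the right half
        else hi = mid - 1;              // step IV: recur on the left half
    }
    return -1;                          // not present
}
```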
Binary Search Algorithm Analysis
• First we need to write the recurrence of the binary search algorithm.

T(n) = T(n/2) + b …………1st iteration
T(n) = T(n/2^2) + 2b ……. 2nd iteration
T(n) = T(n/2^3) + 3b ……. 3rd iteration
T(n) = T(n/2^4) + 4b ……. 4th iteration
.
.
T(n) = T(n/2^i) + ib ….. i-th iteration
T(n) = T(n/2^k) + kb ….. last iteration
T(n) = T(1) + kb
T(n) = a + kb ……eq.1

We reach the base case when n = 2^k. Taking log of both sides:
log2 n = log2 2^k = k·log2 2 = k(1) = k

By putting the value of k in eq.1:
T(n) = a + b·log2 n
T(n) = O(log2 n)
Insertion sort
1. For j = 2 to n
2.   key = A[j]
3.   i = j - 1
4.   While i > 0 and A[i] > key
5.     A[i+1] = A[i]
6.     i = i - 1
7.   A[i+1] = key
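The pseudocode above in runnable C++ (0-based, so j runs from 1 to n-1):

```cpp
#include <cassert>
#include <vector>
using namespace std;

void insertionSort(vector<int>& A) {
    for (size_t j = 1; j < A.size(); ++j) {
        int key = A[j];                 // element to insert into the sorted prefix
        int i = (int)j - 1;
        while (i >= 0 && A[i] > key) {  // shift larger elements one place right
            A[i + 1] = A[i];
            --i;
        }
        A[i + 1] = key;                 // drop key into its slot
    }
}
```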
Insertion Sort Analysis
Worst case:

T(n) = O(n²)
OR
T(n) = Θ(n²)
Bubble Sort
• Sorting is ordering the elements of a list.

• Sorting puts these elements into a list in which they are in
increasing order.

• Sorting the list 7, 2, 1, 4, 5, 9
produces the list 1, 2, 4, 5, 7, 9.

• Sorting the list d, h, c, a, f (using
alphabetical order) produces the list
a, c, d, f, h.

Algorithm Bubble Sort

begin BubbleSort(list)
  for all elements of list
    if list[i] > list[i+1]
      swap(list[i], list[i+1])
    end if
  end for
  return list
end BubbleSort
Bubble Sort working
11 10 12 8 16 4
• Consider 11 as a bubble, i = bubble position
• If list[i] is greater than list[i+1], then swap (in our case 11 > 10, so we swap)
10 11 12 8 16 4

• If list[i] is not greater than list[i+1], then position i+1 is considered
the bubble (11 < 12, so 12 becomes the bubble)
10 11 12 8 16 4
• Now the bubble is 12. Since 12 > 8, we swap
10 11 8 12 16 4
Bubble Sort working
• The bubble is now 12. Since 12 is not greater than 16, 16 becomes
the bubble
10 11 8 12 16 4

• Now the bubble is 16. Since 16 > 4, we swap
10 11 8 12 4 16

➢Here our 1st pass completes. At the end of the first pass, the largest element of
the array or list is at the end of the list.
Bubble Sort working
• We continue the 2nd pass by considering 10 as the bubble; if the bubble is greater
than the next element we swap, else the next element becomes the bubble.
10 11 8 12 4 16
• In our case 10 < 11, so 11 becomes the bubble
10 11 8 12 4 16
• Now 11 > 8, so swap
10 8 11 12 4 16
• Now 11 < 12, so 12 becomes the bubble
10 8 11 12 4 16
• Now 12 > 4, so we swap
10 8 11 4 12 16

Here the 2nd pass completes, and at its end the second largest
element of the list is in its final position.
Bubble Sort Time complexity (worst case)
• In our example we had 6 elements in the array (n = 6).
• To complete the 1st pass we made 5 comparisons (n-1).
11 10 12 8 16 4

10 11 8 12 4 16
• To complete the 2nd pass we made 4 comparisons (n-2).

10 8 11 4 12 16

In general: T(n) = (n-1) + (n-2) + … + 1 = n(n-1)/2
                 = n²/2 - n/2
For the worst case we consider the highest-order term:
T(n) = O(n²)
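The pass structure above can be sketched in C++ (pass k makes n-1-k comparisons, so the total is the n(n-1)/2 derived above):

```cpp
#include <cassert>
#include <vector>
#include <utility>
using namespace std;

void bubbleSort(vector<int>& a) {
    size_t n = a.size();
    for (size_t pass = 0; pass + 1 < n; ++pass)       // n-1 passes
        for (size_t i = 0; i + 1 < n - pass; ++i)     // n-1-pass comparisons
            if (a[i] > a[i + 1])
                swap(a[i], a[i + 1]);                 // the larger "bubble" moves right
}
```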
Greedy Approach/Algorithm
• Greedy algorithms are designed to achieve an optimum solution for a given
problem.
• In the greedy algorithm approach, decisions are made from the given solution
domain.
• It follows the locally optimal choice at each stage with the intent of finding a
global optimum.
• For example, we have currency notes of Rs. 100, 50, 20, 10.
• Total Rs. 180 (here the global optimum is to collect Rs. 180)
• If we use the greedy approach, then first we pick the Rs. 100 note,
• second we pick Rs. 50,
• third we pick Rs. 20,
• fourth we pick Rs. 10.
• In this way we collect Rs. 180, total, in four steps.
Knapsack problem
• The (fractional) knapsack problem uses the greedy technique/approach to find an optimal solution.

Object:  Obj 1  Obj 2  Obj 3     Knapsack capacity M = 20
profit:  25     24     15
weight:  18     15     10

1) Greedy by profit:
Pick object 1, as its profit is 25 (weight 18). Capacity left = 2, so take 2/15 of
object 2: profit = (2/15) × 24 = 3.2.
Total = 25 + 3.2 = 28.2

2) Greedy by weight:
Pick object 3, as its weight is 10 (profit 15). Capacity left = 10, so take 10/15 of
object 2: profit = (10/15) × 24 = 16.
Total = 15 + 16 = 31

3) Greedy by both profit & weight (here we use the proportion P/W):
P/W of obj 1: 25/18 ≈ 1.39
P/W of obj 2: 24/15 = 1.6
P/W of obj 3: 15/10 = 1.5
Pick object 2, as its P/W is 1.6 (profit 24). Capacity left = 5, so take 5/10 of
object 3: profit = (5/10) × 15 = 7.5.
Total = 24 + 7.5 = 31.5
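Strategy 3 (greedy by P/W ratio) can be sketched in C++; knapsack is our own name, items are (profit, weight) pairs, and fractions of an item are allowed:

```cpp
#include <cassert>
#include <cmath>
#include <vector>
#include <algorithm>
using namespace std;

double knapsack(vector<pair<double, double>> items, double M) {
    // Sort by profit/weight ratio, best ratio first.
    sort(items.begin(), items.end(), [](auto& a, auto& b) {
        return a.first / a.second > b.first / b.second;
    });
    double profit = 0;
    for (auto& [p, w] : items) {
        if (M <= 0) break;           // knapsack full
        double take = min(w, M);     // whole item, or the fraction that fits
        profit += p * (take / w);
        M -= take;
    }
    return profit;
}
```

On the slides' instance (profits 25, 24, 15; weights 18, 15, 10; M = 20) this yields 31.5, matching strategy 3 above.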
Knapsack Algorithm Time Complexity

Knapsack Algorithm (greedy by P/W ratio):
For i = 1 to n
  calculate profit/weight                        — n times = O(n)
Sort objects in decreasing order of P/W ratio    — O(n log n)
For i = 1 to n
  if M > 0 and Wi ≤ M then
    M = M - Wi
    P = P + Pi
  else break
if M > 0 then
  P = P + Pi(M/Wi)                               — n times = O(n)

Overall: T(n) = O(n log n)
Job Sequencing Problem
• The job sequencing problem uses the greedy technique/approach to find an optimal
solution.

• Given an array of jobs where every job has a deadline and an associated
profit,
• the profit is earned only if the job is finished before its deadline.

• It is also given that every job takes a single unit of time, so the minimum
possible deadline for any job is 1.

• How do we maximize total profit if only one job can be scheduled at a time?

• It uses a non-preemptive method:

• once one job starts, the next job has to wait for its completion.
Job Sequencing Problem (Example)

Job:      Job 1  Job 2  Job 3  Job 4
Profit:   60     25     15     30
Deadline: 2      1      2      1

Job sequencing Algorithm:
• Arrange all jobs in decreasing order of profit — O(n log n)
• For each job (m jobs in total), do a linear search to find a free slot in an
array of size n, where n = maximum deadline
• So we search the array m × n times. If deadlines repeat and the number of
jobs equals the maximum deadline, this is n × n = n²

Working (slots 1 and 2; a job may occupy any free slot at or before its deadline):
• Scheduling Job 1 in slot 2 and Job 3 in slot 1 gives profit = 60 + 15 = 75,
• but scheduling Job 1 in slot 2 and Job 4 in slot 1 gives profit = 60 + 30 = 90,
which the greedy order (by profit) finds.

T(n) = O(n²)
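The greedy scheme above can be sketched in C++ (sort by profit, then place each job in the latest free slot at or before its deadline; maxProfit is our own name):

```cpp
#include <cassert>
#include <vector>
#include <algorithm>
using namespace std;

struct SeqJob { int profit, deadline; };

int maxProfit(vector<SeqJob> jobs) {
    sort(jobs.begin(), jobs.end(),
         [](const SeqJob& a, const SeqJob& b) { return a.profit > b.profit; });
    int maxDeadline = 0;
    for (auto& j : jobs) maxDeadline = max(maxDeadline, j.deadline);
    vector<bool> slot(maxDeadline + 1, false);   // slot[t] = time unit t occupied
    int total = 0;
    for (auto& j : jobs)
        for (int t = j.deadline; t >= 1; --t)    // linear search for a free slot
            if (!slot[t]) { slot[t] = true; total += j.profit; break; }
    return total;
}
```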
Huffman Coding
• Huffman coding is a greedy approach.

• It is a data compression method.

• It is used for lossless data compression.

• It assigns codes to characters such that the length of the code depends on
the relative frequency or weight of the corresponding character.

• A character with higher frequency gets a shorter code, and a character with
lower frequency gets a longer code.
Why we need Huffman coding?
• We will understand the need of Huffman coding with example.

• First we discuss ASCII (American Standard Code for Information


Interchange) coding.

• There are total 128 characters in standard ASCII coding.

• We can represent them from 0-127

• If we have 128 characters, it means we need a minimum of 7 bits to

represent a single character (2^7 = 128)
No. of bits require to represent message using ASCII coding.

• Suppose we have message of 100 characters. (M=100)


• In our message frequency of characters are following
• A= 40
• B=30
• C= 15
• D= 10
• E= 2
• F= 3
• Total message= 100 characters
• As we said, we need 7 bits to represent 1 character in ASCII coding. So,
for this particular message we need 100 × 7 = 700 bits.
Huffman Coding Algorithm Working
Total message length = 100 characters, with frequencies
A = 40, B = 30, C = 15, D = 10, F = 3, E = 2.

Building the Huffman tree (repeatedly merging the two lowest-frequency nodes,
labelling left edges 0 and right edges 1) gives the number of bits required for
each character:
A = 0     → 1 bit  → 40 × 1 = 40
B = 10    → 2 bits → 30 × 2 = 60
C = 110   → 3 bits → 15 × 3 = 45
D = 1111  → 4 bits → 10 × 4 = 40
E = 11100 → 5 bits → 2 × 5 = 10
F = 11101 → 5 bits → 3 × 5 = 15

Total bits required to represent the 100-character message = 210
(compared with 700 bits using ASCII).

T(n) = O(n log n)

Huffman coding requires less storage and less transmission time
(transmission time = message size / bandwidth).
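The greedy merge step can be sketched in C++. This is a minimal sketch assuming only the character frequencies above; huffmanBits is our own name, and it computes only the total encoded length (the weighted path length of the tree), not the codes themselves:

```cpp
#include <cassert>
#include <queue>
#include <vector>
#include <functional>
using namespace std;

long long huffmanBits(const vector<long long>& freq) {
    // Min-heap of subtree frequencies.
    priority_queue<long long, vector<long long>, greater<long long>> pq(
        freq.begin(), freq.end());
    long long total = 0;
    while (pq.size() > 1) {
        long long a = pq.top(); pq.pop();   // two lowest-frequency nodes
        long long b = pq.top(); pq.pop();
        total += a + b;   // every symbol under this merge gains one bit
        pq.push(a + b);   // merged node goes back into the heap
    }
    return total;
}
```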


Spanning Tree
• A spanning tree of a graph G(V, E) is a subgraph of G which contains all
the vertices of G.
• We may also say that a spanning tree S is a connected subgraph of graph G
if:
a) S contains all vertices of G
b) S contains (|V| - 1) edges

(Figure: a graph G on vertices A, B, C, D, E and four of its spanning trees
S1-S4.)
Spanning Tree (continue)

• Multiple spanning trees can be constructed from a single graph.

• To find out how many spanning trees can be constructed from a single
graph, we can use the following formula:
• No. of spanning trees = n^(n-2) (where n is the number of vertices in the graph)
• The above formula can only be used for a complete graph; otherwise we use
Kirchhoff's theorem.

A complete graph is a simple undirected graph in
which every pair of distinct vertices is connected by
a unique edge. Equivalently, we can check that the
graph has V(V-1)/2 edges.
Minimum Cost Spanning Tree
• A spanning tree of a weighted graph G with minimum cost is called a 'minimum
cost spanning tree'.

(Figure: a weighted graph on vertices A, B, C, D, E with edge weights 2, 4, 5,
8, 10, and three of its spanning trees with costs 27, 21, and 19; the tree of
cost 19 is the minimum cost spanning tree.)
Prim’s Algorithm
• Prim’s algorithm is a greedy algorithm.
• It used to find the Minimum Spanning Tree (MST) from given
weighted undirected graph.
• To find out MST we apply following steps on given weighted graph.
Step 1: Remove all loops
Step 2: Remove all parallel edges between two Nodes (vertices)
• In case of parallel edges, keep the one which has the least cost (weight)
Step 3: Choose any arbitrary node as root node
Step 4: Check outgoing edges and select the one with less cost(weight)
(repeat this step for all nodes)
• There should be no cycle
Prim’s Algorithm (Example)
Find minimum cost spanning tree of given weighted graph by using
Prim’s algorithm

5
12 8 18
A B C
13
15
7 2
17 G

1 10
D E F
9 6
10
Prim’s Algorithm (Example): step 1

Step 1: Remove all loops


Step 2: Remove all parallel edges between two
Nodes (vertices)
Step 3: Choose any arbitrary node as root node
Step 4: Check outgoing edges and select the one
with less cost(weight)
Prim’s Algorithm (Example): step 2

Step 1: Remove all loops


Step 2: Remove all parallel edges between two
Nodes (vertices)
Step 3: Choose any arbitrary node as root node
Step 4: Check outgoing edges and select the one
with less cost (weight)
Prim’s Algorithm (Example): step 3

Step 1: Remove all loops


Step 2: Remove all parallel edges between two
Nodes (vertices)
Step 3: Choose any arbitrary node as root node
Step 4: Check outgoing edges and select the one
with less cost (weight)
Prim’s Algorithm (Example): step 4

Step 4: Check outgoing edges and select


the one with less cost (repeat this step for
all nodes),There should be no cycle
Prim’s Algorithm (Example): step 4 cont..
Prim’s Algorithm (Example): step 4 cont..

Min. cost = 12 + 9 + 6 + 1 + 7 + 2 + 10 = 47
Prim’s Algorithm Time Complexity

• The time complexity of the Prim’s Algorithm is O ((V + E ) log V) because


each vertex is inserted in the priority queue only once and insertion in
priority queue take logarithmic time.
Kruskal’s Algorithm
• Kruskal’s algorithm is a greedy algorithm.
• It used to find the Minimum Spanning Tree (MST) from given
weighted undirected graph.
• To find out MST we apply following steps on given weighted graph.
Step 1: Remove all loops
Step 2: Remove all parallel edges between two Nodes (vertices)
• In case of parallel edges, keep the one which has the least cost (weight)
Step 3: Arrange all edges in increasing order of weight (cost)
Step 4: Add the edges which has less weight (cost) (repeat this step for
all nodes)
• There should be no cycle
Kruskal’s Algorithm (Example)
Find minimum cost spanning tree of given weighted graph by using
Kruskal’s algorithm

5
12 8 18
A B C
13
15
7 2
17 G

1 10
D E F
9 6
10
Kruskal’s Algorithm (Example): step 1
Step 1: Remove all loops
Kruskal’s Algorithm (Example): step 2
Step 2: Remove all parallel edges between two Nodes (vertices)
• In case of parallel edges, keep the one which has the least cost
Kruskal’s Algorithm (Example): step 3
Step 3: Arrange all edges in increasing order of weight

1, 2, 6, 7, 8, 9, 10, 12, 13, 17


Kruskal’s Algorithm (Example): step 4
Step 4: Add the edges which has less weight (cost) (repeat this step for all nodes)
There should be no cycle

1, 2, 6, 7, 8, 9, 10, 12, 13, 17


Kruskal’s Algorithm (Example): step 4 cont..
1, 2, 6, 7, 8, 9, 10, 12, 13, 17
Kruskal’s Algorithm (Example): step 4 cont..
1, 2, 6, 7, 8, 9, 10, 12, 13, 17

Min. cost = 12+ 9+ 6+ 1+ 7+ 2 = 37


Kruskal’s Algorithm Time Complexity

• In Kruskal’s Algorithm, most time consuming operation is sorting


because the total complexity of the Disjoint-Set operations will be
O(E log V), which is the overall Time Complexity of the algorithm.
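Kruskal's steps can be sketched in C++ with a small union-find (Disjoint-Set) to detect cycles. The function names and the 4-vertex graph in the usage note are our own illustrative choices, not the slides' A-G graph:

```cpp
#include <cassert>
#include <vector>
#include <algorithm>
#include <numeric>
using namespace std;

struct Edge { int u, v, w; };

// Union-find root with path halving.
int findRoot(vector<int>& parent, int x) {
    while (parent[x] != x) x = parent[x] = parent[parent[x]];
    return x;
}

// Returns the total cost of the MST of an n-vertex connected graph.
int kruskalMST(int n, vector<Edge> edges) {
    sort(edges.begin(), edges.end(),         // step 3: edges in increasing weight
         [](const Edge& a, const Edge& b) { return a.w < b.w; });
    vector<int> parent(n);
    iota(parent.begin(), parent.end(), 0);   // each vertex is its own component
    int cost = 0;
    for (auto& e : edges) {
        int ru = findRoot(parent, e.u), rv = findRoot(parent, e.v);
        if (ru != rv) {      // step 4: different components, so no cycle
            parent[ru] = rv;
            cost += e.w;
        }
    }
    return cost;
}
```

For example, a square graph with edge weights 1, 2, 3, 4 has an MST of cost 1 + 2 + 3 = 6.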
BFS (Breadth First Search)

• BFS is a graph traversal algorithm.

• It traverses a graph in a breadth-ward motion and uses a queue data
structure.
• The queue data structure works on the FIFO (first in, first out) principle.

• Time complexity of BFS = O(V + E), where V is the number of vertices and
E is the number of edges.
BFS Algorithm
1. Set pointer to starting vertex
2. Visit the starting vertex and mark it as visited
3. IF vertex at which we have pointer, has adjacent unvisited vertex
Then
{
visit adjacent unvisited vertex and mark it visited, insert it in queue
}
ELSE
{
update pointer to first element of queue, remove first element from queue
}

4. Repeat step 3 until queue is empty and all nodes visited
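The steps above can be sketched in C++ using an adjacency list; bfs is our own name, and it returns the order in which vertices are visited:

```cpp
#include <cassert>
#include <vector>
#include <queue>
using namespace std;

vector<int> bfs(const vector<vector<int>>& adj, int start) {
    vector<bool> visited(adj.size(), false);
    vector<int> order;
    queue<int> q;                      // FIFO queue
    visited[start] = true;             // steps 1-2: visit the starting vertex
    q.push(start);
    while (!q.empty()) {
        int u = q.front(); q.pop();    // pointer moves to the first queue element
        order.push_back(u);
        for (int v : adj[u])
            if (!visited[v]) {         // step 3: adjacent unvisited vertices
                visited[v] = true;
                q.push(v);             // insert into the queue
            }
    }
    return order;                      // step 4 ends when the queue is empty
}
```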


BFS Example
(The slides step through the algorithm above on an example graph, showing the
queue contents — e.g. [C, B], [E, D, C], [F, E] — after each step; the graph
figure is not reproduced here.)
DFS (Depth First Search)

• The DFS algorithm traverses a graph in a depth-ward motion and uses a
stack data structure.
• The stack data structure works on the LIFO (last in, first out) principle.

• DFS is an algorithm for searching all the vertices of a graph or tree data
structure.

• Time complexity of DFS = O(V + E), where V is the number of vertices and
E is the number of edges.
DFS Algorithm
1. Push the starting vertex into stack and mark it visited
2. IF vertex at which we have pointer (stack top), has adjacent unvisited
vertex
Then
{
visit adjacent unvisited vertex
mark it visited,
push it into stack,
move pointer to top element of stack
}
ELSE
{
Pop top element of stack
}
3. Repeat step 2 until stack is empty
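The stack-based steps above can be sketched in C++; dfs is our own name, and it returns the order in which vertices are visited:

```cpp
#include <cassert>
#include <vector>
#include <stack>
using namespace std;

vector<int> dfs(const vector<vector<int>>& adj, int start) {
    vector<bool> visited(adj.size(), false);
    vector<int> order;
    stack<int> st;                   // LIFO stack
    st.push(start);                  // step 1: push the starting vertex
    visited[start] = true;
    order.push_back(start);
    while (!st.empty()) {            // step 3: repeat until the stack is empty
        int u = st.top();
        int next = -1;
        for (int v : adj[u])
            if (!visited[v]) { next = v; break; }  // adjacent unvisited vertex
        if (next != -1) {            // step 2 (then-branch): visit, push, go deeper
            visited[next] = true;
            order.push_back(next);
            st.push(next);
        } else {
            st.pop();                // step 2 (else-branch): backtrack
        }
    }
    return order;
}
```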
DFS Example
(The slides step through the algorithm above on an example graph, showing the
stack contents — e.g. [A], [A, B, D] — after each push and pop; the graph
figure is not reproduced here.)
Dijkstra’s Algorithm

• Dijkstra algorithm is greedy approach algorithm.

• It is used to find out shortest path between source node and


destination node in weighted graph.

• The worst case time complexity of Dijkstra algorithm is O (n2)


Dijkstra’s Algorithm
• Find the shortest path in given graph from vertex A to vertex G using
Dijkstra algorithm.
7

B C
10 3 8
A
5
2 G
4 6 1
10
D E F
9 6

2
Dijkstra’s Algorithm
Step 1: Remove all loops
Step 2: Remove all parallel edges between two Nodes (vertices)
In case of parallel edges, keep the one which has the least weight

By applying step 1 & 2


Dijkstra’s Algorithm
Step 3: Create a weighted matrix table
3(a): set 0 value to the source vertex & infinite value to remaining vertices Minimum ( , + )
3(b): mark smallest unmarked value in matrix table & and mark that vertex visited
3(c): find and update those vertices which are directly connected to mark (visited) vertices
For update: minimum (old destination value, marked value + edge weight) = minimum ( , )
Repeat 3(b) & 3 (c) for all vertices
visited A B C D E F G

A 0 inf inf inf inf inf inf


D 10 inf 4 inf inf inf

C 10 13 inf inf
B 10 13 12 18
E 13 12 18
F 12 14
G 14
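The update rule above (new value = minimum of the old value and the settled value plus the edge weight) can be sketched in C++ with a min-priority queue; dijkstra is our own name, and the 3-vertex graph in the test is a hypothetical example, not the slides' A-G graph:

```cpp
#include <cassert>
#include <vector>
#include <queue>
#include <climits>
using namespace std;

// adj[u] holds (neighbour, weight) pairs; returns shortest distances from src.
vector<int> dijkstra(const vector<vector<pair<int, int>>>& adj, int src) {
    vector<int> dist(adj.size(), INT_MAX);    // "infinity" everywhere...
    dist[src] = 0;                            // ...except 0 at the source
    priority_queue<pair<int, int>, vector<pair<int, int>>,
                   greater<pair<int, int>>> pq;   // (distance, vertex) min-heap
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();     // smallest unsettled value
        if (d > dist[u]) continue;            // stale queue entry
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {      // minimum(old value, d[u] + weight)
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}
```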
Dijkstra’s Algorithm (Method-2)
1. Remove all loops and parallel edges
1 2. set 0 value to the source vertex &
3
2 infinite value to remaining vertices
∞B ∞ E
3. Update the value of vertices by Edge
1
relaxation.
0A 1
We perform edge relaxation if:
3 1 2
d[u] + w(u,v) is less than d[v]
2 5
4 ∞C
∞D 3 Vertex from where
Weight of edge
Vertex at which we
3 we want to move
between u and v
reached
vertices
2
Then
d[v]= d[u] + w(u,v) else d[v] = d[v]
Perform edge relaxation until all the vertices
visited.
Binary Search Tree
• Keep in mind that there is a difference between a "Binary Tree (BT)" and a
"Binary Search Tree (BST)"

➢Common to BST and BT

• The tree always starts from the root
• Each element in the binary tree can have at most (maximum) 2 children

➢Differences between BST and BT

• A BT is unordered while a BST is ordered
• Searching, deletion, and insertion operations are slower on a BT than on a BST
Binary Search Tree (Example)
• Create a binary search tree from the given keys or elements.
Keys = 5, 3, 4, 7, 6, 9, 1

In a BST:
• the left subtree of a node contains only nodes with keys smaller
than that node's key
• the right subtree of a node contains only nodes with keys greater
than that node's key

(Resulting tree: root 5; left child 3 with children 1 and 4; right child 7
with children 6 and 9.)

Time complexity of BST search:
Worst case is O(n)
Average case is θ(log n)
Dynamic Programming
• Dynamic programming is a technique/approach for designing algorithms

• It is used to find the optimal solution

• It uses optimal substructure and overlapping sub-problems,

• where the problem is broken down into smaller recurring sub-problems (like divide and conquer)
What is the main difference between the Greedy and
Dynamic Programming approaches?

[Figure: weighted graph with vertices A–G used to contrast the two approaches]
Understanding Dynamic Programming with the Fibonacci series

• Fibonacci function: f(n) = f(n-1) + f(n-2)

• For example:
  n    = 0  1  2  3  4  5  6
  f(n) = 0  1  1  2  3  5  8

[Figure: recursion tree of f(4) — f(4) calls f(3) and f(2); f(3) calls f(2) and f(1); the sub-problem f(2) recurs, which is the overlap dynamic programming exploits]
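The recursion tree shows f(2) being recomputed. A memoized sketch stores each sub-result so every f(k) is computed only once:

```python
from functools import lru_cache

# Plain recursion recomputes overlapping sub-problems such as f(2);
# memoization caches each result, turning exponential time into linear.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n                     # base cases: f(0) = 0, f(1) = 1
    return fib(n - 1) + fib(n - 2)

print([fib(n) for n in range(7)])    # [0, 1, 1, 2, 3, 5, 8]
```

The printed values match the table above for n = 0 through 6.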
Network Flow
• A flow network is defined as a directed graph involving a source, a sink,
and several other nodes connected by edges.

• In network flow we calculate the maximum amount of flow that the network
allows from source to sink.

• The Ford-Fulkerson algorithm is one of the best-known algorithms for
calculating the maximum flow in a network.

• Ford-Fulkerson follows a greedy approach.
Ford-Fulkerson algorithm
• A source is a vertex that has only outward edges.

• A sink is a vertex that has only inward edges.

• The minimum capacity of an edge in the selected path is the bottleneck capacity.

• The bottleneck capacity of each arbitrary path is taken as that path’s flow.

• Residual capacity = current capacity of edge − flow
Ford-Fulkerson algorithm
• Find the maximum flow in the given graph using the Ford-Fulkerson algorithm.

[Figure: example flow network with vertices A, B, C, E, G and edge capacities 10, 5, 8, 4, 7, 6, 3, 6]
Ford-Fulkerson algorithm

• Choose an arbitrary path from source to sink
• Find the bottleneck capacity of the selected path (that will be our flow)
• Write down the bottleneck capacity against the capacity of each edge of the selected path
• Find the residual capacity of each of those edges

[Figure: first augmenting path, flow = 5; second augmenting path, flow = 3]

Flow = 5 + 3
The maximum flow of the given network using the
Ford-Fulkerson algorithm is 8
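The steps above can be sketched in code. This sketch uses BFS to find each augmenting path (the Edmonds-Karp variant of Ford-Fulkerson), and the network `cap` below is a hypothetical example, not the graph from the slides:

```python
from collections import deque

def max_flow(cap, s, t):
    """Ford-Fulkerson with BFS path search (Edmonds-Karp variant).
    cap[u][v] is the capacity of edge u->v; it is mutated into residuals."""
    flow = 0
    while True:
        # Step 1: choose a path from source to sink in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in cap.get(u, {}):
                if v not in parent and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                  # no augmenting path left
        # Step 2: bottleneck = minimum residual capacity along the path
        bottleneck = float('inf')
        v = t
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        # Steps 3-4: subtract the flow, record it on the reverse edges
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + bottleneck
            v = u
        flow += bottleneck

# Hypothetical network: nested dicts of capacities
cap = {
    's': {'a': 10, 'b': 5},
    'a': {'b': 15, 't': 10},
    'b': {'t': 10},
    't': {},
}
print(max_flow(cap, 's', 't'))   # 15
```

Here the first augmenting path s→a→t carries 10 and the second s→b→t carries 5, so the flows add up to 15, mirroring the 5 + 3 = 8 tally in the slide example.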
Time complexity of Ford-Fulkerson Algorithm
• Choose an arbitrary/augmenting path from source to sink

• Compute the bottleneck capacity

• Update the residual capacities and the flow

Total Time Complexity = O(E * F), where E is the number of edges and F is the maximum flow
String Matching
• String matching is finding a particular string (the pattern) in a text.

Text string:
“We are studying design and analysis of algorithm course. Naive
Algorithm is used for string matching. Design and analysis of
algorithm in an important course.”

Find string: “ysi”
Naive String Matching Algorithm
1- n = length of text string T
2- m = length of pattern string P
3- For s = 0 to n-m
4-   If P[1…m] == T[s+1…s+m]
5-     Pattern found with shift ‘s’

Example 1
index: 0 1 2 3 4 5 6
T =    a b a a b c a
P =    a b c
P is found in T at s = 3

Example 2
index: 0 1 2 3 4 5 6 7 8 9
T =    0 1 0 0 0 1 1 0 0 1
P =    1 0 0
P is found in T at s = 1 & 6
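The pseudocode above translates to a short sketch; the function name is an illustrative choice. Both calls below reuse the slide’s two examples:

```python
def naive_match(T, P):
    # Slide the pattern over the text one shift at a time
    n, m = len(T), len(P)
    shifts = []
    for s in range(n - m + 1):       # n-m+1 candidate shifts
        if T[s:s + m] == P:          # up to m character comparisons
            shifts.append(s)
    return shifts

print(naive_match("abaabca", "abc"))      # [3]     (Example 1)
print(naive_match("0100011001", "100"))   # [1, 6]  (Example 2)
```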
Naive Algorithm Complexity

• The for loop from lines 3 to 5 executes n-m+1 times (we need at least m
characters left at the end), and in each iteration we do up to m
comparisons. So the total complexity is O((n-m+1) * m), i.e., O(n*m).
P, NP, NP-Complete, NP-Hard Problems
• A problem is said to be a Polynomial (P) problem if it can be solved in
polynomial time by a deterministic algorithm, i.e., in O(n^k) time, where k is a
constant.
• Examples: linear search, binary search, insertion sort, merge sort
• Polynomial problems can be solved and verified in polynomial time

• A problem for which no polynomial-time solving algorithm is known, but whose
solutions can be verified in polynomial time (equivalently, solved in polynomial
time by a non-deterministic algorithm), is an NP (non-deterministic polynomial)
problem
• Examples: TSP, Sudoku problem, scheduling problem
Cont..

• Problems that are at least as hard as every problem in NP are called NP-hard
problems; no polynomial-time algorithm is known for them, and they need not be in NP
• Example: subset sum problem

• A problem A is NP-Complete if A is itself in NP and every problem in NP can be
reduced to A in polynomial time.
• Examples: graph coloring problem, longest path problem, Hamiltonian path
problem, etc.
How to solve NP-complete Problems
➢NP-Complete problems can be tackled (approximately or in special cases) by the following approaches:

i. Approximation

ii. Randomization

iii. Parameterization

iv. Heuristic
