ADS & A Unit-4 Study Material


UNIT-4

Syllabus: Divide and conquer, Greedy method


Divide and conquer: General method, applications-Binary search, Finding Maximum and minimum, Quick
sort, Merge sort, Strassen’s matrix multiplication.
Greedy method: General method, applications-Job sequencing with deadlines, knapsack problem, Minimum
cost spanning trees, Single source shortest path problem.
Divide and Conquer Introduction

General Method:

Divide and Conquer is an algorithmic pattern. In this design approach, we take a problem on a large input, break the input into smaller pieces, solve the problem on each of the small pieces, and then merge the piecewise solutions into a global solution. This mechanism of solving the problem is called the Divide & Conquer strategy.

A Divide and Conquer algorithm consists of the following three steps.

1. Divide the original problem into a set of subproblems.


2. Conquer: Solve every subproblem individually, recursively.
3. Combine: Put together the solutions of the subproblems to get the solution to the whole problem.
Generally, we can follow the divide-and-conquer approach in a three-step process.

Applications:

Specific computer algorithms based on the Divide & Conquer approach include:

1. Maximum and Minimum Problem


2. Binary Search
3. Sorting (merge sort, quick sort)
4. Strassen’s Matrix Multiplication

Fundamental of Divide & Conquer Strategy:

There are two fundamentals of the Divide & Conquer strategy:

1. Relational Formula
2. Stopping Condition

1. Relational Formula: It is the formula we generate from the given technique. After generating the formula, we apply the D&C strategy, i.e. we break the problem recursively and solve the broken subproblems.

2. Stopping Condition: When we break the problem using the Divide & Conquer strategy, we need to know how long to keep applying it. The condition at which we stop the recursive steps of D&C is called the stopping condition.

Applications of Divide and Conquer Approach:

Following algorithms are based on the concept of the Divide and Conquer Technique:

1. Binary Search: The binary search algorithm is a searching algorithm, also called half-interval
   search or logarithmic search. It works by comparing the target value with the middle element of
   a sorted array. If the values differ, the half that cannot contain the target is eliminated,
   and the search continues on the other half. We again take the middle element of that half and
   compare it with the target value. The process keeps repeating until the target value is found.
   If the remaining half becomes empty, we conclude that the target is not present in the array.
2. Quicksort: It is one of the most efficient sorting algorithms, also known as partition-exchange
   sort. It starts by selecting a pivot value from the array and then divides the remaining
   elements into two sub-arrays. The partition is made by comparing each element with the pivot:
   elements smaller than the pivot go to one side, larger ones to the other, and the sub-arrays
   are then sorted recursively.
3. Merge Sort: It is a sorting algorithm that sorts an array by making comparisons. It starts by
   dividing the array into sub-arrays and then recursively sorts each of them. Once they are
   sorted, it merges them back together.
4. Strassen's Algorithm: It is an algorithm for matrix multiplication, named after Volker
   Strassen. It has proven to be much faster than the traditional algorithm when working on large matrices.

Advantages of Divide and Conquer

o Divide and Conquer successfully solves some classic hard problems, such as the Tower of
  Hanoi, a mathematical puzzle. It is challenging to solve complicated problems for which you have no
  basic idea, but the divide and conquer approach lessens the effort, as it divides the main problem
  into subproblems and solves them recursively. Such algorithms are often much faster than naive
  alternatives.
o It uses cache memory efficiently without occupying much space, because the small sub-
  problems can often be solved within the cache instead of accessing the slower main memory.
o It is more proficient than its counterpart, the brute-force technique.
o Since these algorithms exhibit parallelism, the independent subproblems can be handled by
  systems incorporating parallel processing without much modification.

Disadvantages of Divide and Conquer

o Since most of its algorithms are designed using recursion, it necessitates careful memory
  management.
o An explicit stack may overuse the space.
o It may even crash the system if the recursion depth exceeds the available stack space.

1. Binary Search using Divide and Conquer

Binary search is a search technique that works efficiently on sorted lists. Hence, to search for an element
in some list using the binary search technique, we must ensure that the list is sorted.

Binary search follows the divide and conquer approach: the list is divided into two halves, and the item
is compared with the middle element of the list. If a match is found, the location of the middle
element is returned. Otherwise, we search in one of the halves, depending upon the result produced
by the comparison.
NOTE: Binary search can be implemented on sorted array elements. If the list elements are not arranged in
a sorted manner, we have first to sort them.
Algorithm
Binary_Search(a, lower_bound, upper_bound, val)
// 'a' is the given array, 'lower_bound' is the index of the first array element,
// 'upper_bound' is the index of the last array element, 'val' is the value to search

Step 1: set beg = lower_bound, end = upper_bound, pos = - 1


Step 2: repeat steps 3 and 4 while beg <= end
Step 3: set mid = (beg + end)/2
Step 4: if a[mid] = val
set pos = mid
print pos
go to step 6
else if a[mid] > val
set end = mid - 1
else
set beg = mid + 1
[end of if]
[end of loop]
Step 5: if pos = -1
print "value is not present in the array"
[end of if]
Step 6: exit
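The steps above can be sketched as a small Python function (the iterative method); the sample array in the usage note is an illustrative assumption, since the document's array figure is not reproduced.

```python
def binary_search(a, val):
    """Iterative binary search on a sorted list; returns the index of
    val, or -1 when val is not present (pos = -1 in the pseudocode)."""
    beg, end = 0, len(a) - 1
    while beg <= end:
        mid = (beg + end) // 2      # middle element of the current range
        if a[mid] == val:
            return mid
        elif a[mid] > val:
            end = mid - 1           # continue in the left half
        else:
            beg = mid + 1           # continue in the right half
    return -1
```

For example, `binary_search([10, 12, 24, 29, 39, 40, 51, 56, 69], 56)` inspects the middle element first and then halves the range each step until it returns index 7.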

Working of Binary search

Now, let's see the working of the Binary Search Algorithm.

To understand the working of the Binary search algorithm, let's take a sorted array. It will be easy to
understand the working of Binary search with an example.

There are two methods to implement the binary search algorithm -

o Iterative method
o Recursive method

The recursive method of binary search follows the divide and conquer approach.

Let the elements of the array be as shown in the figure (a sorted array of nine elements, indices 0 to 8).

Let the element to search for be K = 56.

We have to use the below formula to calculate the mid of the array -

1. mid = (beg + end)/2

So, in the given array -

beg = 0

end = 8
mid = (0 + 8)/2 = 4. So, 4 is the mid of the array.

After repeating the process on the appropriate half, the element to search for is found, and the algorithm returns the index of the matched element.

Binary Search complexity

Now, let's see the time complexity of Binary search in the best case, average case, and worst case. We will
also see the space complexity of Binary search.

1. Time Complexity

Case Time Complexity


Best Case O(1)

Average Case O(log n)

Worst Case O(log n)

o Best Case Complexity - In binary search, the best case occurs when the element to be searched is
  found in the first comparison, i.e., when the first middle element itself is the element to be
  searched. The best-case time complexity of binary search is O(1).
o Average Case Complexity - The average-case time complexity of binary search is O(log n).
o Worst Case Complexity - In binary search, the worst case occurs when we have to keep reducing the
  search space until it has only one element. The worst-case time complexity of binary search is O(log n).

2. Space Complexity
Space Complexity O(1)

o The space complexity of binary search is O(1).

Quick sort

It is an algorithm of Divide & Conquer type.

Divide: Rearrange the elements and split the array into two sub-arrays around a pivot element, such that
each element in the left sub-array is less than or equal to the pivot and each element in the right sub-
array is larger than the pivot.

Conquer: Recursively sort the two sub-arrays.

Combine: Since the partitioning sorts in place, the already sorted sub-arrays need no extra work to combine.

Algorithm:
QUICKSORT (array A, int m, int n)
if (n > m)
then
i ← a random index from [m,n]
swap A [i] with A[m]
o ← PARTITION (A, m, n)
QUICKSORT (A, m, o - 1)
QUICKSORT (A, o + 1, n)
Partition Algorithm:

The partition algorithm rearranges the sub-array in place.

PARTITION (array A, int m, int n)


x ← A[m]
o←m
for p ← m + 1 to n
do if (A[p] < x)
then o ← o + 1
swap A[o] with A[p]
swap A[m] with A[o]
return o
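The QUICKSORT and PARTITION routines above translate almost line-for-line into Python; this is a sketch, with the random pivot choice taken from the pseudocode.

```python
import random

def partition(a, m, n):
    """Rearrange a[m..n] around the pivot x = a[m]; return the pivot's
    final index o (as in the PARTITION pseudocode above)."""
    x = a[m]
    o = m
    for p in range(m + 1, n + 1):
        if a[p] < x:
            o += 1
            a[o], a[p] = a[p], a[o]
    a[m], a[o] = a[o], a[m]
    return o

def quicksort(a, m, n):
    """Sort a[m..n] in place."""
    if n > m:
        i = random.randint(m, n)    # a random index from [m, n]
        a[i], a[m] = a[m], a[i]
        o = partition(a, m, n)
        quicksort(a, m, o - 1)
        quicksort(a, o + 1, n)
```

Calling `quicksort(arr, 0, len(arr) - 1)` on the example list 44 33 11 55 77 90 40 60 99 22 88 sorts it in place.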

Rules: Like Merge Sort, QuickSort is a Divide and Conquer algorithm. It picks an element as pivot and
partitions the given array around the picked pivot. There are many different versions of quickSort that pick
pivot in different ways.
1. Always pick first element as pivot.
2. Always pick last element as pivot (implemented below)
3. Pick a random element as pivot.
4. Pick median as pivot.
Example of Quick Sort:
1. 44 33 11 55 77 90 40 60 99 22 88

Let 44 be the Pivot element and scanning done from right to left

Compare 44 with the elements on its right; if a right-side element is smaller than 44, swap them. As 22 is
smaller than 44, swap them.

22 33 11 55 77 90 40 60 99 44 88

Now compare 44 with the elements on its left; a left-side element must be swapped if it is greater than 44.
As 55 is greater than 44, swap them.

22 33 11 44 77 90 40 60 99 55 88

Recursively repeat steps 1 and 2 until we get two lists: one to the left of the pivot element 44 and one to
its right.

22 33 11 40 77 90 44 60 99 55 88

Swap with 77:

22 33 11 40 44 90 77 60 99 55 88
Now, the elements on the right side of 44 are greater than it, and the elements on the left side are smaller than it.

Now we get two sublists, one on each side of the pivot.

These sublists are sorted by the same process as above.

Placing the two sorted sublists side by side, with the pivot between them, and merging them gives the final sorted list.

2. Merge sort:

Merge sort is similar to the quick sort algorithm in that it uses the divide and conquer approach to sort the
elements. It is one of the most popular and efficient sorting algorithms. It divides the given list into two equal
halves, calls itself on the two halves and then merges the two sorted halves. We have to define
the merge() function to perform the merging.
The sub-lists are divided again and again into halves until each list has a single element and cannot be
divided further. Then we combine pairs of one-element lists into two-element lists, sorting them in the
process. The sorted two-element pairs are merged into four-element lists, and so on, until we get the fully
sorted list.

Now, let's see the algorithm of merge sort.

Algorithm

In the following algorithm, arr is the given array, beg is the starting element, and end is the last element of
the array.

MERGE_SORT(arr, beg, end)


if beg < end
set mid = (beg + end)/2
MERGE_SORT(arr, beg, mid)
MERGE_SORT(arr, mid + 1, end)
MERGE (arr, beg, mid, end)
end of if
END MERGE_SORT

The important part of the merge sort is the MERGE function. This function performs the merging of two
sorted sub-arrays that are A[beg…mid] and A[mid+1…end], to build one sorted array A[beg…end]. So, the
inputs of the MERGE function are A[], beg, mid, and end.
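A minimal Python sketch of MERGE_SORT and the MERGE function described above:

```python
def merge(arr, beg, mid, end):
    """Merge the sorted runs arr[beg..mid] and arr[mid+1..end]
    into one sorted run arr[beg..end]."""
    left = arr[beg:mid + 1]
    right = arr[mid + 1:end + 1]
    i = j = 0
    k = beg
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            arr[k] = left[i]
            i += 1
        else:
            arr[k] = right[j]
            j += 1
        k += 1
    # copy whatever remains of either run
    arr[k:k + len(left) - i] = left[i:]
    k += len(left) - i
    arr[k:k + len(right) - j] = right[j:]

def merge_sort(arr, beg, end):
    if beg < end:
        mid = (beg + end) // 2
        merge_sort(arr, beg, mid)
        merge_sort(arr, mid + 1, end)
        merge(arr, beg, mid, end)
```

`merge_sort(arr, 0, len(arr) - 1)` sorts the whole array in place.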

Working of Merge sort Algorithm

Now, let's see the working of merge sort Algorithm.

To understand the working of the merge sort algorithm, let's take an unsorted array. It will be easier to
understand the merge sort via an example.

Let the elements of the array be: 12, 31, 25, 8, 32, 17, 40, 42.

According to the merge sort, first divide the given array into two equal halves. Merge sort keeps dividing the
list into equal parts until it cannot be further divided.

As there are eight elements in the given array, so it is divided into two arrays of size 4.
Now, again divide these two arrays into halves. As they are of size 4, so divide them into new arrays of size 2.

Now, again divide these arrays to get the atomic value that cannot be further divided.

Now, combine them in the same manner they were broken.

In combining, first compare the element of each array and then combine them into another array in sorted
order.

So, first compare 12 and 31, both are in sorted positions. Then compare 25 and 8, and in the list of two
values, put 8 first followed by 25. Then compare 32 and 17, sort them and put 17 first followed by 32. After
that, compare 40 and 42, and place them sequentially.

In the next iteration of combining, we compare the arrays containing two data values each and merge them
into arrays of four values in sorted order.

Now, there is a final merging of the arrays. After the final merging of above arrays, the array will look like -
Now, the array is completely sorted.

3. Strassen’s Matrix Multiplication


Divide and Conquer
Following is a simple divide and conquer method to multiply two square matrices.
1) Divide matrices A and B into 4 sub-matrices of size N/2 x N/2 as shown in the diagram below.
2) Calculate the following values recursively: ae + bg, af + bh, ce + dg and cf + dh.

A, B and C are square matrices of size N x N
a, b, c and d are the submatrices of A, each of size N/2 x N/2
e, f, g and h are the submatrices of B, each of size N/2 x N/2

In the above method, we do 8 multiplications on matrices of size N/2 x N/2 and 4 additions. Addition of
two matrices takes O(N^2) time. So the time complexity can be written as

T(N) = 8T(N/2) + O(N^2)

From the Master Theorem, the time complexity of the above method is O(N^3),

which is unfortunately the same as the naive method.

Simple divide and conquer thus also leads to O(N^3). Can there be a better way?

In the above divide and conquer method, the main contributor to the high time complexity is the 8 recursive calls.
The idea of Strassen’s method is to reduce the number of recursive calls to 7. Strassen’s method is similar
to the simple divide and conquer method above, in the sense that it also divides matrices into sub-
matrices of size N/2 x N/2 as shown in the diagram, but in Strassen’s method, the four sub-matrices
of the result are calculated using the following formulae.

A, B and C are square matrices of size N x N
a, b, c and d are the submatrices of A, each of size N/2 x N/2
e, f, g and h are the submatrices of B, each of size N/2 x N/2
p1, p2, p3, p4, p5, p6, p7 are submatrices of size N/2 x N/2
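The formulae themselves appear in a figure that is not reproduced here; in their standard form, for the 2x2 scalar case, they can be sketched in Python as follows (for N x N matrices, a..h become N/2 x N/2 submatrices and * becomes a recursive matrix product):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using only 7 multiplications
    (the standard Strassen formulae)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    # the four quadrants of C = A x B
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]
```

Expanding the sums confirms, for example, that p5 + p4 - p2 + p6 reduces to ae + bg, the top-left entry of the product.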
Time Complexity of Strassen’s Method

Addition and subtraction of two matrices takes O(N^2) time. So the time complexity can be written as

T(N) = 7T(N/2) + O(N^2)

From the Master Theorem, the time complexity of the above method is

O(N^(log2 7)), which is approximately O(N^2.8074).
Generally, Strassen’s method is not preferred for practical applications for the following reasons.
1) The constants used in Strassen’s method are high and for a typical application Naive method works
better.
2) For Sparse matrices, there are better methods especially designed for them.
3) The submatrices in recursion take extra space.
4) Because of the limited precision of computer arithmetic on non-integer values, larger errors accumulate
in Strassen’s algorithm than in the naive method.

Greedy Algorithm
The greedy method is one of the strategies like Divide and conquer used to solve the problems. This method
is used for solving optimization problems. An optimization problem is a problem that demands either
maximum or minimum results. Let's understand through some terms.

The greedy method is the simplest and most straightforward approach. It is not a single algorithm but a
technique. Its main feature is that each decision is taken on the basis of the currently available
information: whatever information is present, the decision is made without worrying about the
effect of the current decision in the future.

This technique is basically used to determine a feasible solution that may or may not be optimal. A
feasible solution is one that satisfies the given criteria. The optimal solution is the best and most
favorable of those solutions. If more than one solution satisfies the given criteria, then all of those
solutions are considered feasible, whereas the optimal solution is the single best solution among them.

Characteristics of Greedy method

The following are the characteristics of a greedy method:


o To construct the solution in an optimal way, this algorithm creates two sets where one set contains all
the chosen items, and another set contains the rejected items.
o A Greedy algorithm makes good local choices in the hope that the solution should be either feasible or
optimal.

Components of Greedy Algorithm

The components that can be used in the greedy algorithm are:

o Candidate set: The set of items from which a solution is created.
o Selection function: This function is used to choose the candidate or subset which can be added in the
solution.
o Feasibility function: A function that is used to determine whether the candidate or subset can be
used to contribute to the solution or not.
o Objective function: A function used to assign a value to the solution or a partial solution.
o Solution function: This function is used to indicate whether a complete solution has been reached
  or not.

Applications of Greedy Algorithm


o It is used in finding the shortest path.
o It is used to find the minimum spanning tree using the prim's algorithm or the Kruskal's algorithm.
o It is used in a job sequencing with a deadline.
o This algorithm is also used to solve the fractional knapsack problem.

Pseudo code of Greedy Algorithm


Algorithm Greedy(a, n)
{
    solution := 0;
    for i = 1 to n do
    {
        x := select(a);
        if feasible(solution, x) then
        {
            solution := union(solution, x);
        }
    }
    return solution;
}

The above is the greedy algorithm. Initially, the solution is assigned the value zero. We pass the array and
the number of elements to the greedy algorithm. Inside the for loop, we select the elements one by one and
check whether adding each keeps the solution feasible. If it is feasible, then we perform the union.
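As a concrete instance of this select / feasible / union skeleton, the fractional knapsack problem listed under the applications above can be sketched in Python; the (weight, profit) items in the usage note are illustrative assumptions, not values from the text.

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack: repeatedly select the item with the
    best profit/weight ratio and take as much of it as still fits."""
    # select(): consider items in order of decreasing profit/weight
    items = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    total = 0.0
    for weight, profit in items:
        if capacity == 0:
            break
        take = min(weight, capacity)        # feasible(): respect capacity
        total += profit * take / weight     # union(): add the chosen fraction
        capacity -= take
    return total
```

With items [(2, 4), (3, 5), (1, 3)] and capacity 4, the greedy takes all of the 1 kg and 2 kg items and one third of the 3 kg item.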

Let's understand through an example.

Suppose there is a problem 'P'. I want to travel from A to B shown as below:

P:A→B

The problem is that we have to travel this journey from A to B. There are various solutions to go from A to B.
We can go from A to B by walking, car, bike, train, aeroplane, etc. There is a constraint on the journey: we
have to complete it within 12 hrs. Only by train or aeroplane can I cover this distance
within 12 hrs. There are many solutions to this problem, but only two solutions satisfy the
constraint.

Suppose we also have to cover the journey at the minimum cost. Since we want the cost to be as low as
possible, this is known as a minimization problem. So far, we have two feasible solutions: one by train and
another by air. Since travelling by train leads to the minimum cost, it is the optimal solution. An optimal
solution is also a feasible solution, but it is the one providing the best result, so the solution with the
minimum cost is the optimal solution. There is only one optimal solution. A problem that requires either a
minimum or maximum result is known as an optimization problem, and the greedy method is one of the
strategies used for solving optimization problems.

Disadvantages of using Greedy algorithm

A greedy algorithm makes decisions based on the information available at each phase without considering the
broader problem. So, there is a possibility that the greedy solution does not give the best solution for
every problem.

It follows the locally optimal choice at each stage with the intent of finding the global optimum. Let's
understand this through an example.

Consider the graph which is given below:

We have to travel from the source to the destination at the minimum cost. Suppose we have three feasible
solutions, with path costs of 10, 20, and 5. Since 5 is the minimum cost, that path is the optimal solution.
This is the local optimum, and in this way, we find the local optimum at each stage in order to build the
globally optimal solution.

Applications:

1. Knapsack Problem
2. Job Sequencing with Deadlines
3. Minimum cost spanning tree
4. Single source shortest path

1. Knapsack Problem

The Knapsack Problem is a famous Dynamic Programming Problem that falls in the optimization category.
It derives its name from a scenario where, given a set of items with specific weights and assigned values, the
goal is to maximize the value in a knapsack while remaining within the weight constraint.

Each item can only be selected once, as we don’t have multiple quantities of any item.


Example

Let’s take the example of Mary, who wants to carry some fruits in her knapsack and maximize the profit she
makes. She should pick them such that the total weight does not exceed the bag's capacity while maximizing
the value.

Here are the weights and profits associated with the different fruits:

Items: { Apple, Orange, Banana, Melon }

Weights: { 2, 3, 1, 4 }

Profits: { 4, 5, 3, 7 }

Knapsack Capacity: 5

Fruits Picked by Mary:

Banana + Melon

💰Profit = 10

Banana and Melon is the best combination, as it gives us the maximum profit (10) and the total weight does
not exceed the knapsack’s capacity (5).

The problem can be tackled using various approaches: brute force, top-down with
memoization and bottom-up are all potentially viable approaches to take.

The latter two approaches (top-down with memoization and bottom-up) make use of Dynamic Programming.
In more complex situations, these would likely be the much more efficient approaches to use.
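The brute-force approach mentioned above can be sketched in Python for Mary's example by enumerating every subset of items, which is fine here because the set is tiny.

```python
from itertools import combinations

def best_subset(items, capacity):
    """Brute-force 0/1 knapsack: try every subset of items and keep the
    most profitable one whose total weight fits in the knapsack."""
    best_combo, best_profit = (), 0
    names = list(items)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            weight = sum(items[n][0] for n in combo)
            profit = sum(items[n][1] for n in combo)
            if weight <= capacity and profit > best_profit:
                best_combo, best_profit = combo, profit
    return best_combo, best_profit

# item -> (weight, profit), from the example above
fruits = {"Apple": (2, 4), "Orange": (3, 5), "Banana": (1, 3), "Melon": (4, 7)}
picked, profit = best_subset(fruits, 5)   # -> Banana + Melon, profit 10
```

For n items this examines 2^n - 1 subsets, which is why dynamic programming is preferred once n grows.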

2. Job Sequencing with Deadlines


In the job sequencing problem, the objective is to find a sequence of jobs that are completed within their
deadlines and give maximum profit.
Solution
Let us consider a set of n given jobs, each associated with a deadline, where a profit is earned if a job is
completed by its deadline. These jobs need to be ordered in such a way that the profit is maximum.
It may happen that not all of the given jobs can be completed within their deadlines.
Assume the deadline of the ith job Ji is di and the profit received from this job is pi. Hence, the optimal
solution of this algorithm is a feasible solution with maximum profit.
Thus, D(i) > 0 for 1 ≤ i ≤ n.
Initially, these jobs are ordered according to profit, i.e. p1 ≥ p2 ≥ p3 ≥ ... ≥ pn.

Algorithm: Job-Sequencing-With-Deadline (D, J, n, k)


D(0) := J(0) := 0
k := 1
J(1) := 1 // means first job is selected
for i = 2 … n do
    r := k
    while D(J(r)) > D(i) and D(J(r)) ≠ r do
        r := r – 1
    if D(J(r)) ≤ D(i) and D(i) > r then
        for l = k … r + 1 by -1 do
            J(l + 1) := J(l)
        J(r + 1) := i
        k := k + 1
Analysis
In this algorithm, we use two loops, one nested within the other. Hence, the complexity of this algorithm
is O(n^2).
Example
Let us consider a set of given jobs as shown in the following table. We have to find a sequence of jobs, which
will be completed within their deadlines and will give maximum profit. Each job is associated with a deadline
and profit.

Job J1 J2 J3 J4 J5

Deadline 2 1 3 2 1

Profit 60 100 20 40 20

Solution
To solve this problem, the given jobs are sorted according to their profit in a descending order. Hence, after
sorting, the jobs are ordered as shown in the following table.
Job J2 J1 J4 J3 J5

Deadline 1 2 2 3 1

Profit 100 60 40 20 20

From this set of jobs, first we select J2, as it can be completed within its deadline and contributes maximum
profit.
 Next, J1 is selected as it gives more profit compared to J4.
 In the next clock, J4 cannot be selected as its deadline is over, hence J3 is selected as it executes
within its deadline.
 The job J5 is discarded as it cannot be executed within its deadline.
Thus, the solution is the sequence of jobs (J2, J1, J3), which are executed within their deadlines and give
maximum profit.
Total profit of this sequence is 100 + 60 + 20 = 180.

Job Sequencing With Deadlines-


 You are given a set of jobs.
 Each job has a defined deadline and some profit associated with it.
 The profit of a job is given only when that job is completed within its deadline.
 Only one processor is available for processing all the jobs.
 Processor takes one unit of time to complete a job.
Approach to Solution-
A feasible solution would be a subset of jobs where each job of the subset gets completed within its
deadline.
 Value of the feasible solution would be the sum of profit of all the jobs contained in the subset.
 An optimal solution of the problem would be a feasible solution which gives the maximum profit.

Greedy Algorithm-

Greedy Algorithm is adopted to determine how the next job is selected for an optimal solution.
The greedy algorithm described below always gives an optimal solution to the job sequencing problem-
Step-01:

 Sort all the given jobs in decreasing order of their profit.

Step-02:

 Check the value of maximum deadline.


 Draw a Gantt chart where maximum time on Gantt chart is the value of maximum deadline.

Step-03:

 Pick up the jobs one by one.


 Put the job on Gantt chart as far as possible from 0 ensuring that the job gets completed before its
deadline.

PRACTICE PROBLEM BASED ON JOB SEQUENCING WITH DEADLINES-


Problem-
Given the jobs, their deadlines and associated profits as shown-

Jobs J1 J2 J3 J4 J5 J6

Deadlines 5 3 3 2 4 2

Profits 200 180 190 300 120 100

Answer the following questions-


1. Write the optimal schedule that gives maximum profit.
2. Are all the jobs completed in the optimal schedule?
3. What is the maximum earned profit?
Solution-
Step-01:
Sort all the given jobs in decreasing order of their profit-

Jobs J4 J1 J3 J2 J5 J6

Deadlines 2 5 3 3 4 2

Profits 300 200 190 180 120 100

Step-02:

Value of maximum deadline = 5.


So, draw a Gantt chart with maximum time on Gantt chart = 5 units as shown-
Now,
 We take each job one by one in the order they appear in Step-01.
 We place the job on Gantt chart as far as possible from 0.

Step-03:

 We take job J4.


 Since its deadline is 2, so we place it in the first empty cell before deadline 2 as-

Step-04:
We take job J1.
 Since its deadline is 5, so we place it in the first empty cell before deadline 5 as-

Step-05:

 We take job J3.


 Since its deadline is 3, so we place it in the first empty cell before deadline 3 as-

Step-06:
 We take job J2.
 Since its deadline is 3, so we place it in the first empty cell before deadline 3.
 Since the second and third cells are already filled, so we place job J2 in the first cell as-

Step-07:

 Now, we take job J5.


 Since its deadline is 4, so we place it in the first empty cell before deadline 4 as-

Now,
 The only job left is job J6 whose deadline is 2.
 All the slots before deadline 2 are already occupied.
 Thus, job J6 can not be completed.

Now, the given questions may be answered as-

Part-01:

The optimal schedule is-


J2 , J4 , J3 , J5 , J1
This is the required order in which the jobs must be completed in order to obtain the maximum profit.

Part-02:
 Not all the jobs are completed in the optimal schedule.
 This is because job J6 could not be completed within its deadline.

Part-03:

Maximum earned profit


= Sum of profit of all the jobs in optimal schedule
= Profit of job J2 + Profit of job J4 + Profit of job J3 + Profit of job J5 + Profit of job J1
= 180 + 300 + 190 + 120 + 200
= 990 units
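The Gantt-chart procedure above can be sketched in Python, with a list of slots standing in for the chart (jobs and profits taken from the practice problem):

```python
def job_sequencing(jobs):
    """Greedy job sequencing: sort jobs by profit in decreasing order,
    then place each job in the latest free slot at or before its
    deadline; jobs with no free slot are dropped."""
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)
    slots = [None] * max(d for _, d, _ in jobs)   # one slot per time unit
    for name, deadline, profit in jobs:
        for t in range(min(deadline, len(slots)) - 1, -1, -1):
            if slots[t] is None:                  # latest free slot found
                slots[t] = (name, profit)
                break
    schedule = [s[0] for s in slots if s is not None]
    total = sum(s[1] for s in slots if s is not None)
    return schedule, total

jobs = [("J1", 5, 200), ("J2", 3, 180), ("J3", 3, 190),
        ("J4", 2, 300), ("J5", 4, 120), ("J6", 2, 100)]
schedule, total = job_sequencing(jobs)   # -> J2, J4, J3, J5, J1 with profit 990
```

As in the walkthrough, J6 finds no free slot before its deadline and is dropped.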


3. Minimum Cost Spanning Tree

What is a Spanning Tree?

A spanning tree is a subset of graph G which covers all the vertices with the minimum possible number of
edges. Hence, a spanning tree does not have cycles and cannot be disconnected.
By this definition, we can draw a conclusion that every connected and undirected Graph G has at least one
spanning tree. A disconnected graph does not have any spanning tree, as it cannot be spanned to all its
vertices.
We found three spanning trees from one complete graph. A complete undirected graph can have a maximum
of n^(n-2) spanning trees, where n is the number of nodes. In the above example, n is
3, hence 3^(3-2) = 3 spanning trees are possible.

What is minimum cost spanning tree:


Given an undirected and connected graph G=(V,E), a spanning tree of the graph G is a tree that spans G (that
is, it includes every vertex of G) and is a subgraph of G (every edge in the tree belongs to G)

General Properties of Spanning Tree


We now understand that one graph can have more than one spanning tree. Following are a few properties
of the spanning tree connected to graph G −
 A connected graph G can have more than one spanning tree.
 All possible spanning trees of graph G, have the same number of edges and vertices.
 The spanning tree does not have any cycle (loops).
 Removing one edge from the spanning tree will make the graph disconnected, i.e. the spanning tree
is minimally connected.
 Adding one edge to the spanning tree will create a circuit or loop, i.e. the spanning tree is maximally
acyclic.
Minimum Spanning Tree (MST)
In a weighted graph, a minimum spanning tree is a spanning tree that has the minimum weight among all other
spanning trees of the same graph. In real-world situations, this weight can be measured as distance,
congestion, traffic load or any arbitrary value assigned to the edges.
Minimum Spanning-Tree Algorithm
We shall learn about two most important spanning tree algorithms here −
 Kruskal's Algorithm
 Prim's Algorithm
Both are greedy algorithms.

1. Kruskal’s Algorithm

Kruskal's algorithm is used to find the minimum spanning tree of a connected weighted graph. The main
target of the algorithm is to find the subset of edges through which we can reach every vertex of the
graph. It follows the greedy approach, making an optimum choice at every stage instead of reasoning about a
global optimum directly.

How does Kruskal's algorithm work?

In Kruskal's algorithm, we start from edges with the lowest weight and keep adding the edges until the goal is
reached. The steps to implement Kruskal's algorithm are listed as follows -

o First, sort all the edges from low weight to high.


o Now, take the edge with the lowest weight and add it to the spanning tree. If the edge to be added
creates a cycle, then reject the edge.
o Continue to add the edges until we reach all vertices, and a minimum spanning tree is created.

The applications of Kruskal's algorithm are -

o Kruskal's algorithm can be used to layout electrical wiring among cities.


o It can be used to lay down LAN connections.

Example of Kruskal's algorithm

Now, let's see the working of Kruskal's algorithm using an example. It will be easier to understand Kruskal's
algorithm using an example.

Suppose a weighted graph is -


The weight of the edges of the above graph is given in the below table -

Edge AB AC AD AE BC CD DE

Weight 1 7 10 5 3 4 2

Now, sort the edges given above in the ascending order of their weights.

Edge AB DE BC CD AE AC AD

Weight 1 2 3 4 5 7 10

Now, let's start constructing the minimum spanning tree.

Step 1 - First, add the edge AB with weight 1 to the MST.


Step 2 - Add the edge DE with weight 2 to the MST, as it does not create a cycle.

Step 3 - Add the edge BC with weight 3 to the MST, as it does not create any cycle or loop.
Step 4 - Now, pick the edge CD with weight 4 and add it to the MST, as it does not form a cycle.

Step 5 - After that, pick the edge AE with weight 5. Including this edge would create a cycle, so discard it.

Step 6 - Pick the edge AC with weight 7. Including this edge would create a cycle, so discard it.

Step 7 - Pick the edge AD with weight 10. Including this edge would also create a cycle, so discard it.

So, the final minimum spanning tree obtained from the given weighted graph by using Kruskal's algorithm is -
The cost of the MST = AB + DE + BC + CD = 1 + 2 + 3 + 4 = 10.

Now, the number of edges in the above tree equals the number of vertices minus 1. So, the algorithm stops
here.
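As a check, the walkthrough above can be replayed in code on the same edge table. The dictionary-based component merging below is a simple stand-in for union-find, adequate for a five-vertex example:

```python
# Replaying Kruskal's algorithm on the example graph above.
edges = {'AB': 1, 'AC': 7, 'AD': 10, 'AE': 5, 'BC': 3, 'CD': 4, 'DE': 2}

comp = {v: v for v in 'ABCDE'}     # each vertex starts in its own component
mst, cost = [], 0
for e, w in sorted(edges.items(), key=lambda kv: kv[1]):
    u, v = e[0], e[1]
    if comp[u] != comp[v]:         # the edge joins two components: no cycle
        old, new = comp[u], comp[v]
        for x in comp:             # merge the components (fine for tiny graphs)
            if comp[x] == old:
                comp[x] = new
        mst.append(e)
        cost += w
# mst is now ['AB', 'DE', 'BC', 'CD'] and cost is 10, matching the result above
```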

Complexity of Kruskal's algorithm

Now, let's see the time complexity of Kruskal's algorithm.

o Time Complexity
The time complexity of Kruskal's algorithm is O(E log E), which equals O(E log V), where E is the no. of
edges, and V is the no. of vertices.

2. Prim’s Algorithm

Prim's Algorithm is a greedy algorithm that is used to find a minimum spanning tree of a graph. Prim's
algorithm finds the subset of edges that includes every vertex of the graph such that the sum of the weights
of the edges is minimized.

Prim's algorithm starts with a single node and, at every step, explores all the adjacent nodes along with their
connecting edges. The edge with the minimal weight that causes no cycle in the graph gets selected.

How does Prim's algorithm work?

Prim's algorithm is a greedy algorithm that starts from one vertex and continues to add the edges with the
smallest weight until the goal is reached. The steps to implement Prim's algorithm are given as follows -

o First, we have to initialize an MST with the randomly chosen vertex.


o Now, we have to find all the edges that connect the tree in the above step with the new vertices.
From the edges found, select the minimum edge and add it to the tree.
o Repeat step 2 until the minimum spanning tree is formed.
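The three steps above can be sketched in Python using a binary heap as the priority queue; the adjacency-list layout and the names are illustrative, not from the text.

```python
import heapq

# A minimal sketch of Prim's algorithm. `graph` maps each vertex to a
# list of (weight, neighbour) pairs (an adjacency list).

def prim(graph, start):
    visited = {start}                    # step 1: the MST starts from one vertex
    heap = list(graph[start])            # candidate edges leaving the tree
    heapq.heapify(heap)
    total = 0
    while heap and len(visited) < len(graph):
        w, v = heapq.heappop(heap)       # step 2: cheapest edge out of the tree
        if v in visited:                 # it would create a cycle: skip it
            continue
        visited.add(v)
        total += w
        for edge in graph[v]:            # step 3: repeat with the new frontier
            heapq.heappush(heap, edge)
    return total
```

On a triangle with edge weights 1, 2 and 4, for example, the sketch keeps the weight-1 and weight-2 edges and returns a total of 3.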

The applications of Prim's algorithm are -

o Prim's algorithm can be used in network design.


o It can be used in designing cable and piping networks.
o It can also be used to lay down electrical wiring cables.

Example of Prim's algorithm

Now, let's see the working of Prim's algorithm using an example. It will be easier to understand Prim's
algorithm using an example.

Suppose, a weighted graph is -

Step 1 - First, we have to choose a vertex from the above graph. Let's choose B.
Step 2 - Now, we have to choose and add the shortest edge from vertex B. There are two edges from vertex B
that are B to C with weight 10 and edge B to D with weight 4. Among the edges, the edge BD has the
minimum weight. So, add it to the MST.

Step 3 - Now, again, choose the edge with the minimum weight among the edges leaving the tree. In this
case, the candidate edges include DE and CD. Select the edge DE and add it to the MST.

Step 4 - Now, select the edge CD, and add it to the MST.
Step 5 - Now, choose the edge CA. Here, we cannot select the edge CE as it would create a cycle in the graph.
So, choose the edge CA and add it to the MST.

So, the graph produced in step 5 is the minimum spanning tree of the given graph. The cost of the MST is
given below -

Cost of MST = 4 + 2 + 1 + 3 = 10 units.

Complexity of Prim's algorithm

Now, let's see the time complexity of Prim's algorithm. The running time of Prim's algorithm depends on the
data structure used for the graph and for ordering the edges. The below table shows some choices -

o Time Complexity

Data structure used for the minimum edge weight Time Complexity
Adjacency matrix, linear searching O(|V|2)

Adjacency list and binary heap O(|E| log |V|)

Adjacency list and Fibonacci heap O(|E|+ |V| log |V|)

Prim's algorithm can be simply implemented by using the adjacency matrix or adjacency list graph
representation; adding the edge with the minimum weight then requires a linear search of an array of
weights, which takes O(|V|2) running time. It can be improved further by using a heap to find the minimum
weight edges in the inner loop of the algorithm.

3. Single Source Shortest Paths

Introduction:

In a shortest-paths problem, we are given a weighted, directed graph G = (V, E), with weight function w: E
→ R mapping edges to real-valued weights. The weight of a path p = (v0, v1, ....., vk) is the sum of the weights of
its constituent edges:

w(p) = w(v0, v1) + w(v1, v2) + ... + w(vk-1, vk)

We define the shortest-path weight from u to v by δ(u, v) = min { w(p) : p is a path from u to v }, if there is a
path from u to v, and δ(u, v) = ∞ otherwise.

The shortest path from vertex s to vertex t is then defined as any path p with weight w (p) = δ(s,t).

The breadth-first-search algorithm is a shortest path algorithm that works on unweighted graphs, that is,
graphs in which each edge can be considered to have unit weight.

In a Single Source Shortest Paths Problem, we are given a Graph G = (V, E), we want to find the shortest path
from a given source vertex s ∈ V to every vertex v ∈ V.

Variants:

There are some variants of the shortest path problem.

o Single-destination shortest-paths problem: Find the shortest path to a given destination vertex t
from every vertex v. By reversing the direction of each edge in the graph, we can reduce this problem to a
single-source problem.
o Single-pair shortest-path problem: Find the shortest path from u to v for given vertices u and v. If
we solve the single-source problem with source vertex u, we solve this problem as well.
Furthermore, no algorithms for this problem are known that run asymptotically faster than the best
single-source algorithms in the worst case.
o All-pairs shortest-paths problem: Find the shortest path from u to v for every pair of vertices u and
v. Running a single-source algorithm once from each vertex solves this problem; but it can
generally be solved faster, and its structure is of interest in its own right.

Shortest Path: Existence:

If some path from s to v contains a negative-cost cycle, then the shortest path does not exist.
Otherwise, there exists a shortest s-v path that is simple.

Dijkstra’s Algorithm – Single Source Shortest Path Algorithm

Dijkstra's Algorithm solves the Single Source Shortest Path (SSSP) problem. It is used to find the
shortest path from a source node to a destination node in a graph.

The graph is a widely accepted data structure to represent a distance map; the distances between cities, for
example, are effectively represented using a graph.

 Dijkstra proposed an efficient way to find the single source shortest path from the weighted graph.
For a given source vertex s, the algorithm finds the shortest path to every other vertex v in the graph.
 Assumption : Weight of all edges is non-negative.
 Steps of the Dijkstra’s algorithm are explained here:

1. Initializes the distance of source vertex to zero and remaining all other vertices to infinity.

2. Set the source node as the current node and put all the remaining nodes in the unvisited vertex list.
Compute the tentative distance of all immediate neighbour vertices of the current node.

3. If the newly computed value is smaller than the old value, then update it.
For example, C is the current node, whose distance from source S is dist (S, C) = 5.

 Consider N is the neighbour of C and weight of edge


(C, N) is 3. So the distance of N from source via C would be 8.
 If the distance of N from source was already computed and if it is greater than 8 then relax edge (S, N)
and update it to 8, otherwise don’t update it.

Case 1: d(S, N) = 11.
d(S, C) + d(C, N) = 8 < d(S, N) ⇒ Relax edge (S, N): update d(S, N) = 8.

Case 2: d(S, N) = 7.
d(S, C) + d(C, N) = 8 > d(S, N) ⇒ Don't update d(S, N).

Weight updating in Dijkstra's algorithm
4. When all the neighbours of the current node are explored, mark it as visited and remove it from the
unvisited vertex list. Then select the vertex in the unvisited vertex list with the minimum distance and
repeat the procedure.

5. Stop when the destination node is tested or when unvisited vertex list becomes empty.
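The five steps above can be sketched with a min-priority queue (Python's `heapq`); `dist` and `parent` mirror the dist[u] and π[u] labels of the worked example below, and the names are illustrative.

```python
import heapq

# A minimal sketch of Dijkstra's algorithm. `graph` maps each vertex to a
# list of (neighbour, weight) pairs; all weights are assumed non-negative.

def dijkstra(graph, source):
    dist = {v: float('inf') for v in graph}   # step 1: every distance is infinity...
    dist[source] = 0                          # ...except the source itself
    parent = {v: None for v in graph}
    heap, visited = [(0, source)], set()
    while heap:
        d, u = heapq.heappop(heap)            # unvisited vertex with minimum dist
        if u in visited:
            continue
        visited.add(u)                        # step 4: mark u as visited
        for v, w in graph[u]:
            if d + w < dist[v]:               # step 3: relax edge (u, v)
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, parent
```

On the S/C/N relaxation example above (S→C of weight 5, C→N of weight 3, S→N of weight 11), the distance of N settles at 8, reached via C.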

Examples

Problem: Suppose Dijkstra’s algorithm is run on the following graph, starting at node A

1. Draw a table showing the intermediate distance values of all the nodes at each iteration of the
algorithm.
2. Show the final shortest path tree.

Solution:

Here, source vertex is A.

dist[u] indicates distance of vertex u from source

π[u] indicates parent / previous node of u

Initialization:

dist[source] = 0 ⇒dist[A] = 0

π[source] = undefined ⇒ π[A] = NIL

dist[B] = dist[C] = dist[D] = dist[E] = dist [F] = dist[G]= dist[H] = ∞

π[B] = π[C] = π[D] = π[E] = π[F] = π[G] = π[H]= NIL

Vertex u A B C D E F G H

dist[u] 0 ∞ ∞ ∞ ∞ ∞ ∞ ∞

π[u] NIL NIL NIL NIL NIL NIL NIL NIL


Iteration 1:

u = unprocessed vertex in Q having minimum dist[u] = A

Adjacent[A] = {B, E, F}

val[B] = dist[A] + weight(A, B)

=0+1

=1

Here, val[B] <dist[B], so update dist[B]

dist[B] = 1, and π[B] = A

val[E] = dist[A] + weight(A, E)

=0+4
=4

Here, val[E] <dist[E], so update dist[E]

dist[E] = 4 and π[E] = A

val[F] = dist[A] + weight(A, F)

=0+8

=8

Here, val[F] <dist[F], so update dist[F]

dist[F] = 8 and π[F] = A

Vertex u A B C D E F G H

dist[u] 0 1 ∞ ∞ 4 8 ∞ ∞

π[u] NIL A NIL NIL A A NIL NIL


Iteration 2:

u = unprocessed vertex in Q having minimum dist[u] = B

Adjacent[B] = {C, F, G}

val[C] = dist[B] + weight(B, C)

=1+2

=3

Here, val[C] <dist[C], so update dist[C]

dist[C] = 3 and π[C] = B

val[F] = dist[B] + weight(B, F)

=1+6

=7

Here, val[F] <dist[F], so update dist[F]


dist[F] = 7 and π[F] = B

val[G] = dist[B] + weight(B, G)

=1+6

=7

Here, val[G] <dist[G], so update dist[G]

dist[G] = 7 and π[G] = B

Vertex u A B C D E F G H

dist[u] 0 1 3 ∞ 4 7 7 ∞

π[u] NIL A B NIL A B B NIL


Iteration 3:

u = unprocessed vertex in Q having minimum dist[u] = C

Adjacent [C] = {D, G}

val[D] = dist[C] + weight(C, D)

=3+1

=4

Here, val[D] <dist[D], so update dist[D]

dist[D] = 4 and π[D] = C

val[G] = dist[C] + weight(C, G)

=3+2

=5

Here, val[G] <dist[G], so update dist[G]

dist[G] = 5 and π[G] = C


Vertex u A B C D E F G H

dist[u] 0 1 3 4 4 7 5 ∞

π[u] NIL A B C A B C NIL


Iteration 4:

u = unprocessed vertex in Q having minimum dist[u] = E

Adjacent[E] = {F}

val[F] = dist[E] + weight(E, F)

=4+5

=9

Here, val[F] >dist[F], so no change in table

Vertex u A B C D E F G H

dist[u] 0 1 3 4 4 7 5 ∞

π[u] NIL A B C A B C NIL


Iteration 5:

u = unprocessed vertex in Q having minimum dist[u] = D

Adjacent[D] = {G, H}

val[G] = dist[D] + weight(D, G)

=4+1

=5

Here, val[G] = dist[G], so don’t update dist[G]

val[H] = dist[D] + weight(D, H)

=4+4
=8

Here, val[H] <dist[H], so update dist[H]

dist[H] = 8 and π[H] = D

Vertex u A B C D E F G H

dist[u] 0 1 3 4 4 7 5 8

π [u] NIL A B C A B C D
Iteration 6:

u = unprocessed vertex in Q having minimum dist[u] = G

Adjacent[G] = { F, H }

val[F] = dist[G] + weight(G, F)

=5+1

=6

Here, val[F] <dist[F], so update dist[F]

dist[F] = 6 and π[F] = G

val[H] = dist[G] + weight(G, H)

=5+1

=6

Here, val[H] <dist[H], so update dist[H]

dist[H] = 6 and π[H] = G

Vertex u A B C D E F G H

dist[u] 0 1 3 4 4 6 5 6

π [u] NIL A B C A G C G
Iteration 7:

u = unprocessed vertex in Q having minimum dist[u] = F

Adjacent[F] = { }

So, no change in table

Vertex u A B C D E F G H

dist[u] 0 1 3 4 4 6 5 6

π[u] NIL A B C A G C G
Iteration 8:

u = unprocessed vertex in Q having minimum dist[u] = H

Adjacent[H] = { }

So, no change in table

Vertex u A B C D E F G H

dist[u] 0 1 3 4 4 6 5 6

π[u] NIL A B C A G C G
We can easily derive the shortest path tree for the given graph from the above table. In the table, π[u]
indicates the parent node of vertex u. The shortest path tree is shown in the following figure.
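As a check, the final table can be reproduced by running the algorithm on adjacency lists transcribed from the Adjacent[·] sets and edge weights used in the iterations above:

```python
import heapq

# The example graph, transcribed from the iterations above
# (each inner dictionary maps a neighbour to the edge weight).
graph = {
    'A': {'B': 1, 'E': 4, 'F': 8},
    'B': {'C': 2, 'F': 6, 'G': 6},
    'C': {'D': 1, 'G': 2},
    'D': {'G': 1, 'H': 4},
    'E': {'F': 5},
    'F': {},
    'G': {'F': 1, 'H': 1},
    'H': {},
}

dist = {v: float('inf') for v in graph}
dist['A'] = 0
heap, visited = [(0, 'A')], set()
while heap:
    d, u = heapq.heappop(heap)
    if u in visited:
        continue
    visited.add(u)
    for v, w in graph[u].items():
        if d + w < dist[v]:               # relax edge (u, v)
            dist[v] = d + w
            heapq.heappush(heap, (dist[v], v))
# dist now matches the last row of the table:
# A=0, B=1, C=3, D=4, E=4, F=6, G=5, H=6
```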
10 Marks Questions
1. Explain in detail quick sorting method? Provide a complete analysis of quick sort? May – 2020, Dec-
2018
2. Describe and write the quick sort algorithm. Show how quick sort sorts the following sequence of keys:
310, 285, 179, 652, 351, 423, 861, 254, 450, 520. Analyze the time complexity of the algorithm. Dec-2017
3. What are greedy algorithms? What are the characteristics? Explain any greedy algorithm with
example? May – 2020, May-2017
4. What are greedy algorithms? What are their characteristics? Explain any greedy algorithm with example.
Dec-2018
5. Sort the list 415,213,700,515,712,715 using merge sort algorithm? Also explain the time complexity of
merge sort algorithm? May-2019, May-2016
6. Explain the algorithm of merge sort. Compute the time complexity of merge sort.
Also sort the list 415, 213, 700, 515, 712, 715 using merge sort. Dec-2016

7. Explain Strassen's algorithm for matrix multiplication with the help of anexample. May-2019, May-
2017, Dec-2016

8. Discuss the Strassen's matrix multiplication algorithm in detail. Also, give an illustrative example to
explain the efficiency achieved through this algorithm. Dec-2018

9. Write a short note for the following: May-2019, May-2016

a. Divide and conquer technique

b. Greedy algorithm

10. Using Dijkstra’s algorithm find the shortest path from A to D for the following graph. May-2019, May-
2017
(Graph figure: vertices A to H with weighted edges, as shown in the original question paper.)

11. Describe Dijkstra's algorithm to solve the single-source shortest path problem. What is its
time complexity? Dec-2017

12. Write an algorithm based on divide-and-conquer strategy to search an element in a given list. Assume
that the elements of list are in sorted order. May-2018, Dec-2017
13. Define spanning tree. Write Kruskal’s algorithm for finding minimum cost spanning tree. Describe
how Kruskal’s algorithm is different from Prim’s algorithm for finding minimum cost spanningtree.
May-2018, Dec-2019,Dec-2017

14. Suppose we use Dijkstra's greedy, single source shortest path algorithm on an undirected graph.
What constraint must we have for the algorithm to work and why?

15. Compare the various programming paradigms such as divide-and-conquer, dynamic programming
and greedy approach. May-2018

16. Explain finding maximum and minimum using divide and conquer with suitable example?

17. Explain about knapsack problem using greedy method?

Unit-4 2 marks questions:


1. What is brute force algorithm?
A straightforward approach, usually based directly on the problem's statement and definitions of the
concepts involved.
2. List the strength and weakness of brute force algorithm.
Strengths
a. wide applicability,
b. simplicity
c. yields reasonable algorithms for some important
problems (e.g., matrix multiplication, sorting, searching, string matching)

Weaknesses

a. rarely yields efficient algorithms


b. some brute-force algorithms are unacceptably slow; not as constructive as some
other design techniques
3. What is exhaustive search?
A brute force solution to a problem involving search for an element with a special property, usually among
combinatorial objects such as permutations, combinations, or subsets of a set.
4. Give the general plan of exhaustive search.
Method:
• generate a list of all potential solutions to the problem in a systematic manner
• evaluate potential solutions one by one, disqualifying infeasible ones and, for an optimization
problem, keeping track of the best one found so far
• when search ends, announce the solution(s) found
5. Give the general plan for divide-and-conquer algorithms.
The general plan is as follows
 A problem instance is divided into several smaller instances of the same problem,
ideally about the same size
 The smaller instances are solved, typically recursively
 If necessary the solutions obtained are combined to get the solution of the original problem
Given a function to compute on 'n' inputs, the divide-and-conquer strategy suggests splitting the inputs
into 'k' distinct subsets, 1 < k ≤ n, yielding 'k' subproblems. The subproblems must be solved, and then a
method must be found to combine subsolutions into a solution of the whole. If the subproblems are still
relatively large, then the divide-and-conquer strategy can possibly be reapplied.
6. List the advantages of Divide and Conquer Algorithm
Solving difficult problems, Algorithm efficiency, Parallelism, Memory access, Round off control.
7. What is the general divide-and-conquer recurrence relation?
An instance of size 'n' can be divided into several instances of size n/b, with 'a' of them needing to be
solved. Assuming that size 'n' is a power of 'b', to simplify the analysis, the following recurrence for the
running time is obtained:
T(n) = aT(n/b) + f(n)
where f(n) is a function that accounts for the time spent on dividing the problem into smaller ones and
on combining their solutions.
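For example, mergesort fits this recurrence with a = 2, b = 2 and f(n) = n. Evaluating T(n) = 2T(n/2) + n directly (with T(1) = 0) shows that it equals n log2 n exactly when n is a power of 2:

```python
from math import log2

# The mergesort instance of the divide-and-conquer recurrence:
# T(n) = 2T(n/2) + n, with base case T(1) = 0.
def T(n):
    return 0 if n == 1 else 2 * T(n // 2) + n

for n in (2, 8, 64, 1024):
    assert T(n) == n * log2(n)   # T(n) = n log2(n) for powers of 2
```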
8. Define mergesort.
Mergesort sorts a given array A[0..n-1] by dividing it into two halves A[0..(n/2)-1] and A[n/2..n-1], sorting
each of them recursively, and then merging the two smaller sorted arrays into a single sorted one.
9. List the Steps in Merge Sort
1. Divide Step: If the given array A has zero or one element, return A; it is already sorted. Otherwise,
divide A into two arrays, A1 and A2, each containing about half of the elements of A.
2. Recursion Step: Recursively sort array A1 and A2.
3. Conquer Step: Combine the elements back in A by merging the sorted arrays A1 and A2 into a
sorted sequence
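The three steps above can be sketched as follows (a minimal Python sketch, not a tuned implementation):

```python
# A minimal sketch of merge sort following the divide / recursion /
# conquer steps listed above.

def merge_sort(a):
    if len(a) <= 1:              # divide step: zero or one element, already sorted
        return a
    mid = len(a) // 2
    a1 = merge_sort(a[:mid])     # recursion step: sort each half
    a2 = merge_sort(a[mid:])
    merged, i, j = [], 0, 0      # conquer step: merge the sorted halves
    while i < len(a1) and j < len(a2):
        if a1[i] <= a2[j]:
            merged.append(a1[i])
            i += 1
        else:
            merged.append(a2[j])
            j += 1
    return merged + a1[i:] + a2[j:]
```

On the exam list 415, 213, 700, 515, 712, 715 it produces 213, 415, 515, 700, 712, 715.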
10. List out Disadvantages of Divide and Conquer Algorithm
 Conceptual difficulty
 Recursion overhead
 Repeated subproblems
11. Define Quick Sort
Quick sort is an algorithm of choice in many situations because it is not difficult to implement, it is a good
"general purpose" sort and it consumes relatively fewer resources during execution.
12. List out the Advantages in Quick Sort
 It is in-place since it uses only a small auxiliary stack.
 It requires only n log(n) time, on average, to sort n items.
 It has an extremely short inner loop.
 This algorithm has been subjected to a thorough mathematical analysis; a very precise
statement can be made about performance issues.
13. List out the Disadvantages in Quick Sort
• It is recursive. Especially if recursion is not available, the implementation is extremely complicated.
• It requires quadratic (i.e., n^2) time in the worst case.
• It is fragile, i.e., a simple mistake in the implementation can go unnoticed and cause it to
perform badly.
14. What is the difference between quicksort and mergesort?
Both quicksort and mergesort use the divide-and-conquer technique in which the given array is partitioned
into subarrays and solved. The difference lies in the technique that the arrays are partitioned. For
mergesort the arrays are partitioned according to their position and in quicksort they are partitioned
according to the element values.
15. What is binary search?
Binary search is a remarkably efficient algorithm for searching in a sorted array. It works by comparing
a search key K with the array's middle element A[m]. If they match, the algorithm stops; otherwise the
same operation is repeated recursively for the first half of the array if K < A[m], and for the second half if
K > A[m].
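A minimal recursive sketch of this description (the parameter names are illustrative):

```python
# Binary search: compare key K with the middle element A[m] and recurse
# on the half that can still contain K. A must be sorted in ascending order.

def binary_search(A, K, lo=0, hi=None):
    if hi is None:
        hi = len(A) - 1
    if lo > hi:
        return -1                # K is not present
    m = (lo + hi) // 2
    if A[m] == K:
        return m
    if K < A[m]:
        return binary_search(A, K, lo, m - 1)   # first half
    return binary_search(A, K, m + 1, hi)       # second half
```

Searching for 99 in [45, 77, 89, 90, 94, 99, 100] probes the middle elements 90 and then 99, returning index 5.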

16. List out the 4 steps in Strassen’s Method?


1. Divide the input matrices A and B into n/2 * n/2 submatrices.
2. Using Θ(n^2) scalar additions and subtractions, compute 14 n/2 * n/2 matrices A1, B1, A2, B2, …, A7, B7.
3. Recursively compute the seven matrix products Pi = AiBi for i = 1, 2, …, 7.
4. Compute the desired submatrices r, s, t, u of the result matrix C by adding and/or subtracting
various combinations of the Pi matrices, using only Θ(n^2) scalar additions and subtractions.
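For the 2 × 2 base case, the seven products and the combining step look as follows. This is a minimal sketch using one common labelling of the products; the full algorithm applies the same formulas recursively to n/2 × n/2 submatrices.

```python
# Strassen's seven products for a 2x2 matrix product C = A * B.

def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    p1 = (a11 + a22) * (b11 + b22)   # the seven multiplications
    p2 = (a21 + a22) * b11
    p3 = a11 * (b12 - b22)
    p4 = a22 * (b21 - b11)
    p5 = (a11 + a12) * b22
    p6 = (a21 - a11) * (b11 + b12)
    p7 = (a12 - a22) * (b21 + b22)
    # combine the products using additions and subtractions only
    return [[p1 + p4 - p5 + p7, p3 + p5],
            [p2 + p4, p1 - p2 + p3 + p6]]
```

For instance, `strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]])` gives [[19, 22], [43, 50]], the ordinary matrix product, while using seven multiplications instead of eight.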

17. Define dynamic programming.


Dynamic programming is an algorithm design method that can be used when a solution to the problem
is viewed as the result of a sequence of decisions.
Dynamic programming is a technique for solving problems with overlapping subproblems. These
subproblems arise from a recurrence relating a solution to a given problem with solutions to its smaller
subproblems. Each subproblem is solved only once and the results are recorded in a table, from which the
solution to the original problem is obtained. It was invented by a prominent U.S. mathematician, Richard
Bellman, in the 1950s.
18. What are the features of dynamic programming?
a. Optimal solutions to sub problems are retained so as to avoid recomputing their values.
b. Decision sequences containing subsequences that are sub optimal are not considered.
c. It definitely gives the optimal solution always.
19. What are the drawbacks of dynamic programming?
a. Time and space requirements are high, since storage is needed for all level.
b. Optimality should be checked at all levels.
20. Write the general procedure of dynamic programming.
The development of dynamic programming algorithm can be broken into a sequence of 4 steps.
a. Characterize the structure of an optimal solution.
b. Recursively define the value of the optimal solution.
c. Compute the value of an optimal solution in the bottom-up fashion.
d. Construct an optimal solution from the computed information.
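As a tiny illustration of the bottom-up step, assuming Fibonacci numbers as the example problem (not from the text):

```python
# Bottom-up dynamic programming for Fibonacci numbers: each subproblem
# is solved once and stored in a table, never recomputed.

def fib(n):
    table = [0, 1] + [0] * max(0, n - 1)   # table of subproblem solutions
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```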
21. Define principle of optimality.
It states that an optimal sequence of decisions has the property that whatever the initial state and
decision are, the remaining decisions must constitute an optimal sequence with regard to the state
resulting from the first decision.
22. Write the difference between the Greedy method and Dynamic programming.
a. Greedy method
1. Only one sequence of decisions is generated.
2. It does not guarantee to give an optimal solution always.
b. Dynamic programming
1. Many sequences of decisions are generated.
2. It definitely gives an optimal solution always.
23. What is greedy technique?
Greedy technique suggests a greedy grab of the best alternative available in the hope that a sequence of
locally optimal choices will yield a globally optimal solution to the entire problem. The choice must be
made as follows
a. Feasible : It has to satisfy the problem’s constraints
b. Locally optimal : It has to be the best local choice among all feasible choices available
on that step.
c. Irrevocable : Once made, it cannot be changed on a subsequent step of the algorithm
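A classic illustration of these three requirements is the fractional knapsack problem: repeatedly grab the item, or the fraction of it that fits, with the highest value per unit weight. A minimal sketch, with illustrative names:

```python
# Greedy fractional knapsack: each grab is feasible (it fits), locally
# optimal (best value/weight ratio available) and irrevocable.

def fractional_knapsack(items, capacity):
    total = 0.0
    # consider items in decreasing value-per-unit-weight order
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if capacity == 0:
            break
        take = min(weight, capacity)   # whole item, or the fraction that fits
        total += value * take / weight
        capacity -= take
    return total
```

With items of (value, weight) = (60, 10), (100, 20), (120, 30) and capacity 50, the greedy choice takes the first two items whole plus two-thirds of the third, for a total value of 240.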
24. Write any two characteristics of Greedy Algorithm?
a. To solve a problem in an optimal way, construct the solution from the given set of
candidates. As the algorithm proceeds, two other sets get accumulated: one
set contains the candidates that have been already considered and chosen, while
the other set contains the candidates that have been considered but rejected.
25. What is the Greedy choice property?
a. The first component is the greedy choice property, i.e., a globally optimal solution
can be arrived at by making a locally optimal choice.
b. The choice made by a greedy algorithm depends on the choices made so far, but it cannot
depend on any future choices or on solutions to the subproblems.
c. It progresses in a top-down fashion.
26. What is greedy method?
Greedy method is an important design technique, which makes the choice that looks best at the
moment. Given 'n' inputs, we are required to obtain a subset that satisfies some constraints; that is the
feasible solution. A greedy method suggests that one can devise an algorithm that works in stages,
considering one input at a time.
27. What are the steps required to develop a greedy algorithm?
a. Determine the optimal substructure of the problem.
b. Develop a recursive solution.
c. Prove that at any stage of recursion one of the optimal choices is the greedy choice.
Thus it is always safe to make the greedy choice.
d. Show that all but one of the sub problems induced by having made the greedy choice
are empty.
e. Develop a recursive algorithm and convert into iterative algorithm.
28. What is greedy technique?
a. Greedy technique suggests a greedy grab of the best alternative available in the hope
that a sequence of locally optimal choices will yield a globally optimal solution to the
entire problem. The choice must be made as follows.
b. Feasible: It has to satisfy the problem's constraints.
c. Locally optimal: It has to be the best local choice among all feasible choices available on
that step.
d. Irrevocable : Once made, it cannot be changed on a subsequent step of the algorithm
29. What are the labels in Prim’s algorithm used for?
Prim's algorithm makes it necessary to provide each vertex not in the current tree with the information
about the shortest edge connecting the vertex to a tree vertex. The information is provided by attaching
two labels to a vertex.
a. The name of the nearest tree vertex.
b. The length of the corresponding edge
30. How are the vertices not in the tree split into?
The vertices that are not in the tree are split into two sets
a. Fringe : It contains the vertices that are not in the tree but are adjacent to at least one
tree vertex.
b. Unseen : All other vertices of the graph are called unseen because they are yet to be
affected by the algorithm.
31. What are the operations to be done after identifying a vertex u* to be added to the tree?
After identifying a vertex u* to be added to the tree, the following two operations need to be performed
a. Move u* from the set V-VT to the set of tree vertices VT.
b. For each remaining vertex u in V-VT that is connected to u* by a shorter edge than
u's current distance label, update its labels by u* and the weight of the edge between
u* and u, respectively.
32. What is the use of Dijkstra's algorithm?
Dijkstra's algorithm is used to solve the single-source shortest-paths problem: for a given vertex called
the source in a weighted connected graph, find the shortest path to all its other vertices. The single-
source shortest-paths problem asks for a family of paths, each leading from the source to a different
vertex in the graph, though some paths may have edges in common.
33. Define Spanning tree.
Spanning tree of a connected graph G: a connected acyclic subgraph of G that includes all of G's
vertices.
34. What is minimum spanning tree.
Minimum spanning tree of a weighted, connected graph G: a spanning tree of G of the minimum
total weight
35. List the advantages of binary search?
 Less time is consumed
 The processing speed is fast
 The number of iterations is small; it takes at most about log2(n) iterations
 Fibonacci search, a related technique, involves addition and
subtraction rather than division
 It admits a priori analysis, since it can be analyzed before execution.
36. Explain the principle used in quick sort?
It is a partition method: using a particular key, the given table is partitioned into two sub-
tables so that, first, the chosen key will be in its final position in the sorted sequence and, secondly, all
keys to the left of this key will have smaller values and all keys to the right of it will have greater values.
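This partition principle can be sketched with the Lomuto scheme, one common variant that uses the last element as the partitioning key (the names are illustrative):

```python
# Quick sort: partition so the chosen key lands in its final sorted
# position, with smaller keys to its left and greater keys to its right,
# then sort both sub-tables recursively.

def quick_sort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] < pivot:               # smaller keys move to the left part
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]          # the key is now in its final position
    quick_sort(a, lo, i - 1)
    quick_sort(a, i + 1, hi)
    return a
```

On the exam sequence 310, 285, 179, 652, 351, 423, 861, 254, 450, 520 it yields the keys in ascending order.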
37. What is binary search?
The binary search algorithm is one of the most efficient searching techniques; it
requires the list to be in sorted order.
To search for an element of the list, the binary search algorithm splits the list and locates
the middle element of the list.
First compare the middle key K1 with the given key K. If K1 = K, then the element is found.
38. What are the objectives of sorting algorithm?
a. To rearrange the items of a given list
b. To search an element in the list.
39. Define Prim's algorithm.
Prim's algorithm is a greedy and efficient algorithm, which is used to find the minimum
spanning tree of a weighted connected graph.
40. How efficient is prim’s algorithm?
The efficiency of the prim’s algorithm depends on data structure chosen for the graph.
41. What is path compression?
Better efficiency can be obtained by combining either variation of quick union with
path compression. Path compression makes every node encountered during the execution of a find
operation point to the tree's root.
42. Define Dijkstra’s Algorithm?
Dijkstra's algorithm solves the single source shortest path problem of finding shortest
paths from a given vertex (the source) to all the other vertices of a weighted graph or digraph.
Dijkstra's algorithm provides a correct solution for a graph with non-negative weights.
UNIT-4 MCQ’S
1. What is the advantage of recursive approach than an iterative approach?
a) Consumes less memory
b) Less code and easy to implement
c) Consumes more memory
d) More code has to be written
2. Given an input arr = {2,5,7,99,899}; key = 899; What is the level of recursion?
a) 5
b) 2
c) 3
d) 4
3. Given an array arr = {45,77,89,90,94,99,100} and key = 99; what are the mid values(corresponding
array elements) in the first and second levels of recursion?
a) 90 and 99
b) 90 and 94
c) 89 and 99
d) 89 and 94
4. What is the worst case complexity of binary search using recursion?
a) O(nlogn)
b) O(logn)
c) O(n)
d) O(n2)
5. What is the average case time complexity of binary search using recursion?
a) O(nlogn)
b) O(logn)
c) O(n)
d) O(n2)
6. Which of the following is not an application of binary search?
a) To find the lower/upper bound in an ordered sequence
b) Union of intervals
c) Debugging
d) To search in unordered list
7. Which of the following sorting algorithms is the fastest?
a) Merge sort
b) Quick sort
c) Insertion sort
d) Shell sort
8. What is the worst case time complexity of a quick sort algorithm?
a) O(N)
b) O(N log N)
c) O(N2)
d) O(log N)
9. Which is the safest method to choose a pivot element?
a) choosing a random element as pivot
b) choosing the first element as pivot
c) choosing the last element as pivot
d) median-of-three partitioning method
10. Which of the following sorting algorithms is used along with quick sort to sort the sub arrays?
a) Merge sort
b) Shell sort
c) Insertion sort
d) Bubble sort
11. Which of the following method is used for sorting in merge sort?
A) merging
B) partitioning
C) selection
D) exchanging
12. What will be the best case time complexity of merge sort?
A) O(n log n)
B) O(n2)
C) O(n2 log n)
D) O(n log n2)
13. The Knapsack problem is an example of ____________
a) Greedy algorithm
b) 2D dynamic programming
c) 1D dynamic programming
d) Divide and conquer
14. Which of the following methods can be used to solve the Knapsack problem?
a) Brute force algorithm
b) Recursion
c) Dynamic programming
d) Brute force, Recursion and Dynamic Programming
15. You are given a knapsack that can carry a maximum weight of 60. There are 4 items with weights
{20, 30, 40, 70} and values {70, 80, 90, 200}. What is the maximum value of the items you can
carry using the knapsack?
a) 160
b) 200
c) 170
d) 90
16. Dijkstra’s Algorithm is used to solve _____________ problems.
a) All pair shortest path
b) Single source shortest path
c) Network flow
d) Sorting
17. Which of the following is the most commonly used data structure for implementing Dijkstra’s
Algorithm?
a) Max priority queue
b) Stack
c) Circular queue
d) Min priority queue
18. What is the time complexity of Dijkstra's algorithm?
a) O(N)
b) O(N3)
c) O(N2)
d) O(logN)
19. Dijkstra’s Algorithm cannot be applied on ______________
a) Directed and weighted graphs
b) Graphs having negative weight function
c) Unweighted graphs
d) Undirected and unweighted graphs
20. How many times the insert and extract min operations are invoked per vertex?
a) 1
b) 2
c) 3
d) 0
