ADS & A Unit-4 Study Material
General Method:
Divide and Conquer is an algorithmic pattern. The design takes a problem on a large input, breaks the input into smaller pieces, solves the problem on each of the small pieces, and then merges the piecewise solutions into a global solution. This mechanism of solving the problem is called the Divide & Conquer strategy.
A Divide and Conquer algorithm solves a problem using the following three steps: divide the problem into a set of subproblems, conquer each subproblem by solving it recursively, and combine the subproblem solutions to obtain the solution of the original problem.
There are two fundamentals of the Divide & Conquer strategy:
1. Relational Formula
2. Stopping Condition
1. Relational Formula: It is the formula that we generate from the given technique. After generating the formula, we apply the D&C strategy, i.e., we break the problem recursively and solve the resulting subproblems.
2. Stopping Condition: When we break the problem using the Divide & Conquer strategy, we need to know how long to keep applying it. The condition at which we stop the recursion steps of D&C is called the Stopping Condition.
Applications:
The following algorithms are based on the concept of the Divide and Conquer technique:
1. Binary Search: The binary search algorithm is a searching algorithm, also called half-interval search or logarithmic search. It works by comparing the target value with the middle element of a sorted array. If the values differ, the half that cannot contain the target is eliminated, and the search continues on the other half. We again take the middle element of that half and compare it with the target value, and the process keeps repeating until the target value is found. If the remaining half becomes empty, we conclude that the target is not present in the array.
2. Quicksort: It is one of the most efficient sorting algorithms, also known as partition-exchange sort. It starts by selecting a pivot value from the array and then divides the remaining elements into two sub-arrays. The partition is made by comparing each element with the pivot value, checking whether the element is greater or smaller than the pivot, and the sub-arrays are then sorted recursively.
3. Merge Sort: It is a sorting algorithm that sorts an array by making comparisons. It starts by dividing the array into sub-arrays and then recursively sorts each of them. Once they are sorted, it merges them back together.
4. Strassen's Algorithm: It is an algorithm for matrix multiplication, named after Volker Strassen. It has proven to be much faster than the traditional algorithm when working on large matrices.
Advantages of Divide and Conquer:
o Divide and Conquer successfully solves some classic problems, such as the Tower of Hanoi, a mathematical puzzle. Complicated problems for which you have no basic idea become easier, since the approach lessens the effort by dividing the main problem into two halves and then solving them recursively. Algorithms designed this way are often much faster than naive alternatives.
o It efficiently uses cache memory without occupying much space because it solves simple sub-
problems within the cache memory instead of accessing the slower main memory.
o It is more efficient than its counterpart, the Brute Force technique.
o Since these algorithms exhibit parallelism, they can be handled by systems incorporating parallel processing without any modification.
Disadvantages of Divide and Conquer:
o Since most of its algorithms are designed using recursion, high memory management is required.
o An explicit stack may overuse the space.
o It may even crash the system if the recursion depth exceeds the stack capacity of the CPU.
Binary Search
Binary search is a search technique that works efficiently on sorted lists. Hence, to search for an element in a list using the binary search technique, we must ensure that the list is sorted.
Binary search follows the divide and conquer approach: the list is divided into two halves, and the item is compared with the middle element of the list. If a match is found, the location of the middle element is returned. Otherwise, we search in one of the two halves, depending on the result of the comparison.
NOTE: Binary search can be implemented on sorted array elements. If the list elements are not arranged in
a sorted manner, we have first to sort them.
Algorithm
Binary_Search(a, lower_bound, upper_bound, val) // 'a' is the given array, 'lower_bound' is the index of the first array element, 'upper_bound' is the index of the last array element, 'val' is the value to search
To understand the working of the binary search algorithm, let's take a sorted array. It will be easy to understand the working of binary search with an example. There are two methods to implement the binary search algorithm:
o Iterative method
o Recursive method
The recursive method of binary search follows the divide and conquer approach.
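Since the full algorithm body is not reproduced above, here is a minimal recursive sketch in Python following the Binary_Search(a, lower_bound, upper_bound, val) signature given earlier; the sample array is a hypothetical one with nine elements, so that beg = 0 and end = 8 as in the walkthrough below.

def binary_search(a, lower_bound, upper_bound, val):
    # Returns the index of val in the sorted array a, or -1 if absent.
    if lower_bound > upper_bound:
        return -1                      # stopping condition: empty range
    mid = (lower_bound + upper_bound) // 2
    if a[mid] == val:
        return mid                     # match found at the middle element
    elif a[mid] > val:
        return binary_search(a, lower_bound, mid - 1, val)   # search the left half
    else:
        return binary_search(a, mid + 1, upper_bound, val)   # search the right half

a = [10, 12, 24, 29, 39, 40, 51, 56, 69]
print(binary_search(a, 0, len(a) - 1, 56))   # prints 7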
We have to use the following formula to calculate the mid of the array -
mid = (beg + end) / 2
Here, beg = 0 and end = 8, so mid = (0 + 8)/2 = 4. So, 4 is the mid of the array.
Now, the element to search is found. So the algorithm will return the index of the matched element.
Now, let's see the time complexity of Binary search in the best case, average case, and worst case. We will
also see the space complexity of Binary search.
1. Time Complexity
o Best Case Complexity - In Binary search, the best case occurs when the element to search for is found in the first comparison, i.e., when the first middle element itself is the element to be searched. The best-case time complexity of Binary search is O(1).
o Average Case Complexity - The average case time complexity of Binary search is O(log n).
o Worst Case Complexity - In Binary search, the worst case occurs when we have to keep reducing the search space till it has only one element. The worst-case time complexity of Binary search is O(log n).
2. Space Complexity
The space complexity of Binary search is O(1).
Quick sort
Divide: Rearrange the elements and split the array into two sub-arrays around a pivot element, such that each element in the left sub-array is less than or equal to the pivot and each element in the right sub-array is larger than the pivot.
Algorithm:
QUICKSORT (array A, int m, int n)
  if (n > m) then
    i ← a random index from [m, n]
    swap A[i] with A[m]
    o ← PARTITION (A, m, n)
    QUICKSORT (A, m, o - 1)
    QUICKSORT (A, o + 1, n)
Partition Algorithm:
Rules: Like merge sort, quicksort is a Divide and Conquer algorithm. It picks an element as the pivot and partitions the given array around the picked pivot. There are many different versions of quicksort that pick the pivot in different ways:
1. Always pick first element as pivot.
2. Always pick last element as pivot (implemented below)
3. Pick a random element as pivot.
4. Pick median as pivot.
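Option 2 is marked "implemented below", but the partition code itself does not appear in this material. As a sketch, here is a Python version using the Lomuto partition scheme with the last element as pivot (the pseudocode above instead swaps a random pivot to the front):

def quicksort(a, low, high):
    # QuickSort with the last element as pivot (option 2 above).
    if low < high:
        p = partition(a, low, high)
        quicksort(a, low, p - 1)     # sort elements before the pivot
        quicksort(a, p + 1, high)    # sort elements after the pivot

def partition(a, low, high):
    # Lomuto partition: place pivot a[high] at its final sorted position.
    pivot = a[high]
    i = low - 1                      # boundary of the "<= pivot" region
    for j in range(low, high):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[high] = a[high], a[i + 1]
    return i + 1

a = [44, 33, 11, 55, 77, 90, 40, 60, 99, 22, 88]
quicksort(a, 0, len(a) - 1)
print(a)   # [11, 22, 33, 40, 44, 55, 60, 77, 88, 90, 99]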
Example of Quick Sort:
1. 44 33 11 55 77 90 40 60 99 22 88
Let 44 be the pivot element, with scanning done from right to left.
Comparing 44 with the elements on its right; if a right-side element is smaller than 44, then swap them. As 22 is smaller than 44, swap them.
22 33 11 55 77 90 40 60 99 44 88
Now comparing 44 with the elements on its left; if an element is greater than 44, then swap them. As 55 is greater than 44, swap them.
22 33 11 44 77 90 40 60 99 55 88
Recursively repeating steps 1 & 2 until we get two lists, one to the left of the pivot element 44 and one to its right.
22 33 11 40 77 90 44 60 99 55 88
22 33 11 40 44 90 77 60 99 55 88
Now, the elements on the right side of 44 are greater than it, and the elements on the left side are smaller than it.
These sublists are then sorted by the same process as above.
Merging the sorted sublists gives the final sorted list:
11 22 33 40 44 55 60 77 88 90 99
2. Merge sort:
Merge sort is similar to the quick sort algorithm in that it uses the divide and conquer approach to sort the elements. It is one of the most popular and efficient sorting algorithms. It divides the given list into two equal halves, calls itself for the two halves, and then merges the two sorted halves. We have to define the merge() function to perform the merging.
The sub-lists are divided again and again into halves until the list cannot be divided further. Then we combine the pairs of one-element lists into two-element lists, sorting them in the process. The sorted two-element pairs are merged into four-element lists, and so on, until we get the fully sorted list.
Algorithm
In the following algorithm, arr is the given array, beg is the starting element, and end is the last element of
the array.
The important part of the merge sort is the MERGE function. This function performs the merging of two
sorted sub-arrays that are A[beg…mid] and A[mid+1…end], to build one sorted array A[beg…end]. So, the
inputs of the MERGE function are A[], beg, mid, and end.
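The algorithm itself is not reproduced in this material, so the following Python sketch uses the same names (arr, beg, mid, end) described above.

def merge_sort(arr, beg, end):
    # Sorts arr[beg..end] in place using divide and conquer.
    if beg < end:
        mid = (beg + end) // 2
        merge_sort(arr, beg, mid)        # sort the left half
        merge_sort(arr, mid + 1, end)    # sort the right half
        merge(arr, beg, mid, end)        # combine the two sorted halves

def merge(arr, beg, mid, end):
    # Merges the sorted sub-arrays arr[beg..mid] and arr[mid+1..end].
    left, right = arr[beg:mid + 1], arr[mid + 1:end + 1]
    i = j = 0
    k = beg
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            arr[k] = left[i]; i += 1
        else:
            arr[k] = right[j]; j += 1
        k += 1
    arr[k:end + 1] = left[i:] + right[j:]   # copy whichever half remains

arr = [12, 31, 25, 8, 32, 17, 40, 42]
merge_sort(arr, 0, len(arr) - 1)
print(arr)   # [8, 12, 17, 25, 31, 32, 40, 42]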
To understand the working of the merge sort algorithm, let's take the unsorted array {12, 31, 25, 8, 32, 17, 40, 42}. It will be easier to understand merge sort via an example.
According to the merge sort, first divide the given array into two equal halves. Merge sort keeps dividing the
list into equal parts until it cannot be further divided.
As there are eight elements in the given array, so it is divided into two arrays of size 4.
Now, again divide these two arrays into halves. As they are of size 4, so divide them into new arrays of size 2.
Now, again divide these arrays to get the atomic value that cannot be further divided.
In combining, first compare the element of each array and then combine them into another array in sorted
order.
So, first compare 12 and 31, both are in sorted positions. Then compare 25 and 8, and in the list of two
values, put 8 first followed by 25. Then compare 32 and 17, sort them and put 17 first followed by 32. After
that, compare 40 and 42, and place them sequentially.
In the next iteration of combining, we compare the arrays with two data values and merge them into arrays of four values in sorted order.
Now, there is a final merging of the arrays. After the final merging of above arrays, the array will look like -
Now, the array is completely sorted.
3. Strassen's Matrix Multiplication:
Let A and B be square matrices of size N and C = A x B. In the simple divide and conquer method, each matrix is divided into four sub-matrices of size N/2 x N/2 (for A, call them a, b, c and d).
In this method, we do 8 multiplications of matrices of size N/2 x N/2 and 4 additions. Addition of two matrices takes O(N²) time. So the time complexity can be written as
T(N) = 8T(N/2) + O(N²), which solves to O(N³).
The simple divide and conquer method thus also leads to O(N³); can there be a better way?
In the above divide and conquer method, the main contributor to the high time complexity is the 8 recursive calls. The idea of Strassen's method is to reduce the number of recursive calls to 7. Strassen's method is similar to the simple divide and conquer method in that it also divides the matrices into sub-matrices of size N/2 x N/2, but the four sub-matrices of the result are calculated using the following formulae. Writing A = [[a, b], [c, d]] and B = [[e, f], [g, h]] in terms of their quadrants:
p1 = a(f - h)    p2 = (a + b)h    p3 = (c + d)e    p4 = d(g - e)
p5 = (a + d)(e + h)    p6 = (b - d)(g + h)    p7 = (a - c)(e + f)
C11 = p5 + p4 - p2 + p6    C12 = p1 + p2    C21 = p3 + p4    C22 = p1 + p5 - p3 - p7
Addition and subtraction of two matrices takes O(N²) time, so the time complexity can be written as T(N) = 7T(N/2) + O(N²), which solves to approximately O(N^2.81) (more precisely, O(N^log2 7)).
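As a sketch, the formulae above translate into the following Python function, assuming for simplicity that N is a power of two; NumPy is used here only for matrix slicing and arithmetic.

import numpy as np

def strassen(A, B):
    # Strassen's matrix multiplication; assumes N is a power of 2.
    n = A.shape[0]
    if n == 1:
        return A * B
    k = n // 2
    a, b, c, d = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    e, f, g, h = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    p1 = strassen(a, f - h)          # only 7 recursive multiplications
    p2 = strassen(a + b, h)
    p3 = strassen(c + d, e)
    p4 = strassen(d, g - e)
    p5 = strassen(a + d, e + h)
    p6 = strassen(b - d, g + h)
    p7 = strassen(a - c, e + f)
    # combine the products into the four quadrants of C
    top = np.hstack((p5 + p4 - p2 + p6, p1 + p2))
    bottom = np.hstack((p3 + p4, p1 + p5 - p3 - p7))
    return np.vstack((top, bottom))

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(strassen(A, B))   # same as A @ B: [[19 22] [43 50]]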
Greedy Algorithm
The greedy method is one of the strategies, like divide and conquer, used to solve problems. This method is used for solving optimization problems. An optimization problem is a problem that demands either a maximum or a minimum result. Let's understand it through some terms.
The greedy method is the simplest and most straightforward approach. It is not a single algorithm but a technique. Its main characteristic is that each decision is taken on the basis of the currently available information, without worrying about the effect of that decision in the future.
This technique is basically used to determine a feasible solution, which may or may not be optimal. A feasible solution is a subset that satisfies the given criteria. The optimal solution is the best and most favorable solution among the feasible ones: if more than one solution satisfies the given criteria, all of them are feasible, whereas the optimal solution is the single best solution among them.
o To construct the solution in an optimal way, this algorithm creates two sets where one set contains all
the chosen items, and another set contains the rejected items.
o A Greedy algorithm makes good local choices in the hope that the solution should be either feasible or
optimal.
o Candidate set: The set of elements from which a solution is created is known as the candidate set.
o Selection function: This function is used to choose the candidate or subset which can be added in the
solution.
o Feasibility function: A function that is used to determine whether the candidate or subset can be
used to contribute to the solution or not.
o Objective function: A function is used to assign the value to the solution or the partial solution.
o Solution function: This function is used to indicate whether a complete solution has been reached or not.
In the general greedy algorithm, the solution is initially empty. We pass the set of candidates and the number of elements to the algorithm. Inside the loop, we select the elements one by one and check whether adding each one keeps the solution feasible. If it does, we perform the union; a sketch is given below.
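A minimal Python sketch of that skeleton follows; select, feasible, and union are problem-specific helpers, and both the names and the tiny example are illustrative assumptions, not from the original material.

def greedy(candidates, select, feasible, union):
    # Generic greedy skeleton built from the components listed above.
    solution = []                     # initially the solution is empty
    candidates = list(candidates)
    while candidates:
        x = select(candidates)        # selection function: locally best choice
        candidates.remove(x)
        if feasible(solution, x):     # feasibility function
            solution = union(solution, x)
    return solution

# Hypothetical example: pick the smallest items while the total stays <= 10
items = [4, 8, 5, 1, 7]
result = greedy(items,
                select=min,
                feasible=lambda s, x: sum(s) + x <= 10,
                union=lambda s, x: s + [x])
print(result)   # [1, 4, 5]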
P:A→B
The problem is that we have to travel from A to B. There are various ways to go from A to B: on foot, by car, bike, train, aeroplane, etc. There is a constraint on the journey: we have to complete it within 12 hrs. Only the train and the aeroplane can cover this distance within 12 hrs. So, among the many solutions to this problem, only two satisfy the constraint.
Now suppose we also have to cover the journey at minimum cost. Since the cost must be as small as possible, this is a minimization problem. We have two feasible solutions, one by train and one by air. Since travelling by train gives the minimum cost, it is the optimal solution. An optimal solution is also a feasible solution, but it is the one providing the best result, here the minimum cost. There is only one optimal solution.
A problem that requires either a minimum or a maximum result is known as an optimization problem. The greedy method is one of the strategies used for solving optimization problems.
Greedy algorithm makes decisions based on the information available at each phase without considering the
broader problem. So, there might be a possibility that the greedy solution does not give the best solution for
every problem.
It makes the locally optimal choice at each stage with the intent of finding a global optimum. Let's understand this through an example.
We have to travel from the source to the destination at minimum cost. Suppose we have three feasible solutions with path costs 10, 20, and 5. Since 5 is the minimum cost path, it is the optimal solution. This is the local optimum, and in this way we take the local optimum at each stage, aiming to arrive at the global optimal solution.
Applications:
1. Knapsack Problem
2. Job Sequencing with Deadlines
3. Minimum cost spanning tree
4. Single source shortest path
1. Knapsack Problem
The Knapsack Problem is a famous Dynamic Programming Problem that falls in the optimization category.
It derives its name from a scenario where, given a set of items with specific weights and assigned values, the
goal is to maximize the value in a knapsack while remaining within the weight constraint.
Each item can only be selected once, as we don’t have multiple quantities of any item.
In the below example, the weights of different honeycombs and the values associated with them are
provided. The goal is to maximize the value of honey that can be fit in the bear’s knapsack.
Example
Let's take the example of Mary, who wants to carry some fruits in her knapsack and maximize the profit she makes. She should pick them such that the total weight does not exceed the bag's capacity while the value is maximized.
Here are the weights and profits associated with the different fruits:
Weights: { 2, 3, 1, 4 }
Profits: { 4, 5, 3, 7 }
Knapsack Capacity: 5
Banana + Melon
💰Profit = 10
Banana and Melon is the best combination, as it gives us the maximum profit (10) and the total weight does
not exceed the knapsack’s capacity (5).
The problem can be tackled using various approaches: brute force, top-down with
memoization and bottom-up are all potentially viable approaches to take.
The latter two approaches (top-down with memoization and bottom-up) make use of Dynamic Programming.
In more complex situations, these would likely be the much more efficient approaches to use.
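As a sketch of the bottom-up dynamic programming approach just mentioned, applied to Mary's example (the function name and table layout are illustrative):

def knapsack(weights, profits, capacity):
    # dp[i][c] = best profit using the first i items with capacity c.
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                  # option 1: skip item i-1
            if weights[i - 1] <= c:                  # option 2: take item i-1
                dp[i][c] = max(dp[i][c],
                               dp[i - 1][c - weights[i - 1]] + profits[i - 1])
    return dp[n][capacity]

print(knapsack([2, 3, 1, 4], [4, 5, 3, 7], 5))   # prints 10 (Banana + Melon)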
2. Job Sequencing with Deadlines
Problem: Consider the following jobs, each with a deadline and a profit that is earned only if the job completes within its deadline.
Job      J1   J2   J3   J4   J5
Deadline  2    1    3    2    1
Profit   60  100   20   40   20
Solution
To solve this problem, the given jobs are sorted according to their profit in descending order. Hence, after sorting, the jobs are ordered as shown in the following table.
Job J2 J1 J4 J3 J5
Deadline 1 2 2 3 1
Profit 100 60 40 20 20
From this set of jobs, first we select J2, as it can be completed within its deadline and contributes maximum
profit.
Next, J1 is selected as it gives more profit compared to J4.
In the next time slot, J4 cannot be selected as its deadline is over; hence J3 is selected, as it executes within its deadline.
The job J5 is discarded as it cannot be executed within its deadline.
Thus, the solution is the sequence of jobs (J2, J1, J3), which are executed within their deadlines and give the maximum profit.
Total profit of this sequence is 100 + 60 + 20 = 180.
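The following Python sketch implements this greedy strategy (sort by profit, then place each job in the latest free slot before its deadline, as the steps below describe) and reproduces the result of this example.

def job_sequencing(jobs):
    # jobs: list of (name, deadline, profit) tuples.
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)   # Step-01: by profit, descending
    max_deadline = max(deadline for _, deadline, _ in jobs)
    slots = [None] * max_deadline                           # Step-02: one slot per time unit
    for name, deadline, profit in jobs:
        for t in range(deadline - 1, -1, -1):               # Step-03: latest free slot first
            if slots[t] is None:
                slots[t] = name
                break
    return [job for job in slots if job is not None]

jobs = [('J1', 2, 60), ('J2', 1, 100), ('J3', 3, 20), ('J4', 2, 40), ('J5', 1, 20)]
print(job_sequencing(jobs))   # ['J2', 'J1', 'J3'] - total profit 180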
Greedy Algorithm-
Greedy Algorithm is adopted to determine how the next job is selected for an optimal solution.
The greedy algorithm described below always gives an optimal solution to the job sequencing problem-
Step-01: Sort all the given jobs in decreasing order of their profit.
Step-02: Check the value of the maximum deadline and draw a Gantt chart where the maximum time on the chart equals that maximum deadline.
Step-03: Pick up the jobs one by one and put each job on the Gantt chart as far right as possible, making sure it still completes before its deadline.
Example: Consider the following six jobs and their deadlines-
Jobs      J1  J2  J3  J4  J5  J6
Deadlines  5   3   3   2   4   2
Step-01: Sorting the jobs in decreasing order of their profit gives-
Jobs      J4  J1  J3  J2  J5  J6
Deadlines  2   5   3   3   4   2
Step-02: The maximum deadline is 5, so we draw a Gantt chart with maximum time 5 on it.
Step-03: We take job J4. Since its deadline is 2, we place it in the first empty cell before deadline 2 as-
Step-04:
We take job J1.
Since its deadline is 5, so we place it in the first empty cell before deadline 5 as-
Step-05: We take job J3. Since its deadline is 3, we place it in the first empty cell before deadline 3 as-
Step-06:
We take job J2.
Since its deadline is 3, so we place it in the first empty cell before deadline 3.
Since the second and third cells are already filled, so we place job J2 in the first cell as-
Step-07: We take job J5. Since its deadline is 4, we place it in the first empty cell before deadline 4 as-
Now,
The only job left is job J6 whose deadline is 2.
All the slots before deadline 2 are already occupied.
Thus, job J6 can not be completed.
Part-01: The optimal schedule is J2, J4, J3, J5, J1. This is the required order in which the jobs must be completed.
Part-02:
All the jobs are not completed in optimal schedule.
This is because job J6 could not be completed within its deadline.
Part-03: Maximum earned profit = sum of the profits of all the jobs in the optimal schedule = profit of J2 + profit of J4 + profit of J3 + profit of J5 + profit of J1.
3. Minimum Cost Spanning Tree
A spanning tree is a subset of a graph G which covers all the vertices with the minimum possible number of edges. Hence, a spanning tree does not have cycles and cannot be disconnected.
By this definition, we can conclude that every connected and undirected graph G has at least one spanning tree. A disconnected graph does not have any spanning tree, as it cannot be spanned to all its vertices.
We found three spanning trees from one complete graph. A complete undirected graph can have at most n^(n-2) spanning trees, where n is the number of nodes. In the above example, n = 3, hence 3^(3-2) = 3 spanning trees are possible.
1. Kruskal’s Algorithm
Kruskal's Algorithm is used to find the minimum spanning tree for a connected weighted graph. The main
target of the algorithm is to find the subset of edges by using which we can traverse every vertex of the
graph. It follows the greedy approach that finds an optimum solution at every stage instead of focusing on a
global optimum.
In Kruskal's algorithm, we start from the edges with the lowest weight and keep adding edges until the goal is reached. The steps to implement Kruskal's algorithm are listed as follows -
o Sort all the edges in ascending order of their weights.
o Pick the edge with the smallest weight; if including it does not form a cycle with the spanning tree constructed so far, add it, otherwise discard it.
o Repeat until the tree contains (V - 1) edges, where V is the number of vertices.
Now, let's see the working of Kruskal's algorithm using an example. It will be easier to understand Kruskal's
algorithm using an example.
Edge AB AC AD AE BC CD DE
Weight 1 7 10 5 3 4 2
Now, sort the edges given above in the ascending order of their weights.
Edge AB DE BC CD AE AC AD
Weight 1 2 3 4 5 7 10
Step 1 - Add the edge AB with weight 1 to the MST, as it is the smallest edge.
Step 2 - Add the edge DE with weight 2 to the MST, as it is not creating any cycle or loop.
Step 3 - Add the edge BC with weight 3 to the MST, as it is not creating any cycle or loop.
Step 4 - Now, add the edge CD with weight 4 to the MST, as it is not forming a cycle.
Step 5 - After that, pick the edge AE with weight 5. Including this edge will create the cycle, so discard it.
Step 6 - Pick the edge AC with weight 7. Including this edge will create the cycle, so discard it.
Step 7 - Pick the edge AD with weight 10. Including this edge will also create the cycle, so discard it.
So, the final minimum spanning tree obtained from the given weighted graph by using Kruskal's algorithm is -
The cost of the MST is = AB + DE + BC + CD = 1 + 2 + 3 + 4 = 10.
Now, the number of edges in the above tree equals the number of vertices minus 1. So, the algorithm stops
here.
o Time Complexity
The time complexity of Kruskal's algorithm is O(E log E) or, equivalently, O(E log V), where E is the number of edges and V is the number of vertices.
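As a sketch, here is Kruskal's algorithm in Python using a simple union-find structure for cycle detection, run on the example graph above (vertices A-E mapped to indices 0-4):

def kruskal(n, edges):
    # n: number of vertices (0..n-1); edges: list of (weight, u, v).
    parent = list(range(n))
    def find(x):                      # find set representative (path compression)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst, cost = [], 0
    for w, u, v in sorted(edges):     # examine edges in ascending order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # different components: no cycle is formed
            parent[ru] = rv
            mst.append((u, v, w))
            cost += w
    return mst, cost

A, B, C, D, E = range(5)
edges = [(1, A, B), (7, A, C), (10, A, D), (5, A, E), (3, B, C), (4, C, D), (2, D, E)]
print(kruskal(5, edges))   # MST edges AB, DE, BC, CD with total cost 10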
2. Prim’s Algorithm
Prim's Algorithm is a greedy algorithm that is used to find the minimum spanning tree from a graph. Prim's
algorithm finds the subset of edges that includes every vertex of the graph such that the sum of the weights
of the edges can be minimized.
Prim's algorithm starts with a single node and explores all the adjacent nodes with all the connecting edges at every step. The edges with the minimal weights that cause no cycles in the graph get selected.
Prim's algorithm is a greedy algorithm that starts from one vertex and continues to add the edges with the smallest weight until the goal is reached. The steps to implement Prim's algorithm are given as follows -
o First, choose an arbitrary starting vertex for the tree.
o Find the minimum-weight edge that connects the tree to a vertex not yet in the tree, and add that vertex to the tree.
o Repeat until all the vertices of the graph are in the tree.
Now, let's see the working of prim's algorithm using an example. It will be easier to understand the prim's
algorithm using an example.
Step 1 - First, we have to choose a vertex from the above graph. Let's choose B.
Step 2 - Now, we have to choose and add the shortest edge from vertex B. There are two edges from vertex B
that are B to C with weight 10 and edge B to D with weight 4. Among the edges, the edge BD has the
minimum weight. So, add it to the MST.
Step 3 - Now, again choose the edge with the minimum weight among the edges leaving the tree. In this case, the edges DE and CD are such edges. So, select the edge DE and add it to the MST.
Step 4 - Now, select the edge CD, and add it to the MST.
Step 5 - Now, choose the edge CA. Here, we cannot select the edge CE as it would create a cycle in the graph. So, choose the edge CA and add it to the MST.
So, the graph produced in step 5 is the minimum spanning tree of the given graph. The cost of the MST is
given below -
Now, let's see the time complexity of Prim's algorithm. The running time of Prim's algorithm depends upon the data structure used for the graph and for ordering the edges. The table below shows one choice -
o Time Complexity
Data structure used for the minimum edge weight | Time Complexity
Adjacency matrix, linear searching              | O(|V|²)
Prim's algorithm can be simply implemented by using the adjacency matrix or adjacency list graph representation; adding the edge with the minimum weight then requires linearly searching an array of weights, giving O(|V|²) running time. It can be improved further by using a heap to find the minimum-weight edges in the inner loop of the algorithm.
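Here is a heap-based Python sketch of Prim's algorithm (the inner-loop improvement just mentioned). The example graph is only partly specified in the text (edges BC = 10 and BD = 4 are given), so the remaining weights below are assumptions chosen to reproduce the step order of the walkthrough.

import heapq

def prim(graph, start):
    # graph: dict mapping vertex -> list of (weight, neighbour) pairs.
    visited = {start}
    heap = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(heap)
    mst, cost = [], 0
    while heap:
        w, u, v = heapq.heappop(heap)    # cheapest edge leaving the tree
        if v in visited:
            continue                     # both ends in the tree: would form a cycle
        visited.add(v)
        mst.append((u, v, w))
        cost += w
        for w2, x in graph[v]:
            if x not in visited:
                heapq.heappush(heap, (w2, v, x))
    return mst, cost

# Undirected example graph; weights other than BC=10 and BD=4 are assumed.
graph = {
    'A': [(1, 'C')],
    'B': [(10, 'C'), (4, 'D')],
    'C': [(10, 'B'), (3, 'D'), (1, 'A'), (6, 'E')],
    'D': [(4, 'B'), (3, 'C'), (2, 'E')],
    'E': [(2, 'D'), (6, 'C')],
}
print(prim(graph, 'B'))   # selects BD, DE, DC, CA, as in the steps above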
4. Single Source Shortest Path
Introduction:
In a shortest-paths problem, we are given a weighted, directed graph G = (V, E), with weight function w: E → R mapping edges to real-valued weights. The weight of a path p = (v0, v1, ..., vk) is the sum of the weights of its constituent edges:
w(p) = w(v0, v1) + w(v1, v2) + ... + w(vk-1, vk)
We define the shortest-path weight from u to v by δ(u, v) = min{w(p) : p is a path from u to v} if there is a path from u to v, and δ(u, v) = ∞ otherwise.
The shortest path from vertex s to vertex t is then defined as any path p with weight w (p) = δ(s,t).
The breadth-first-search algorithm is a shortest path algorithm that works on unweighted graphs, that is, graphs in which each edge can be considered to have unit weight.
In a Single Source Shortest Paths Problem, we are given a Graph G = (V, E), we want to find the shortest path
from a given source vertex s ∈ V to every vertex v ∈ V.
Variants:
o Single-destination shortest-paths problem: Find the shortest path to a given destination vertex t from every vertex v. By reversing the direction of each edge in the graph, we can reduce this problem to a single-source problem.
o Single-pair shortest-path problem: Find the shortest path from u to v for given vertices u and v. If we solve the single-source problem with source vertex u, we solve this problem as well. Moreover, no algorithms for this problem are known that run asymptotically faster than the best single-source algorithms in the worst case.
o All-pairs shortest-paths problem: Find the shortest path from u to v for every pair of vertices u and v. Running a single-source algorithm once from each vertex solves this problem, but it can generally be solved faster, and its structure is of interest in its own right.
If some path from s to v contains a negative-cost cycle, then the shortest path does not exist. Otherwise, there exists a shortest s-v path that is simple.
Dijkstra's Algorithm solves the Single Source Shortest Path (SSSP) problem. It is used to find the shortest path from a source node to every other node in a graph.
The graph is a widely accepted data structure to represent a distance map; the distances between cities are effectively represented using a graph.
Dijkstra proposed an efficient way to find the single source shortest path from the weighted graph.
For a given source vertex s, the algorithm finds the shortest path to every other vertex v in the graph.
Assumption: Weights of all edges are non-negative.
Steps of the Dijkstra’s algorithm are explained here:
1. Initialize the distance of the source vertex to zero and the distances of all other vertices to infinity.
2. Set the source node as the current node and put all remaining nodes in the unvisited vertex list. Compute the tentative distance of every immediate neighbour of the current node.
3. If the newly computed value is smaller than the old value, then update it.
For example, suppose C is the current node, its distance from the source S is d(S, C) = 5, and the edge (C, N) has weight 3:
o If the old estimate is d(S, N) = 11, then d(S, C) + d(C, N) = 5 + 3 = 8 < 11, so we relax the edge (S, N) and update d(S, N) = 8.
o If the old estimate is d(S, N) = 7, then d(S, C) + d(C, N) = 8 > 7, so we don't update d(S, N).
(Figure: weight updating in Dijkstra's algorithm)
4. When all the neighbours of the current node are explored, mark it as visited and remove it from the unvisited vertex list. Then select the vertex from the unvisited vertex list with the minimum distance as the new current node and repeat the procedure.
5. Stop when the destination node is visited or when the unvisited vertex list becomes empty.
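A minimal heap-based Python sketch of these steps, returning the dist and parent (π) values shown in the tables below:

import heapq

def dijkstra(graph, source):
    # graph: dict mapping vertex -> list of (neighbour, weight) pairs.
    dist = {v: float('inf') for v in graph}    # step 1: all distances infinity
    parent = {v: None for v in graph}
    dist[source] = 0                           # except the source itself
    heap = [(0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)             # unvisited vertex with minimum distance
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u]:                  # relax every outgoing edge of u
            if d + w < dist[v]:                # smaller value found: update it
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, parent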
Examples
Problem: Suppose Dijkstra’s algorithm is run on the following graph, starting at node A
1. Draw a table showing the intermediate distance values of all the nodes at each iteration of the
algorithm.
2. Show the final shortest path tree.
Solution:
Initialization:
dist[source] = 0 ⇒ dist[A] = 0
Vertex u A B C D E F G H
dist[u] 0 ∞ ∞ ∞ ∞ ∞ ∞ ∞
Iteration 1: the current node is A.
Adjacent[A] = {B, E, F}
dist[B] = 0 + 1 = 1
dist[E] = 0 + 4 = 4
dist[F] = 0 + 8 = 8
Vertex u A B C D E F G H
dist[u] 0 1 ∞ ∞ 4 8 ∞ ∞
Iteration 2: the current node is B.
Adjacent[B] = {C, F, G}
dist[C] = 1 + 2 = 3
dist[F]: 1 + 6 = 7 < 8, so update dist[F] = 7
dist[G] = 1 + 6 = 7
Vertex u A B C D E F G H
dist[u] 0 1 3 ∞ 4 7 7 ∞
Iteration 3: the current node is C.
Adjacent[C] = {D, G}
dist[D] = 3 + 1 = 4
dist[G]: 3 + 2 = 5 < 7, so update dist[G] = 5
Vertex u A B C D E F G H
dist[u] 0 1 3 4 4 7 5 ∞
Iteration 4: the current node is E.
Adjacent[E] = {F}
dist[F]: 4 + 5 = 9 > 7, so dist[F] is not updated.
Vertex u A B C D E F G H
dist[u] 0 1 3 4 4 7 5 ∞
Iteration 5: the current node is D.
Adjacent[D] = {G, H}
dist[G]: 4 + 1 = 5, which is not smaller than the current value 5, so no update
dist[H] = 4 + 4 = 8
Vertex u A B C D E F G H
dist[u] 0 1 3 4 4 7 5 8
π [u] NIL A B C A B C D
Iteration 6: the current node is G.
Adjacent[G] = {F, H}
dist[F]: 5 + 1 = 6 < 7, so update dist[F] = 6
dist[H]: 5 + 1 = 6 < 8, so update dist[H] = 6
Vertex u A B C D E F G H
dist[u] 0 1 3 4 4 6 5 6
π [u] NIL A B C A G C G
Iteration 7: the current node is F.
Adjacent[F] = { }, so no distances change.
Vertex u A B C D E F G H
dist[u] 0 1 3 4 4 6 5 6
π [u] NIL A B C A G C G
Iteration 8: the current node is H.
Adjacent[H] = { }, so no distances change.
Vertex u A B C D E F G H
dist[u] 0 1 3 4 4 6 5 6
π [u] NIL A B C A G C G
We can easily derive the shortest path tree for the given graph from the above table. In the table, π[u] indicates the parent node of vertex u. The shortest path tree is shown in the following figure.
10 Marks Questions
1. Explain in detail the quick sorting method. Provide a complete analysis of quick sort. May-2020, Dec-2018
2. Describe and write the quick sort algorithm. Show how quick sort sorts the following sequence of keys: 310, 285, 179, 652, 351, 423, 861, 254, 450, 520. Analyze the time complexity of the algorithm. Dec-2017
3. What are greedy algorithms? What are their characteristics? Explain any greedy algorithm with an example. May-2020, May-2017
4. What are greedy algorithms? What are their characteristics? Explain any greedy algorithm with an example. Dec-2018
5. Sort the list 415, 213, 700, 515, 712, 715 using the merge sort algorithm. Also explain the time complexity of the merge sort algorithm. May-2019, May-2016
6. Explain the algorithm of merge sort. Compute the time complexity of merge sort. Also sort the list 415, 213, 700, 515, 712, 715 using merge sort. Dec-2016
7. Explain Strassen's algorithm for matrix multiplication with the help of an example. May-2019, May-2017, Dec-2016
8. Discuss the Strassen's matrix multiplication algorithm in detail. Also, give an illustrative example to explain the efficiency achieved through this algorithm. Dec-2018
b. Greedy algorithm
10. Using Dijkstra’s algorithm find the shortest path from A to D for the following graph. May-2019, May-
2017
(Figure for Question 10: a weighted graph on vertices A, B, C, D, E, F, G and H.)
11. Describe Dijkstra's algorithm to solve the single-source shortest path problem. What is its time complexity? Dec-2017
12. Write an algorithm based on divide-and-conquer strategy to search an element in a given list. Assume
that the elements of list are in sorted order. May-2018, Dec-2017
13. Define spanning tree. Write Kruskal’s algorithm for finding minimum cost spanning tree. Describe
how Kruskal’s algorithm is different from Prim’s algorithm for finding minimum cost spanningtree.
May-2018, Dec-2019,Dec-2017
14. Suppose we use Dijkstra's greedy, single source shortest path algorithm on an undirected graph. What constraint must we have for the algorithm to work and why?
15. Compare the various programming paradigms such as divide-and-conquer, dynamic programming and greedy approach. May-2018
16. Explain finding maximum and minimum using divide and conquer with suitable example?