DAA-Important QNA's
1. For the given set of numbers {10, 16, 18, 23, 34, 68, 76, 88}, apply binary search, showing the execution steps, to find the positions of the key elements 18, 5, and 88.
Sure, I can walk you through the binary search algorithm for the given set of numbers {10,
16, 18, 23, 34, 68, 76, 88} to find the position of key elements 18, 5, and 88.
Binary search is a search algorithm that works by repeatedly dividing the search interval in
half. The idea is to start in the middle of the array, compare the middle element with the
target element, and determine which half of the array to search in next. The search
continues on the selected half of the array, and the process repeats until the target element
is found or it is determined that the element is not in the array.
Here are the steps for finding the position of key elements using binary search:
1. Sort the given set of numbers in ascending order if it is not already sorted. The sorted set
of numbers is {10, 16, 18, 23, 34, 68, 76, 88}.
2. Find the middle index of the current search interval by taking the floor of the average of the first and last indices. For the full array:
middle = floor((0 + 7) / 2) = 3
3. Compare the target element with the middle element:
a. If the target element is equal to the middle element, return the index of the middle element as the position of the target element.
b. If the target element is less than the middle element, search the left half of the array.
c. If the target element is greater than the middle element, search the right half of the
array.
4. Repeat steps 2 and 3 on the selected half of the array until the target element is found or
it is determined that the element is not in the array.
Let's now apply these steps to find the positions of key elements:
1. Key element: 18
Step 2: middle = (0 + 7) / 2 = 3, so the middle element is 23.
Step 3: 18 < 23, so search the left half (indices 0 to 2).
Step 2: middle = (0 + 2) / 2 = 1, so the middle element is 16.
Step 3: 18 > 16, so search the right half (indices 2 to 2).
Step 2: middle = (2 + 2) / 2 = 2, so the middle element is 18.
Step 3: 18 = 18, so the key 18 is found at index 2 (the 3rd position).
2. Key element: 5
Step 2: middle = (0 + 7) / 2 = 3, so the middle element is 23.
Step 3: 5 < 23, so search the left half (indices 0 to 2).
Step 2: middle = (0 + 2) / 2 = 1, so the middle element is 16.
Step 3: 5 < 16, so search the left half (indices 0 to 0).
Step 2: middle = (0 + 0) / 2 = 0, so the middle element is 10.
Step 3: 5 < 10, and there are no elements to the left of index 0, so the key 5 is not present in the array (return -1).
3. Key element: 88
Step 2: middle = (0 + 7) / 2 = 3, so the middle element is 23.
Step 3: 88 > 23, so search the right half (indices 4 to 7).
Step 2: middle = (4 + 7) / 2 = 5, so the middle element is 68.
Step 3: 88 > 68, so search the right half (indices 6 to 7).
Step 2: middle = (6 + 7) / 2 = 6, so the middle element is 76.
Step 3: 88 > 76, so search the right half (indices 7 to 7).
Step 2: middle = (7 + 7) / 2 = 7, so the middle element is 88.
Step 3: 88 = 88, so the key 88 is found at index 7 (the 8th position).
2. Develop non-recursive and recursive algorithms to perform binary search.
Sure, I can provide you with non-recursive and recursive algorithms to perform binary search.
Non-recursive (iterative) algorithm:
```
def binary_search(array, target):
    start, end = 0, len(array) - 1
    while start <= end:
        mid = (start + end) // 2
        if array[mid] == target:
            return mid
        elif array[mid] < target:
            start = mid + 1
        else:
            end = mid - 1
    return -1
```
The above algorithm performs binary search in a sorted array. It takes two inputs, an array
and a target element, and returns the index of the target element in the array if it is present,
otherwise it returns -1.
Recursive algorithm:
```
def binary_search_recursive(array, target, start, end):
    if start > end:
        return -1
    mid = (start + end) // 2
    if array[mid] == target:
        return mid
    elif array[mid] < target:
        return binary_search_recursive(array, target, mid + 1, end)
    else:
        return binary_search_recursive(array, target, start, mid - 1)
```
This recursive version searches the subarray between indices start and end, and returns -1 once the search interval becomes empty.
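As a quick, hedged check of the two routines above (using the illustrative names binary_search and binary_search_recursive defined in them), they can be applied to the array from question 1:
```
numbers = [10, 16, 18, 23, 34, 68, 76, 88]
print(binary_search(numbers, 18))                                 # 2  (18 is at index 2)
print(binary_search(numbers, 5))                                  # -1 (5 is not present)
print(binary_search_recursive(numbers, 88, 0, len(numbers) - 1))  # 7  (88 is at index 7)
```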
3. Develop the algorithm that finds the minimum and maximum in a given set of numbers using
divide and conquer
Sure, I can provide you with an algorithm that finds the minimum and maximum in a given
set of numbers using divide and conquer.
The algorithm is based on the principle of divide and conquer, which means that we divide
the problem into smaller sub-problems and solve them independently. Here, we divide the
set of numbers into two parts, find the minimum and maximum in each part recursively, and
then combine the results to find the minimum and maximum of the entire set.
Algorithm to find the minimum and maximum in a given set of numbers using divide and
conquer:
1. If the set has only one element, return that element as both the minimum and maximum.
2. If the set has two elements, return the smaller element as the minimum and the larger
element as the maximum.
3. Divide the set into two equal parts.
4. Recursively find the minimum and maximum in each part.
5. Combine the results to find the minimum and maximum of the entire set.
- The minimum of the entire set is the minimum of the minimums of the two parts.
- The maximum of the entire set is the maximum of the maximums of the two parts.
6. Return the minimum and maximum of the entire set.
Pseudocode:
```
def find_min_max(numbers):
    n = len(numbers)
    # base cases
    if n == 1:
        return numbers[0], numbers[0]
    elif n == 2:
        if numbers[0] < numbers[1]:
            return numbers[0], numbers[1]
        else:
            return numbers[1], numbers[0]
    # divide the set into two halves
    mid = n // 2
    left_min, left_max = find_min_max(numbers[:mid])
    right_min, right_max = find_min_max(numbers[mid:])
    # combine the results
    return min(left_min, right_min), max(left_max, right_max)
```
4. Apply the divide and conquer method to find the minimum and maximum in the given set of numbers {56, 105, 76, 45, 89, -5, 33, 67, 38}.
Sure, I can apply the divide and conquer method to find the minimum and maximum in the given set of numbers.
Using the algorithm from the previous answer, we can find the minimum and maximum of the set {56, 105, 76, 45, 89, -5, 33, 67, 38} as follows:
1. The set has more than one element, so we proceed with the algorithm.
2. We divide the set into two parts: {56, 105, 76, 45} and {89, -5, 33, 67, 38}.
3. We recursively find the minimum and maximum in each part:
- For the first part, we divide it into {56, 105} and {76, 45} and recursively find the
minimum and maximum of each part:
- For {56, 105}, the minimum is 56 and the maximum is 105.
- For {76, 45}, the minimum is 45 and the maximum is 76.
- For the second part, we divide it into {89, -5, 33} and {67, 38} and recursively find the
minimum and maximum of each part:
- For {89, -5, 33}, the minimum is -5 and the maximum is 89.
- For {67, 38}, the minimum is 38 and the maximum is 67.
4. We combine the results, first within each half and then across the two halves:
- For the first part, the minimum is min(56, 45) = 45 and the maximum is max(105, 76) = 105.
- For the second part, the minimum is min(-5, 38) = -5 and the maximum is max(89, 67) = 89.
- The minimum of the entire set is the minimum of the minimums of the two parts, which is min(45, -5) = -5.
- The maximum of the entire set is the maximum of the maximums of the two parts, which is max(105, 89) = 105.
5. We return the minimum and maximum of the entire set, which are -5 and 105,
respectively.
Therefore, the minimum and maximum of the given set of numbers using the divide and
conquer method are -5 and 105, respectively.
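As a quick sanity check, the find_min_max pseudocode from question 3 (treated here as Python) can be run on the same set; this is only an illustrative sketch:
```
numbers = [56, 105, 76, 45, 89, -5, 33, 67, 38]
minimum, maximum = find_min_max(numbers)
print(minimum, maximum)  # -5 105
```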
5. Study the nature of the two sets of numbers A={24, 55, 78} and B={-5, 88}, and develop a method of merging these two sets such that the resultant set is sorted.
The two sets of numbers, A={24, 55, 78} and B={-5, 88}, are both small sets with few
elements. We can determine the nature of the sets by examining their properties.
Set A:
- Contains 3 elements
- All elements are positive
- Elements are in ascending order
- Maximum element: 78
- Minimum element: 24
Set B:
- Contains 2 elements
- One element is negative, the other is positive
- Elements are in ascending order
- Maximum element: 88
- Minimum element: -5
To merge these two sets into a sorted set, we can use a simple algorithm:
1. Create an empty set to store the merged set.
2. Initialize two pointers, one for each set, pointing to the first element.
3. While both sets have elements, compare the elements pointed by the two pointers.
4. Add the smaller element to the merged set and advance the pointer of the set from which
the element was taken.
5. If one set has no more elements, add the remaining elements of the other set to the
merged set.
6. The merged set is the sorted set.
Applying this algorithm to A = {24, 55, 78} and B = {-5, 88}: compare 24 and -5 and take -5; compare 24 and 88 and take 24; compare 55 and 88 and take 55; compare 78 and 88 and take 78; B's remaining element 88 is then appended. The merged sorted set is {-5, 24, 55, 78, 88}.
Therefore, we have successfully merged sets A and B into a single sorted set using the algorithm mentioned above.
6. Develop an algorithm that merges the two sorted arrays and assess its time complexity
Merging two sorted arrays is a common operation in computer science; it is the merge routine used inside Merge Sort. Here is the algorithm to merge two sorted arrays A and B into a sorted array C:
1. Create an empty array C with a size of (n+m), where n is the size of array A and m is the
size of array B.
2. Initialize pointers i, j, and k to 0, where i points to the first element of A, j points to the first
element of B, and k points to the first element of C.
3. While i is less than n and j is less than m, compare the elements of A and B pointed by i
and j, respectively.
a. If A[i] <= B[j], copy A[i] to C[k], and increment i and k by 1.
b. If A[i] > B[j], copy B[j] to C[k], and increment j and k by 1.
4. If there are remaining elements in A, copy them to C.
5. If there are remaining elements in B, copy them to C.
6. The merged array C is now sorted.
The time complexity of this algorithm is O(n+m), where n is the size of array A and m is the
size of array B. The algorithm iterates through the two arrays once, and the time complexity
is proportional to the size of the merged array.
Example:
Let's say we have two sorted arrays A={1, 3, 5, 7, 9} and B={2, 4, 6, 8, 10}. Merging them with the steps above produces C={1, 2, 3, 4, 5, 6, 7, 8, 9, 10} after n + m = 10 copy operations.
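Below is a minimal Python sketch of this merge routine; the function name merge_sorted and the demo calls are illustrative, not part of the original question:
```
def merge_sorted(A, B):
    C = []
    i = j = 0
    # repeatedly copy the smaller front element of A or B into C
    while i < len(A) and j < len(B):
        if A[i] <= B[j]:
            C.append(A[i])
            i += 1
        else:
            C.append(B[j])
            j += 1
    # append whatever remains in A or B
    C.extend(A[i:])
    C.extend(B[j:])
    return C

print(merge_sorted([1, 3, 5, 7, 9], [2, 4, 6, 8, 10]))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(merge_sorted([24, 55, 78], [-5, 88]))             # [-5, 24, 55, 78, 88]
```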
7. Apply the Merge Sort algorithm to sort the given set A={56, 105, 76, 45, 89, -5, 33, 67, 38} and represent the steps graphically.
Sure! Here are the steps to sort the given set A={56, 105, 76, 45, 89, -5, 33, 67, 38} using the Merge Sort algorithm:
1. Divide the set A into two halves: left_A={56, 105, 76, 45} and right_A={89, -5, 33, 67, 38}.
2. Recursively apply Merge Sort algorithm to sort the left_A and right_A sets.
3. For the left_A set, divide it into two halves: left_left_A={56, 105} and right_left_A={76, 45}.
4. Recursively apply Merge Sort algorithm to sort the left_left_A and right_left_A sets.
5. For the right_A set, divide it into two halves: left_right_A={89, -5, 33} and
right_right_A={67, 38}.
6. Recursively apply Merge Sort algorithm to sort the left_right_A and right_right_A sets.
7. Merge the sorted left_left_A and right_left_A sets to form a sorted set A1={45, 56, 76, 105}.
8. Merge the sorted left_right_A and right_right_A sets to form a sorted set A2={-5, 33, 38, 67, 89}.
9. Merge the sorted A1 and A2 sets to form the final sorted set A={-5, 33, 38, 45, 56, 67, 76,
89, 105}.
Here's a graphical representation of the Merge Sort algorithm applied to the given set A (splitting from top to bottom, then merging back up):
```
Split phase:
[56, 105, 76, 45, 89, -5, 33, 67, 38]
[56, 105, 76, 45]          [89, -5, 33, 67, 38]
[56, 105]  [76, 45]        [89, -5, 33]  [67, 38]
[56] [105] [76] [45]       [89] [-5] [33] [67] [38]

Merge phase:
[56, 105]  [45, 76]        [-5, 33, 89]  [38, 67]
[45, 56, 76, 105]          [-5, 33, 38, 67, 89]
[-5, 33, 38, 45, 56, 67, 76, 89, 105]
```
8. Develop the merge sort algorithm by using a merge routine that merges two sorted lists in sorted order, and derive the time complexity.
Sure, here is the Merge Sort algorithm using the merge routine to merge two sorted lists:
```
Algorithm MergeSort(A):
    n = length(A)
    if n < 2:
        return A
    else:
        mid = floor(n / 2)
        left = A[1:mid]
        right = A[mid+1:n]
        left = MergeSort(left)
        right = MergeSort(right)
        return Merge(left, right)
```
The `MergeSort` algorithm recursively divides the input list `A` into two halves until each half has only one element. Then, it calls the `Merge` routine to merge the two halves in sorted order. The `Merge` routine takes two sorted lists as input and merges them into a single sorted list in time proportional to the total number of elements.
Time complexity: the running time satisfies the recurrence T(n) = 2T(n/2) + O(n), since each call makes two recursive calls on halves of the list and then spends O(n) time merging. The recursion has O(log n) levels and each level does O(n) total work, so T(n) = O(n log n).
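For completeness, here is a minimal Python sketch of MergeSort built on the merge_sorted routine from question 6; the names are illustrative, and note that the pseudocode above uses 1-based indexing while this sketch uses Python's 0-based slicing:
```
def merge_sort(A):
    if len(A) < 2:
        return A
    mid = len(A) // 2
    left = merge_sort(A[:mid])        # sort the left half
    right = merge_sort(A[mid:])       # sort the right half
    return merge_sorted(left, right)  # merge routine from question 6

print(merge_sort([56, 105, 76, 45, 89, -5, 33, 67, 38]))
# [-5, 33, 38, 45, 56, 67, 76, 89, 105]
```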
9. Develop an algorithm that places the pivot element in its appropriate position in a given set of elements, such that after sorting, the position of the pivot element is unchanged.
Sure, here is an algorithm for placing the pivot element in the appropriate position of a given
set of elements such that the position of the pivot element is unchanged after sorting:
```
Algorithm Partition(A, start, end):
    pivot = A[end]              // choose the last element as the pivot
    i = start - 1               // i marks the end of the "<= pivot" region
    for j = start to end - 1:
        if A[j] <= pivot:
            i += 1
            swap A[i] and A[j]  // move A[j] into the "<= pivot" region
    swap A[i+1] and A[end]      // place the pivot in its final position
    return i + 1
```
The `Partition` algorithm takes an array `A`, a starting index `start`, and an ending index
`end`. It chooses the last element of the array as the pivot element and places it in the
appropriate position such that all the elements to the left of the pivot element are less than
or equal to the pivot, and all the elements to the right of the pivot element are greater than
the pivot. The algorithm returns the index of the pivot element.
The time complexity of the `Partition` algorithm is O(n), where n is the number of elements
in the array `A`. This is because the algorithm iterates over each element of the array once
and performs constant time operations such as comparisons and swaps.
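A minimal Python translation of this Partition routine, with a small demo; the function name partition and the sample array are illustrative assumptions, not part of the original question:
```
def partition(A, start, end):
    pivot = A[end]                       # choose the last element as the pivot
    i = start - 1                        # boundary of the "<= pivot" region
    for j in range(start, end):
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]      # move A[j] into the "<= pivot" region
    A[i + 1], A[end] = A[end], A[i + 1]  # place the pivot in its final position
    return i + 1

A = [56, 105, 76, 45, 89, -5, 33, 67, 38]
p = partition(A, 0, len(A) - 1)
print(p, A)  # 2 [-5, 33, 38, 45, 89, 56, 105, 67, 76] -- pivot 38 is fixed at index 2
```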
10. Develop the quick sort algorithm by using partition routine that fixes the position of pivot
element.
Sure, here is the Quick Sort algorithm using the Partition routine to fix the position of the
pivot element:
```
Algorithm QuickSort(A, start, end):
    if start < end:
        pivot_index = Partition(A, start, end)
        QuickSort(A, start, pivot_index - 1)   // sort the left subarray
        QuickSort(A, pivot_index + 1, end)     // sort the right subarray
```
The `QuickSort` algorithm recursively sorts the input array `A` by choosing a pivot element,
partitioning the array around the pivot element, and recursively sorting the subarrays to the
left and right of the pivot. The `Partition` routine takes an array `A`, a starting index `start`,
and an ending index `end`. It chooses the last element of the array as the pivot element and
places it in the appropriate position such that all the elements to the left of the pivot
element are less than or equal to the pivot, and all the elements to the right of the pivot
element are greater than the pivot. The `Partition` routine returns the index of the pivot
element.
The time complexity of the `QuickSort` algorithm is O(nlogn) in the average case and O(n^2)
in the worst case, where n is the number of elements in the array `A`. In the average case,
the algorithm divides the input array into halves recursively and sorts them using the
`Partition` routine, which takes O(n) time. Since there are log(n) levels of recursion, the total
time complexity is O(nlogn). In the worst case, the input array is already sorted or almost
sorted, and the `Partition` routine chooses the last element as the pivot, resulting in a
partition that is significantly imbalanced. In this case, the time complexity of the algorithm
becomes O(n^2). However, this can be avoided by choosing a random pivot element or by using other techniques such as the median of three or Tukey's ninther.
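Below is a minimal in-place Python sketch of QuickSort that reuses the partition function sketched under question 9; the names and the demo array are illustrative:
```
def quick_sort(A, start, end):
    if start < end:
        p = partition(A, start, end)   # fix the pivot's final position
        quick_sort(A, start, p - 1)    # sort the left subarray
        quick_sort(A, p + 1, end)      # sort the right subarray

A = [56, 105, 76, 45, 89, -5, 33, 67, 38]
quick_sort(A, 0, len(A) - 1)
print(A)  # [-5, 33, 38, 45, 56, 67, 76, 89, 105]
```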
11. Explain the Knapsack problem that has a capacity of 15 units and 5 items with profits {12, 1, 2, 2, 4} and weights {4, 2, 2, 1, 10}. Fill up the knapsack with items that maximize the profit by adopting an appropriate filling strategy.
The Knapsack problem is a classic optimization problem in computer science where we have
a knapsack of limited capacity and a set of items, each with a weight and a value. The goal is
to choose the items to put in the knapsack in such a way that the total weight does not
exceed the capacity of the knapsack, and the total value of the items chosen is maximized.
In the given problem, we have a knapsack of capacity 15 units and 5 items, each with a weight and a profit:
- Item 1: profit 12, weight 4
- Item 2: profit 1, weight 2
- Item 3: profit 2, weight 2
- Item 4: profit 2, weight 1
- Item 5: profit 4, weight 10
To fill up the knapsack with items that maximize the profit, we can use the dynamic
programming approach. We can create a 2D array `dp` of size `(number of items + 1) x
(knapsack capacity + 1)` where `dp[i][j]` represents the maximum profit that can be obtained
by using the first `i` items with a knapsack capacity of `j`. We can initialize the first row and
column of the `dp` array to 0 as it represents the case of using 0 items or having a knapsack
capacity of 0.
We can then fill up the `dp` array row by row using the following recursive formula (the second case applies only when the `i`-th item fits, i.e., when `j >= w[i]`):
```
dp[i][j] = dp[i-1][j]                                if j < w[i]
dp[i][j] = max(dp[i-1][j], dp[i-1][j-w[i]] + p[i])   if j >= w[i]
```
where `w[i]` and `p[i]` represent the weight and profit of the `i`-th item, respectively. The
formula basically says that we can either exclude the `i`-th item and take the maximum profit
obtained by using the first `i-1` items with a knapsack capacity of `j`, or include the `i`-th item
and take the maximum profit obtained by using the first `i-1` items with a knapsack capacity
of `j-w[i]` plus the profit of the `i`-th item.
Once we have filled up the `dp` array, the maximum profit that can be obtained is `dp[5][15]`, which is equal to 18. To find out which items are included in the knapsack, we can backtrack from `dp[5][15]` and check which items contributed to the maximum profit. In this case, the items that contribute to the maximum profit are items 1, 4, and 5, with a total weight of 4 + 1 + 10 = 15 units (exactly filling the knapsack) and a total profit of 12 + 2 + 4 = 18.
Therefore, the appropriate strategy for filling the knapsack is to include items 1, 4, and 5 while leaving out items 2 and 3.
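A minimal Python sketch of this dynamic-programming solution for the given instance; the function name knapsack_01 is an illustrative assumption:
```
def knapsack_01(profits, weights, capacity):
    n = len(profits)
    # dp[i][j]: best profit using the first i items with capacity j
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(capacity + 1):
            dp[i][j] = dp[i - 1][j]                    # exclude item i
            if j >= weights[i - 1]:                    # include item i if it fits
                dp[i][j] = max(dp[i][j],
                               dp[i - 1][j - weights[i - 1]] + profits[i - 1])
    return dp[n][capacity]

print(knapsack_01([12, 1, 2, 2, 4], [4, 2, 2, 1, 10], 15))  # 18
```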
12. Explain the job scheduling with deadlines problem. Applying the same, solve for the following jobs, where each job is represented by its job id, deadline, and profit: {(a, 2, 100), (b, 1, 19), (c, 2, 27), (d, 1, 25), (e, 3, 15)}.
The job scheduling with deadline problem is a classic optimization problem in computer
science where we are given a set of jobs with their respective deadlines and profits. Each job
has to be completed within its deadline, and we want to maximize the total profit earned by
completing the jobs.
In the job scheduling with deadline problem, we have a set of n jobs J = {1, 2, ..., n}, each
with a deadline di and a profit pi. We want to schedule the jobs in such a way that the total
profit is maximized, subject to the constraint that each job is completed before its deadline.
One possible algorithm to solve the job scheduling with deadline problem is as follows:
1. Sort the jobs in decreasing order of profit, so that jobs with higher profit are considered first.
2. Initialize an array of size equal to the maximum deadline and set all its elements to zero.
3. For each job in the sorted order:
a. Starting from its deadline and moving backwards towards slot 1, find the latest slot in the array that is not already taken by a job.
b. If such a slot is found, assign the job to that slot and update the array.
c. If no such slot is found, skip the job.
4. The array now represents the schedule of jobs that can be completed within their
respective deadlines.
Using the above algorithm, let's solve the problem for the jobs {(a, 2, 100), (b, 1, 19), (c, 2, 27), (d, 1, 25), (e, 3, 15)}:
1. Sort the jobs in decreasing order of profit: a (100), c (27), d (25), b (19), e (15).
2. The maximum deadline is 3, so create a slot array of size 3, initially empty: [_, _, _].
3. Assign the jobs one by one:
- a (deadline 2): slot 2 is free, so place a in slot 2 → [_, a, _].
- c (deadline 2): slot 2 is taken, slot 1 is free, so place c in slot 1 → [c, a, _].
- d (deadline 1): slot 1 is taken, so d is skipped.
- b (deadline 1): slot 1 is taken, so b is skipped.
- e (deadline 3): slot 3 is free, so place e in slot 3 → [c, a, e].
4. The final array represents the schedule of jobs that can be completed within their respective deadlines:
- [c, a, e] with a total profit of 27 + 100 + 15 = 142.
Therefore, the optimal schedule of jobs that maximizes the profit subject to the deadline constraints is [c, a, e], with a total profit of 142.
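A minimal Python sketch of this greedy job-sequencing procedure applied to the given jobs; the function name schedule_jobs is an illustrative assumption:
```
def schedule_jobs(jobs):
    # jobs: list of (job_id, deadline, profit) tuples
    jobs = sorted(jobs, key=lambda job: job[2], reverse=True)  # highest profit first
    max_deadline = max(deadline for _, deadline, _ in jobs)
    slots = [None] * max_deadline      # slots[t] holds the job done in time slot t+1
    total_profit = 0
    for job_id, deadline, profit in jobs:
        # try the latest free slot at or before this job's deadline
        for t in range(min(deadline, max_deadline) - 1, -1, -1):
            if slots[t] is None:
                slots[t] = job_id
                total_profit += profit
                break
    return slots, total_profit

jobs = [('a', 2, 100), ('b', 1, 19), ('c', 2, 27), ('d', 1, 25), ('e', 3, 15)]
print(schedule_jobs(jobs))  # (['c', 'a', 'e'], 142)
```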
13. What is a spanning tree? Draw a possible spanning tree for the given graph, where the graph is represented in tuples as (u, v, wt). G = {(1,2,10), (2,3,6), (3,4,2), (2,4,5), (1,3,6), (1,4,3)}
A spanning tree of a connected, undirected graph is a subgraph that is a tree and connects all
the vertices together. In other words, it is a subset of edges that form a tree and connect all
the vertices of the original graph.
To find the possible spanning tree for the given graph G = {(1,2,10), (2,3,6),(3,4,2), (2,4,5),
(1,3,6), (1,4,3)}, we can use the Kruskal's algorithm or Prim's algorithm.
Kruskal's Algorithm:
1. Sort the edges in non-decreasing order of their weights.
2. Pick the smallest edge. Check if it forms a cycle with the spanning tree formed so far. If
cycle is not formed, include this edge. Else, discard it.
3. Repeat step 2 until there are (V-1) edges in the spanning tree, where V is the number of
vertices in the graph.
Prim's Algorithm:
1. Choose any vertex v as the starting vertex and initialize a minimum spanning tree with
only that vertex.
2. Grow the tree by adding the cheapest edge that connects the tree to a new vertex, but
doesn't form a cycle.
3. Repeat step 2 until all vertices are included in the tree.
Note that a given graph can have many different spanning trees. Every spanning tree has exactly V-1 edges (here 3 edges for 4 vertices), but their total weights can differ; only the minimum spanning trees share the same (minimum) total weight. For this graph, one possible spanning tree is {(1,2,10), (2,3,6), (3,4,2)} with total weight 18, while the spanning tree {(3,4,2), (1,4,3), (2,4,5)} has total weight 10 and is in fact a minimum spanning tree.
14. Develop the algorithm to find the minimum cost spanning tree as suggested by Prim's algorithm and apply it to the graph given in tuples as (u, v, wt). G = {(1,2,10), (2,3,6), (3,4,2), (2,4,5), (1,3,6), (1,4,3)}
Here is the algorithm to find the minimum cost spanning tree using Prim's algorithm:
1. Initialize a tree T with a single vertex, chosen arbitrarily from the graph.
2. Grow the tree T by one edge: of the edges that connect T to vertices not yet in T, find the
minimum-weight edge, and transfer it to T.
3. Repeat step 2 until all vertices are in T.
Now, let's apply this algorithm to the given graph G = {(1,2,10), (2,3,6), (3,4,2), (2,4,5),
(1,3,6), (1,4,3)}:
1. Choose an arbitrary starting vertex, say vertex 1, and add it to the tree T.
2. From the edges connected to vertex 1, the edge (1,4,3) has the smallest weight. Add this
edge to the tree T.
3. From the edges connected to vertices in T, the edge (3,4,2) has the smallest weight. Add
this edge to the tree T.
4. From the edges connecting vertices in T to vertices outside T, the edge (2,4,5) has the smallest weight (compared with (2,3,6) and (1,2,10)). Add this edge to the tree T.
Now all vertices are in T and the algorithm terminates. The minimum cost spanning tree is {(1,4,3), (3,4,2), (2,4,5)} with a total weight of 3 + 2 + 5 = 10.
The algorithm to find the minimum cost spanning tree using Kruskal's algorithm is as follows:
1. Sort the edges in non-decreasing order of weight: (3,4,2), (1,4,3), (2,4,5), (2,3,6), (1,3,6), (1,2,10).
2. Pick the edges one by one, discarding any edge that would form a cycle:
a. Consider the first edge, (3,4,2). Add it to the spanning tree: {(3,4,2)}.
b. Consider the second edge, (1,4,3). Add it to the spanning tree: {(3,4,2), (1,4,3)}.
c. Consider the third edge, (2,4,5). Add it to the spanning tree: {(3,4,2), (1,4,3), (2,4,5)}.
d. The spanning tree now has V-1 = 3 edges, so the remaining edges (2,3,6), (1,3,6), and (1,2,10) are discarded; each of them would create a cycle.
3. Output the minimum cost spanning tree: {(3,4,2), (1,4,3), (2,4,5)}.
Therefore, the minimum cost spanning tree for the given graph is {(3,4,2), (1,4,3), (2,4,5)}, with a total weight of 2 + 3 + 5 = 10 (the same tree produced by Prim's algorithm above).
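To close, here is a minimal Python sketch of Prim's algorithm run on the given edge list; the function name prim_mst and the adjacency-list construction are illustrative assumptions, not part of the original question:
```
import heapq

def prim_mst(edges, start):
    # build an undirected adjacency list from (u, v, wt) tuples
    adj = {}
    for u, v, wt in edges:
        adj.setdefault(u, []).append((wt, u, v))
        adj.setdefault(v, []).append((wt, v, u))
    visited = {start}
    heap = list(adj[start])                    # candidate edges leaving the tree
    heapq.heapify(heap)
    tree, total = [], 0
    while heap and len(visited) < len(adj):
        wt, u, v = heapq.heappop(heap)         # cheapest edge crossing the cut
        if v in visited:
            continue
        visited.add(v)
        tree.append((u, v, wt))
        total += wt
        for candidate in adj[v]:
            heapq.heappush(heap, candidate)
    return tree, total

G = [(1, 2, 10), (2, 3, 6), (3, 4, 2), (2, 4, 5), (1, 3, 6), (1, 4, 3)]
print(prim_mst(G, 1))  # ([(1, 4, 3), (4, 3, 2), (4, 2, 5)], 10)
```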