
Week 6 Assignment

Nitesh Dulal

Westcliff University

DCS403: Data Structure & Algorithms Design

Prof. Subodh Acharya

April 8, 2023

Algorithms in C++

1. Suppose you choose the first element as a pivot in the list {5 2 9 3 8 4 0 1 6 7}. Using the partition algorithm in the book, what is the new list after the partition?

Ans: After one partition pass the list becomes {4 2 1 3 0 5 8 9 6 7}: the pivot 5 ends up at index 5, with every smaller element to its left and every larger element to its right. The code is provided below.



The function quickSortPartition takes three arguments: the array to be partitioned, a left pointer, and a right pointer. Inside the function, the pivot is taken to be the first element of the array, and two index variables i and j are initialized to the left and right pointers respectively. The outer loop runs while i is less than j. The first inner loop advances i until it finds an element greater than the pivot; the second inner loop retreats j until it finds an element smaller than the pivot. If i is still less than j at that point, the elements at positions i and j are swapped. When the outer loop ends, the pivot is placed in its final position by swapping it with the element at position j. All elements to the left of the pivot are now smaller than it, and all elements to the right are greater. Finally, the array is printed to verify the partitioning.



2. What is the average-time complexity for merge sort?

Ans: Merge sort is a sorting technique that divides an array into smaller and smaller sub-arrays until each sub-array can no longer be divided, then merges these sub-arrays back together to produce a final sorted array.



To calculate the time complexity of merge sort, we analyze the time taken by each operation. In step (a), the left and right halves of the array are sorted recursively. Since each half has n/2 elements, sorting each half takes T(n/2), so the total time for step (a) is 2T(n/2).

In step (b), the two sorted halves are merged. Since each half has n/2 elements, there are n elements to merge in total, and merging two arrays of size n/2 each takes O(n/2 + n/2) = O(n) time. Thus, the time complexity of merge sort can be expressed by the recurrence T(n) = 2T(n/2) + O(n).

To solve this recurrence we can use the Master Theorem. Here a = 2, b = 2, and f(n) = n, so log_b a = log_2 2 = 1. Since f(n) = n = Θ(n^(log_b a)), this falls under the second case of the Master Theorem, and the average-case time complexity of merge sort is therefore O(n log n).

3. The time to merge two sorted lists of size n is what?

Ans: The code is provided below.



In general, merging two sorted arrays of sizes n and m takes O(n + m) time, because each element is examined and copied exactly once. In this case the two sorted arrays have the same size n, so the time to merge them is O(n + n) = O(2n), which simplifies to O(n). Equivalently, we can think of it as appending 2n elements to an output array, one element per step, which also takes O(2n) time. So regardless of how we count it, merging two sorted lists of size n each takes O(2n), i.e. linear, time.

4. What is the average-time complexity for quick sort?

Ans: Quick sort's average time complexity is O(n log n). However, since the running time depends on how the array is divided at each step, deriving the average case exactly takes some care. In the best case the array is split evenly, with n/2 elements on each side of the pivot, giving a time complexity of O(n log n). In the worst case, one side of the partition receives every element while the other is empty, giving O(n^2). Nevertheless, quick sort typically runs in O(n log n) time. Since we lack a precise model of how the elements divide, assume for the average case that the array splits 10% to 90%: the left side gets n/10 elements and the right side gets 9n/10.

Quick sort takes O(n) time to create each partition. With this split, the recursive calls on the left and right sides take T(n/10) and T(9n/10) respectively, so for the average case the total time can be expressed as T(n) = T(n/10) + T(9n/10) + O(n).

To determine the average time complexity of quick sort, we analyze this recurrence with a recursion tree. Each node splits its elements 10/90 between its children. The shortest root-to-leaf path (always following the n/10 branch) has length log_10 n, and the longest (always following the 9n/10 branch) has length log_{10/9} n, so the height of the tree is log_{10/9} n. Every level of the tree does at most O(n) total partitioning work, so the overall cost is bounded by O(n) work per level times log_{10/9} n levels, i.e. T(n) = O(n log_{10/9} n) = O(n log n). The average-case complexity of quick sort is therefore O(n log n).
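The level-by-level accounting can be written out as follows, where c is the constant hidden in the O(n) partitioning cost and the deepest branch of the tree has length log_{10/9} n:

```latex
T(n) = T\left(\frac{n}{10}\right) + T\left(\frac{9n}{10}\right) + cn
     \;\le\; cn \cdot \log_{10/9} n \;=\; O(n \log n)
```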

5. The worst-time complexity for heap sort is what?

Ans:

Heap sort is another method for sorting the elements of an array, based on the heap data structure. A heap is a complete binary tree that satisfies the heap property; it can be either a max heap or a min heap. The sorting process builds a max heap, repeatedly removes the root element (the maximum) from the heap, and rebuilds the heap from the remaining elements; each removed element is placed just past the end of the shrinking heap. Because of the max-heap property, each repetition moves the next-largest element into place, producing a sorted array.



This code implements the heap sort algorithm for sorting an array of integers in ascending order. The heapSort function takes two parameters: the list array to be sorted and its size. The first step is to build a max heap from the elements of the list. This is done in a for loop that starts from the parent of the last element (index size/2 - 1) and iterates backwards until the root node is reached (index 0). For each iteration, the makeHeap function is called with three parameters: the list array, its size, and the current index i. makeHeap implements the heapify operation, ensuring that the subtree rooted at the current index satisfies the max-heap property.

After building the max heap, the second step sorts the array by repeatedly extracting the maximum element from the heap and putting it in its correct position. This is done in another for loop that starts from the last element of the list and iterates backwards until the first element is reached. On each iteration, the root of the heap (the maximum element) is swapped with the current element, placing the maximum in its correct sorted position. Then makeHeap is called again on the root (index 0), with the heap size reduced by one; this excludes the already-sorted tail of the array from the heap and restores the max-heap property among the remaining elements. The process repeats until all elements are sorted and the array is in ascending order. The worst-case time complexity of heap sort is O(n log n): building the heap takes O(n), and each of the n extractions costs O(log n).

6. If a linked list has only one node, is head == tail true?

Ans: If a linked list has only one node, the head and tail of the list are the same node: both pointers point to that single node, so head == tail evaluates to true. Note that this holds only when the list has exactly one node. With multiple nodes, head and tail point to different nodes: in a singly linked list, for example, the tail pointer typically points to the last node while the head pointer points to the first.



7. The gcd(m, n) can also be defined recursively as follows:

● If m % n is 0, gcd(m, n) is n.

● Otherwise, gcd(m, n) is gcd(n, m % n).

Write a recursive function to find the GCD, then write a test program that computes gcd(24, 16) and gcd(255, 25).



Ans: The code is provided below. The program starts by asking the user to input two numbers, then calls the gcd function with those numbers as arguments; the function returns a single value as the answer.



Example 1: 24, 16.

Function call: gcd(24, 16)

The function first checks whether 24 % 16 is 0. Since it is not, execution moves to the else branch and the function calls itself, passing n (16) and m % n (8) into the recursive call.

First recursive call: gcd(16, 8)

Here the if condition is satisfied, since 16 % 8 equals 0, so the call returns n, which is 8, back to its caller. Back in gcd(24, 16), the value returned by the recursive call (8) is in turn returned to the main function. Finally, the main program prints the return value as the answer.

Example 2: 255, 25.

Function call: gcd(255, 25)

The function first checks whether 255 % 25 is 0. Since it is not, execution moves to the else branch and the function calls itself, passing n (25) and m % n (5) into the recursive call.

First recursive call: gcd(25, 5)

Here the if condition is satisfied, since 25 % 5 equals 0, so the call returns n, which is 5, back to its caller. Back in gcd(255, 25), the value returned by the recursive call (5) is in turn returned to the main function, and the main program prints it as the answer.

References

Liang, Y. D. (2022). Introduction to C++ programming and data structures (5th ed.). Pearson. http://powerunit-ju.com/wp-content/uploads/2020/02/Daniel-Y-Liang-et-al.-Introduction-to-Programming-with-C-Pearson-2014.pdf
