
Chapter 2

Sorting Algorithms

LEARNING OBJECTIVES

 Sorting algorithms  Binary search trees


 Merge sort  Heap sort
 Bubble sort  Sorting–performing delete max operations
 Insertion sort  Max-heap property
 Selection sort  Min-heap property
 Selection sort algorithm  Priority queues

Sorting Algorithms

Purpose of sorting
Sorting is a technique which reduces both problem complexity and search complexity.

• Insertion sort takes θ(n²) time in the worst case. It is a fast in-place sorting algorithm for small input sizes.
• Merge sort has a better asymptotic running time, θ(n log n), but it does not operate in place.
• Heap sort sorts 'n' numbers in place in θ(n log n) time; it uses a data structure called a heap, with which we can also implement a priority queue.
• Quick sort also sorts 'n' numbers in place, but its worst-case running time is θ(n²); its average case is θ(n log n). The constant factor in quick sort's running time is small, and this algorithm performs well on large input arrays.
• Insertion sort, merge sort, heap sort and quick sort are all comparison-based sorts; they determine the sorted order of an input array by comparing elements.
• We can beat the lower bound of Ω(n log n) if we can gather information about the sorted order of the input by means other than comparing elements.
• The counting sort algorithm assumes that the input numbers are in the set {1, 2, …, k}. By using array indexing as a tool for determining relative order, counting sort can sort n numbers in θ(k + n) time. Thus counting sort runs in time that is linear in the size of the input array.
• Radix sort can be used to extend the range of counting sort. If there are 'n' integers to sort, each integer has 'd' digits, and each digit is in the set {1, 2, …, k}, then radix sort can sort the numbers in θ(d(n + k)) time. When 'd' is constant, radix sort runs in linear time.
• Bucket sort requires knowledge of the probabilistic distribution of the numbers in the input array.

Merge Sort
Suppose that our division of the problem yields 'a' sub-problems, each of which is (1/b)th the size of the original problem. For merge sort, both a and b are 2, but sometimes a ≠ b. Suppose we take D(n) time to divide the problem into sub-problems and C(n) time to combine the solutions of the sub-problems into the solution of the original problem. The recurrence relation for merge sort is

    T(n) = θ(1)                     if n ≤ c
    T(n) = aT(n/b) + D(n) + C(n)    otherwise

The running time is broken down as follows:

Divide: This step computes the middle of the sub-array, which takes constant time, θ(1).

Conquer: We solve 2 sub-problems of size n/2 each recursively, which takes 2T(n/2) time.

Combine: The merge procedure on an n-element sub-array takes θ(n) time.
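The divide, conquer and combine steps above can be sketched in C. This is an illustrative sketch, not code from the text; the function names (merge, merge_sort) and the shared temporary buffer are my own choices.

```c
#include <string.h>

/* Combine step: merge two sorted runs a[lo..mid] and a[mid+1..hi]. */
static void merge(int a[], int tmp[], int lo, int mid, int hi)
{
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo + 1) * sizeof(int));
}

/* T(n) = 2T(n/2) + theta(n): divide at the middle, sort each half, merge. */
void merge_sort(int a[], int tmp[], int lo, int hi)
{
    if (lo >= hi) return;            /* base case: n <= 1 */
    int mid = lo + (hi - lo) / 2;    /* Divide: theta(1) */
    merge_sort(a, tmp, lo, mid);     /* Conquer: 2T(n/2) */
    merge_sort(a, tmp, mid + 1, hi);
    merge(a, tmp, lo, mid, hi);      /* Combine: theta(n) */
}
```

Note that the tmp buffer is the reason merge sort is not in place, as the bullet list above points out.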
Chapter 2 • Sorting Algorithms | 3.99

• Worst-case running time T(n) of merge sort:

    T(n) = θ(1)               if n ≤ 1
    T(n) = 2T(n/2) + θ(n)     if n > 1

Figure 1 Recurrence tree (level 0 costs cn; level 1 costs c(n/2) + c(n/2) = cn; level i has 2^i nodes, each costing c·n/2^i; the bottom level has n leaves of cost c each; total: cn log n + cn).

The top level has total cost 'cn', the next level has total cost c(n/2) + c(n/2) = cn, and the next level has total cost c(n/4) + c(n/4) + c(n/4) + c(n/4) = cn, and so on: the ith level has total cost 2^i · c(n/2^i) = cn. At the bottom level there are 'n' nodes, each contributing a cost of c, for a total cost of 'cn'. The total number of levels of the recursion tree is log n + 1. There are log n + 1 levels, each costing cn, for a total cost of cn(log n + 1) = cn log n + cn; ignoring the low-order term and the constant c gives the desired result of θ(n log n).

Bubble Sort
Bubble sort is a simple sorting algorithm that works by repeatedly stepping through the list to be sorted, comparing each pair of adjacent items and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted. The algorithm gets its name from the way smaller elements 'bubble' to the top of the list.

Example: Take the array of numbers '5 1 4 2 8' and sort the array from lowest number to greatest number using the bubble sort algorithm. In each step, one pair of adjacent elements is compared.

First pass:
(5 1 4 2 8) → (1 5 4 2 8), the algorithm compares the first 2 elements and swaps them
(1 5 4 2 8) → (1 4 5 2 8), swap since 5 > 4
(1 4 5 2 8) → (1 4 2 5 8), swap since 5 > 2
(1 4 2 5 8) → (1 4 2 5 8), these elements are already in order, so the algorithm does not swap them

Second pass:
(1 4 2 5 8) → (1 4 2 5 8)
(1 4 2 5 8) → (1 2 4 5 8), swap since 4 > 2
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)

The array is already sorted, but the algorithm does not know whether it is finished. It needs one whole pass without any swap to know the list is sorted.

Third pass:
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)

Finally the array is sorted, and the algorithm can terminate.

Algorithm
void bubblesort(int a[], int n)
{
    int i, j, temp;
    for (i = 0; i < n - 1; i++)
        for (j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1])
            {
                temp = a[j + 1];
                a[j + 1] = a[j];
                a[j] = temp;
            }
}

Insertion Sort
Insertion sort is a comparison sort in which the sorted array is built one entry at a time. It is much less efficient on large lists than more advanced algorithms such as quick sort, heap sort or merge sort. Insertion sort provides several advantages:

• Efficient for small data sets.
• Adaptive, i.e., efficient for data sets that are already substantially sorted: the complexity is O(n + d), where d is the number of inversions.
• More efficient in practice than most other simple quadratic, i.e., O(n²), algorithms such as selection sort or bubble sort; its best case is O(n).
• Stable, i.e., does not change the relative order of elements with equal keys.
• In-place, i.e., only requires a constant amount O(1) of additional memory space.
• Online, i.e., can sort a list as it receives it.

Algorithm
INSERTION-SORT(A)
for j ← 2 to length[A]
    do key ← A[j]
       i ← j − 1
       while i > 0 and A[i] > key
           do A[i + 1] ← A[i]
              i ← i − 1
       A[i + 1] ← key
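The INSERTION-SORT pseudocode translates directly into C; a minimal sketch, using 0-based array indices instead of the 1-based indices of the pseudocode:

```c
/* Insertion sort: builds the sorted prefix one entry at a time. */
void insertion_sort(int a[], int n)
{
    for (int j = 1; j < n; j++) {
        int key = a[j];
        int i = j - 1;
        /* Shift elements of the sorted prefix that are greater
           than key one position to the right. */
        while (i >= 0 && a[i] > key) {
            a[i + 1] = a[i];
            i--;
        }
        a[i + 1] = key;      /* drop key into the evacuated slot */
    }
}
```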
3.100 | Unit 3 • Algorithms

Every repetition of insertion sort removes an element from the input data and inserts it into the correct position in the already-sorted list, until no input element remains. Sorting is typically done in place. The resulting array after k iterations has the property that the first k + 1 entries are sorted. In each iteration the first remaining entry of the input, X, is removed and inserted into the result at the correct position, with each element greater than X copied to the right as it is compared against X.

Performance
• The best-case input is an array that is already sorted. In this case insertion sort has a linear running time, θ(n).
• The worst-case input is an array sorted in reverse order. In this case every iteration of the inner loop will scan and shift the entire sorted subsection of the array before inserting the next element. For this case insertion sort has a quadratic running time, O(n²).
• The average case is also quadratic, which makes insertion sort impractical for sorting large arrays. However, insertion sort is one of the fastest algorithms for sorting very small arrays, even faster than quick sort.

Example: The following figure shows the operation of insertion sort on the array A = (5, 2, 4, 6, 1, 3). Each part shows what happens for a particular iteration with the value of j indicated; j indexes the 'current card' being inserted.

    5 2 4 6 1 3  →  2 5 4 6 1 3  →  2 4 5 6 1 3
                 →  2 4 5 6 1 3  →  1 2 4 5 6 3  →  1 2 3 4 5 6

Read the figure row by row. Elements to the left of A[j] that are greater than A[j] move one position to the right, and A[j] moves into the evacuated position.

Selection Sort
Selection sort is a sorting algorithm, specifically an in-place comparison sort. It has O(n²) complexity, making it inefficient on large lists.

The algorithm works as follows:
1. Find the minimum value in the list.
2. Swap it with the value in the first position.
3. Repeat the steps above for the remainder of the list (starting at the second position and advancing each time).

Analysis
Selection sort is not difficult to analyze compared to other sorting algorithms, since none of the loops depend on the data in the array. Selecting the lowest element requires scanning all n elements (this takes n − 1 comparisons) and then swapping it into the first position. Finding the next lowest element requires scanning the remaining n − 1 elements, and so on, for (n − 1) + (n − 2) + … + 2 + 1 = n(n − 1)/2 ∈ θ(n²) comparisons. Each of these scans requires at most one swap, for n − 1 swaps in total (the final element is already in place).

Selection sort algorithm
First, the minimum value in the list is found. Then, the first element (with an index of 0) is swapped with this value. Lastly, the steps mentioned are repeated for the rest of the array (starting at the 2nd position).

Example 1: Here is a step-by-step example to illustrate the selection sort algorithm using numbers.
Original array: 6 3 5 4 9 2 7
1st pass → 2 3 5 4 9 6 7 (2 and 6 were swapped)
2nd pass → 2 3 5 4 9 6 7 (no swap)
3rd pass → 2 3 4 5 9 6 7 (4 and 5 were swapped)
4th pass → 2 3 4 5 6 9 7 (6 and 9 were swapped)
5th pass → 2 3 4 5 6 7 9 (7 and 9 were swapped)
6th pass → 2 3 4 5 6 7 9 (no swap)
Note: There are 7 keys in the list and thus 6 passes were required. However, only 4 swaps took place.

Example 2: Original array: LU, KU, HU, LO, SU, PU
1st pass → HU, KU, LU, LO, SU, PU
2nd pass → HU, KU, LU, LO, SU, PU
3rd pass → HU, KU, LO, LU, SU, PU
4th pass → HU, KU, LO, LU, SU, PU
5th pass → HU, KU, LO, LU, PU, SU
Note: There were 6 elements in the list and thus 5 passes were required. However, only 3 swaps took place.

Binary Search Trees
Search trees are data structures that support many dynamic-set operations, including SEARCH, MINIMUM, MAXIMUM, PREDECESSOR, SUCCESSOR, INSERT and DELETE. A search tree can be used both as a dictionary and as a priority queue. Operations on a binary search tree take time proportional to the height of the tree. For a complete binary tree with 'n' nodes, basic operations run in θ(log n) worst-case time. If the tree is a linear chain of 'n' nodes, the basic operations take θ(n) worst-case time.

A binary search tree is organized in a binary tree. Such a tree can be represented by a linked data structure in which each node is an object. In addition to a key field, each node contains fields left, right and p that point to the nodes corresponding to its left child, its right child, and its parent,

respectively. If the child (or) parent is missing, the appropriate field contains the value NIL. The root node is the only node in the tree whose parent field is NIL.

Binary search tree property
The keys in a binary search tree are always stored in such a way as to satisfy the binary search tree property. Let 'a' be a node in a binary search tree. If 'b' is a node in the left sub-tree of 'a', then key[b] ≤ key[a]. If 'b' is a node in the right sub-tree of 'a', then key[a] ≤ key[b].

Figure 2 Binary search tree (root 8; the left subtree holds 7 with child 6, the right subtree holds 10 with children 9 and 14).

The binary search tree property allows us to print out all keys in a binary search tree in sorted order by a simple recursive algorithm called an inorder tree walk.

Algorithm
INORDER-TREE-WALK(root[T])

INORDER-TREE-WALK(a)
1. if a ≠ NIL
2.     then INORDER-TREE-WALK(left[a])
3.          print key[a]
4.          INORDER-TREE-WALK(right[a])

It takes θ(n) time to walk an n-node binary search tree, since after the initial call the procedure is called recursively exactly twice for each node in the tree.

Let T(n) denote the time taken by INORDER-TREE-WALK when it is called on the root of an n-node subtree. INORDER-TREE-WALK takes a small, constant amount of time on an empty sub-tree (for the test a ≠ NIL), so T(0) = c for some positive constant c.

For n > 0, suppose that INORDER-TREE-WALK is called on a node 'a' whose left subtree has k nodes and whose right subtree has n − k − 1 nodes. The time to perform the inorder traversal is

    T(n) = T(k) + T(n − k − 1) + d

for some positive constant 'd' that reflects the time to execute the body of the walk at 'a', exclusive of the time spent in recursive calls. We show that T(n) = (c + d)n + c. For n = 0, we have (c + d)·0 + c = c = T(0). For n > 0,

    T(n) = T(k) + T(n − k − 1) + d
         = ((c + d)k + c) + ((c + d)(n − k − 1) + c) + d
         = (c + d)n + c − (c + d) + c + d
         = (c + d)n + c

Heap Sort
Heap sort begins by building a heap out of the data set, then removes the largest item and places it at the end of the partially sorted array. After removing the largest item, it reconstructs the heap, removes the largest remaining item, and places it in the next open position from the end of the partially sorted array. This is repeated until there are no items left in the heap and the sorted array is full. Elementary implementations require two arrays, one to hold the heap and the other to hold the sorted elements.

• Heap sort inserts the input list elements into a binary heap data structure. The largest value (in a max-heap) or the smallest value (in a min-heap) is extracted until none remain, the values having been extracted in sorted order.

Example: Given an array of 6 elements: 15, 19, 10, 7, 17, 16, sort them in ascending order using heap sort.

Steps:
1. Consider the values of the elements as priorities and build the heap tree.
2. Start delete-max operations, storing each deleted element at the end of the heap array.

If we want the elements to be sorted in ascending order, we need to build the heap tree in descending order: the greatest element will have the highest priority.

1. Note that we use only one array, treating its parts differently.
2. When building the heap tree, part of the array will be considered as the heap, and the rest as the original array.
3. When sorting, part of the array will be the heap, and the rest the sorted array.

Here is the array: 15, 19, 10, 7, 17, 16.

Building the Heap Tree
The array represented as a tree is complete but not ordered:

    15 19 10 7 17 16 (15 at the root; 19 and 10 below it; 7, 17 and 16 at the leaves)

Start with the rightmost node at height 1, the node at position 3 = size/2. It has one greater child (16) and has to be percolated down.
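The linked representation of a binary search tree and the inorder walk can be sketched in C. This is an illustrative sketch, not code from the text; the struct layout mirrors the key/left/right/p fields described above, and the traversal writes keys into an output array instead of printing them.

```c
#include <stdlib.h>

/* Each node carries a key plus left, right and parent pointers. */
struct node {
    int key;
    struct node *left, *right, *p;
};

/* Insert a key while preserving the BST property
   (left subtree <= node <= right subtree). */
struct node *bst_insert(struct node *root, struct node *parent, int key)
{
    if (root == NULL) {
        struct node *n = malloc(sizeof *n);
        n->key = key;
        n->left = n->right = NULL;
        n->p = parent;
        return n;
    }
    if (key <= root->key)
        root->left = bst_insert(root->left, root, key);
    else
        root->right = bst_insert(root->right, root, key);
    return root;
}

/* INORDER-TREE-WALK: left subtree, node, right subtree
   -> emits the keys in sorted order. */
void inorder(const struct node *a, int out[], int *k)
{
    if (a != NULL) {
        inorder(a->left, out, k);
        out[(*k)++] = a->key;
        inorder(a->right, out, k);
    }
}
```

Inserting the keys of Figure 2 in the order 8, 10, 7, 14, 6, 9 and walking the tree yields them sorted: 6, 7, 8, 9, 10, 14.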

After processing array[3] the situation is:

    15 19 16 7 17 10

Next comes array[2]. Its children are smaller, so no percolation is needed.

The last node to be processed is array[1]. Its left child is the greater of its children, so the item at array[1] has to be percolated down to the left, swapped with array[2]:

    15 19 16 7 17 10  →  19 15 16 7 17 10

The children of array[2] are now greater, and item 15 has to be moved down further, swapped with array[5]:

    19 15 16 7 17 10  →  19 17 16 7 15 10

Now the tree is ordered, and the binary heap is built.

Sorting: Performing Delete-Max Operations
Delete the top element: store 19 in a temporary place. A hole is created at the top:

    _ 17 16 7 15 10      (19 set aside)

Swap 19 with the last element of the heap. As 10 will be adjusted in the heap, its cell will no longer be a part of the heap; instead it becomes a cell of the sorted array:

    _ 17 16 7 15 | 19

Percolate down the hole (the greater child, 17, moves up):

    17 _ 16 7 15 | 19

Percolate once more (10 is less than 15, so it cannot be inserted in the previous hole; 15 moves up):

    17 15 16 7 _ | 19

Now 10 can be inserted in the hole:

    17 15 16 7 10 | 19

Repeat this step until the array is sorted.

Heap sort analysis
Heap sort uses a data structure called a (binary) heap; a binary heap is viewed as a complete binary tree. An array A that represents a heap is an object with 2 attributes: length[A], which is the number of elements in the array, and heap-size[A], the number of elements in the heap stored within array A. No element past A[heap-size[A]], where heap-size[A] ≤ length[A], is an element of the heap.

There are 2 kinds of binary heaps:
1. Max-heaps
2. Min-heaps

In both kinds the values in the nodes satisfy a heap property.

Max-heap property: A[PARENT(i)] ≥ A[i]. The value of a node is at most the value of its parent. Thus the largest element in a max-heap is stored at the root, and the sub-tree rooted at a node contains values no larger than that contained at the node itself.

Min-heap property: For every node 'i' other than the root, A[PARENT(i)] ≤ A[i]. The smallest element in a min-heap is at the root.

Max-heaps are used in the heap sort algorithm; min-heaps are commonly used in priority queues.
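The percolate-down step used throughout the walkthrough above, and the bottom-up heap construction that starts at position size/2, can be sketched in C. This is an illustrative sketch with my own function names; it uses 0-based indices (children of i are 2i+1 and 2i+2), whereas the text numbers positions from 1.

```c
/* Percolate a[i] down until the max-heap property holds below it. */
void max_heapify(int a[], int n, int i)
{
    for (;;) {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) break;          /* heap property restored */
        int t = a[i]; a[i] = a[largest]; a[largest] = t;
        i = largest;                      /* follow the moved item down */
    }
}

/* Build the heap bottom-up, from the last internal node to the root. */
void build_max_heap(int a[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)
        max_heapify(a, n, i);
}
```

On the example array 15, 19, 10, 7, 17, 16 this reproduces the heap built above: 19, 17, 16, 7, 15, 10.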

Basic operations on heaps run in time at most proportional to the height of the tree and thus take O(log n) time.

• The MAX-HEAPIFY procedure runs in O(log n) time.
• The BUILD-MAX-HEAP procedure runs in linear time.
• The HEAPSORT procedure runs in O(n log n) time and sorts an array in place.
• MAX-HEAP-INSERT, HEAP-EXTRACT-MAX, HEAP-INCREASE-KEY and HEAP-MAXIMUM all run in O(log n) time and allow the heap data structure to be used as a priority queue.
• Each call to MAX-HEAPIFY costs O(log n) time, and BUILD-MAX-HEAP makes O(n) such calls; thus O(n log n) is an upper bound on building the heap (the tighter analysis gives the linear bound stated above).
• The HEAPSORT procedure takes time O(n log n), since the call to BUILD-MAX-HEAP takes time O(n) and each of the (n − 1) calls to MAX-HEAPIFY takes time O(log n).

Priority Queues
The most popular application of a heap is its use as an efficient priority queue.

A priority queue is a data structure for maintaining a set S of elements, each with an associated value called a key. A max-priority queue supports the following operations:

INSERT: INSERT(S, x) inserts the element x into the set S. This operation can be written as S ← S ∪ {x}.

MAXIMUM: MAXIMUM(S) returns the element of S with the largest key.

EXTRACT-MAX: EXTRACT-MAX(S) removes and returns the element of S with the largest key.

INCREASE-KEY: INCREASE-KEY(S, x, k) increases the value of element x's key to the new value k, which is assumed to be at least as large as x's current key value.

One application of a max-priority queue is to schedule jobs on a shared computer.
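A max-priority queue backed by a binary heap can be sketched in C. This is an illustrative sketch, not code from the text: the struct, the fixed capacity and the function names are my own choices. pq_insert floats a new key up toward the root (MAX-HEAP-INSERT), and pq_extract_max removes the root and percolates the hole down (HEAP-EXTRACT-MAX); both take O(log n) time.

```c
/* A tiny max-priority queue backed by a binary heap in an array.
   Capacity is fixed for the sketch. */
struct pq { int a[64]; int size; };

void pq_insert(struct pq *q, int key)      /* MAX-HEAP-INSERT: O(log n) */
{
    int i = q->size++;
    q->a[i] = key;
    while (i > 0 && q->a[(i - 1) / 2] < q->a[i]) {  /* float up */
        int p = (i - 1) / 2;
        int t = q->a[p]; q->a[p] = q->a[i]; q->a[i] = t;
        i = p;
    }
}

int pq_extract_max(struct pq *q)           /* HEAP-EXTRACT-MAX: O(log n) */
{
    int max = q->a[0];
    q->a[0] = q->a[--q->size];             /* move last element to the root */
    int i = 0;
    for (;;) {                             /* percolate down */
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < q->size && q->a[l] > q->a[largest]) largest = l;
        if (r < q->size && q->a[r] > q->a[largest]) largest = r;
        if (largest == i) break;
        int t = q->a[i]; q->a[i] = q->a[largest]; q->a[largest] = t;
        i = largest;
    }
    return max;
}
```

Repeatedly calling pq_extract_max returns the keys in descending order, which is exactly the delete-max phase of heap sort.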

Exercises

Practice Problems 1
Directions for questions 1 to 15: Select the correct alternative from the given choices.
1. Solve the recurrence relation T(n) = 2T(n/2) + k·n, where k is constant. Then T(n) is
(A) O(log n) (B) O(n log n)
(C) O(n) (D) O(n²)
2. What is the time complexity of the given code?
void f(int n)
{
    if (n > 0)
        f(n/2);
}
(A) θ(log n) (B) θ(n log n)
(C) θ(n²) (D) θ(n)
3. The running time of an algorithm is represented by the following recurrence relation:
T(n) = n if n ≤ 3; T(n) = T(n/3) + cn otherwise.
What is the time complexity of the algorithm?
(A) θ(n) (B) θ(n log n)
(C) θ(n²) (D) θ(n² log n)
Common data for questions 4 and 5:
4. The following pseudo code performs which sorting?
xsort[A, n]
for j ← 2 to n
    do key ← A[j]
       i ← j − 1
       while i > 0 and A[i] > key
           do A[i + 1] ← A[i]
              i ← i − 1
       A[i + 1] ← key
(A) Selection sort (B) Insertion sort
(C) Quick sort (D) Merge sort
5. What is the order of elements after 2 iterations of the above-mentioned sort on the given elements?
8 2 4 9 3 6
(A) 2 4 9 8 3 6
(B) 2 4 8 9 3 6
(C) 2 4 6 3 8 9
(D) 2 4 6 3 8 9
Common data for questions 6 and 7:
6. The following pseudo code performs which sort?
1. If n = 1, done
2. Recursively sort A[1 … ⌈n/2⌉] and A[⌈n/2⌉ + 1 … n]
3. Combine the 2 ordered lists
(A) Insertion sort (B) Selection sort
(C) Merge sort (D) Quick sort

7. What is the complexity of the above pseudo code?
(A) θ(log n) (B) θ(n²)
(C) θ(n log n) (D) θ(2ⁿ)
8. Quick sort is applied on the sequence 6 10 13 5 8 3 2 11. What is the sequence after the first phase, if the pivot is the first element?
(A) 5 3 2 6 10 8 13 11
(B) 5 2 3 6 8 13 10 11
(C) 6 5 13 10 8 3 2 11
(D) 6 5 3 2 8 13 10 11
9. Selection sort is applied on the given sequence: 89, 45, 68, 90, 29, 34, 17. What is the sequence after 2 iterations?
(A) 17, 29, 68, 90, 45, 34, 89
(B) 17, 45, 68, 90, 29, 34, 89
(C) 17, 68, 45, 90, 34, 29, 89
(D) 17, 29, 68, 90, 34, 45, 89
10. Suppose there are log n sorted lists of n/(log n) elements each. The time complexity of producing a single sorted list of all these elements is (hint: use a heap data structure):
(A) θ(n log log n) (B) θ(n log n)
(C) Ω(n log n) (D) Ω(n^(3/2))
11. If the divide and conquer methodology is applied to powering a number Xⁿ, which one of the following is correct?
(A) Xⁿ = X^(n/2) · X^(n/2)
(B) Xⁿ = X^((n−1)/2) · X^((n−1)/2) · X
(C) Xⁿ = X^((n+1)/2) · X^(n/2)
(D) Both (A) and (B)
12. The usual θ(n²) implementation of insertion sort to sort an array uses linear search to identify the position where an element is to be inserted into the already sorted part of the array. If binary search is used instead of linear search to identify the position, the worst-case running time would be
(A) θ(n log n)
(B) θ(n²)
(C) θ(n (log n)²)
(D) θ(n)
13. Consider the process of inserting an element into a max-heap, where the max-heap is represented by an array. Suppose we perform a binary search on the path from the new leaf to the root to find the position for the newly inserted element. The number of comparisons performed is:
(A) θ(log n) (B) θ(log log n)
(C) θ(n) (D) θ(n log n)
14. Consider the following algorithm for searching a given number 'X' in an unsorted array A[1 … n] having 'n' distinct values:
(1) Choose an 'i' uniformly at random from 1 … n
(2) If A[i] = X, then stop; else go to (1)
Assuming that X is present in A, what is the expected number of comparisons made by the algorithm before it terminates?
(A) n (B) n − 1
(C) 2n (D) n/2
15. The recurrence equation for the number of additions A(n) made by a divide and conquer algorithm on an input of size n = 2^k is
(A) A(n) = 2A(n/2) + 1 (B) A(n) = 2A(n/2) + n²
(C) A(n) = 2A(n/4) + n² (D) A(n) = 2A(n/8) + n²

Practice Problems 2
Directions for questions 1 to 15: Select the correct alternative from the given choices.
1.
    Input Array      Linear Search W(n)   Binary Search W(n)
    128 elements     128                  8
    1024 elements    1024                 x
Find the value of x.
(A) 10 (B) 11
(C) 12 (D) 13
2. Choose the correct matching:
(i) log n (ii) n (iii) n log n (iv) n²
(a) A result of cutting a problem size by a constant factor on each iteration of the algorithm.
(b) Algorithm that scans a list of size 'n'.
(c) Many divide and conquer algorithms fall in this category.
(d) Typically characterizes the efficiency of an algorithm with two embedded loops.
(A) i – b, ii – c, iii – a, iv – d
(B) i – a, ii – b, iii – c, iv – d
(C) i – c, ii – d, iii – a, iv – b
(D) i – d, ii – a, iii – b, iv – c
3. Insertion sort analysis in the worst case:
(A) θ(n)
(B) θ(n²)
(C) θ(n log n)
(D) θ(2ⁿ)

4. From the recurrence relation of merge sort, T(n) = 2T(n/2) + θ(n), which option is the correct matching?
I. n/2 II. 2T III. θ(n)
(a) Extra work (divide and combine)
(b) Sub-problem size
(c) Number of sub-problems
(A) III – b, II – a, I – c (B) I – b, II – c, III – a
(C) I – a, II – c, III – b (D) I – c, II – a, III – b
5. What is the number of swaps required to sort 'n' elements using selection sort, in the worst case?
(A) θ(n) (B) θ(n²)
(C) θ(n log n) (D) θ(n² log n)
6. In a binary max-heap containing 'n' numbers, the smallest element can be found in time
(A) O(n) (B) O(log n)
(C) O(log log n) (D) O(1)
7. What is the worst-case complexity of sorting 'n' numbers using quick sort?
(A) θ(n) (B) θ(n log n)
(C) θ(n²) (D) θ(n!)
8. The best case of quick sort occurs if partitioning splits the array of size n into
(A) n/2 : n/m (B) n/2 : n/2
(C) n/3 : n/2 (D) n/4 : n/2
9. What is the time complexity of powering a number using the divide and conquer methodology?
(A) θ(n²) (B) θ(n)
(C) θ(log n) (D) θ(n log n)
10. Which one of the following in-place sorting algorithms needs the minimum number of swaps?
(A) Quick sort (B) Insertion sort
(C) Selection sort (D) Heap sort
11. As the size of the array grows, what is the time complexity of finding an element using binary search (the array elements are ordered)?
(A) θ(n log n) (B) θ(log n)
(C) θ(n²) (D) θ(n)
12. The time complexity of the heap sort algorithm is
(A) n log n (B) log n
(C) n² (D) None of these
13. As part of maintenance work, you are entrusted with the work of rearranging the library books on a shelf in proper order at the end of each day. The ideal choice will be _____.
(A) Heap sort (B) Quick sort
(C) Selection sort (D) Insertion sort
14. The value for which you are searching is called the
(A) Binary value
(B) Search argument
(C) Key
(D) Serial value
15. To sort many large objects or structures, it would be most efficient to _____.
(A) Place them in an array and sort the array
(B) Place pointers to them in an array and sort the array
(C) Place them in a linked list and sort the linked list
(D) None of the above

Previous Years’ Questions


1. What is the number of swaps required to sort n elements using selection sort, in the worst case? [2009]
(A) θ(n)
(B) θ(n log n)
(C) θ(n²)
(D) θ(n² log n)
2. Which one of the following is the tightest upper bound that represents the number of swaps required to sort n numbers using selection sort? [2013]
(A) O(log n) (B) O(n)
(C) O(n log n) (D) O(n²)
3. Let P be a quick sort program to sort numbers in ascending order using the first element as the pivot. Let t1 and t2 be the number of comparisons made by P for the inputs [1 2 3 4 5] and [4 1 5 3 2] respectively. Which one of the following holds? [2014]
(A) t1 = 5 (B) t1 < t2
(C) t1 > t2 (D) t1 = t2
4. The minimum number of comparisons required to find the minimum and the maximum of 100 numbers is ––––––. [2014]
5. Suppose P, Q, R, S, T are sorted sequences having lengths 20, 24, 30, 35, 50 respectively. They are to be merged into a single sequence by merging together two sequences at a time. The number of comparisons that will be needed in the worst case by the optimal algorithm for doing this is –––––. [2014]
6. You have an array of n elements. Suppose you implement quick sort by always choosing the central element of the array as the pivot. Then the tightest upper bound for the worst-case performance is [2014]
(A) O(n²) (B) O(n log n)
(C) θ(n log n) (D) O(n³)
7. What are the worst-case complexities of insertion and deletion of a key in a binary search tree? [2015]

(A) θ(log n) for both insertion and deletion
(B) θ(n) for both insertion and deletion
(C) θ(n) for insertion and θ(log n) for deletion
(D) θ(log n) for insertion and θ(n) for deletion
8. The worst-case running times of Insertion sort, Merge sort and Quick sort, respectively, are: [2016]
(A) Θ(n log n), Θ(n log n), and Θ(n²)
(B) Θ(n²), Θ(n²), and Θ(n log n)
(C) Θ(n²), Θ(n log n), and Θ(n log n)
(D) Θ(n²), Θ(n log n), and Θ(n²)
9. An operator delete(i) for a binary heap data structure is to be designed to delete the item in the i-th node. Assume that the heap is implemented in an array and i refers to the i-th index of the array. If the heap tree has depth d (number of edges on the path from the root to the farthest leaf), then what is the time complexity to re-fix the heap efficiently after the removal of the element? [2016]
(A) O(1) (B) O(d) but not O(1)
(C) O(2^d) but not O(d) (D) O(d·2^d) but not O(2^d)
10. Assume that the algorithms considered here sort the input sequences in ascending order. If the input is already in ascending order, which of the following are TRUE? [2016]
I. Quicksort runs in Θ(n²) time
II. Bubblesort runs in Θ(n²) time
III. Mergesort runs in Θ(n) time
IV. Insertion sort runs in Θ(n) time
(A) I and II only (B) I and III only
(C) II and IV only (D) I and IV only
11. A complete binary min-heap is made by including each integer in [1, 1023] exactly once. The depth of a node in the heap is the length of the path from the root of the heap to that node. Thus, the root is at depth 0. The maximum depth at which integer 9 can appear is _____. [2016]

Answer Keys

Exercises
Practice Problems 1
1. B 2. A 3. A 4. B 5. B 6. C 7. C 8. B 9. A 10. B
11. D 12. A 13. A 14. B 15. A

Practice Problems 2
1. B 2. B 3. B 4. B 5. A 6. A 7. C 8. B 9. C 10. C
11. B 12. A 13. D 14. C 15. B

Previous Years’ Questions


1. A 2. B 3. C 4. 148 5. 358 6. A 7. B 8. D 9. B 10. D
11. 8
