ADA Unit II GCR
• If n > 1, divide the problem into two instances of the same problem:
(i) compute the sum of the first ⌊n/2⌋ numbers;
(ii) compute the sum of the remaining ⌈n/2⌉ numbers.
• Once each of these two sums is computed by applying the same method
recursively, we can add their values to get the sum in question:
a_0 + . . . + a_(n−1) = (a_0 + . . . + a_(⌊n/2⌋−1)) + (a_(⌊n/2⌋) + . . . + a_(n−1))
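A minimal runnable sketch of this summation scheme in C (the function name dc_sum and the sample array are illustrative choices, not from the notes):

#include <stdio.h>

/* Divide-and-conquer summation: split the range in half, sum each half
 * recursively, and add the two partial sums. */
static int dc_sum(const int a[], int n) {
    if (n == 1)                               /* base case: one number */
        return a[0];
    int left  = dc_sum(a, n / 2);             /* first floor(n/2) numbers    */
    int right = dc_sum(a + n / 2, n - n / 2); /* remaining ceil(n/2) numbers */
    return left + right;
}

int main(void) {
    int a[] = {3, 1, 4, 1, 5, 9, 2, 6};
    printf("%d\n", dc_sum(a, 8));             /* prints 31 */
    return 0;
}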
General Divide-and-Conquer recurrence
• In the most typical case, a problem’s instance of size n is divided into two instances of size n/2.
• More generally, an instance of size n can be divided into b instances of size n/b, with a of them needing to be solved, leading to the general divide-and-conquer recurrence
T(n) = aT(n/b) + f(n),
where f(n) is a function that accounts for the time spent on dividing an instance of size n into instances of size n/b and combining their solutions.
• The order of growth of its solution T(n) depends on the values of the constants a and b and the order of growth of the function f(n).
Master Theorem
• If f(n) ∈ Θ(n^d) with d ≥ 0 in the general recurrence T(n) = aT(n/b) + f(n), then
T(n) ∈ Θ(n^d)           if a < b^d,
T(n) ∈ Θ(n^d log n)     if a = b^d,
T(n) ∈ Θ(n^(log_b a))   if a > b^d.
• The recurrence for the number of additions A(n) made by the divide-and-
conquer sum-computation algorithm on inputs of size n = 2^k is
A(n) = 2A(n/2) + 1 for n > 1, A(1) = 0.
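A worked application of the theorem to this recurrence (here a = 2, b = 2, and f(n) = 1 ∈ Θ(n^0), so d = 0 and a > b^d):

\[
  A(n) \in \Theta\bigl(n^{\log_b a}\bigr) = \Theta\bigl(n^{\log_2 2}\bigr) = \Theta(n)
\]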
Merge Sort Algorithm:
Merge sort is a perfect example of a successful application of the divide-and-conquer
technique.
Split array A[0..n−1] in two and make copies of each half in arrays B[0..⌊n/2⌋−1] and C[0..⌈n/2⌉−1];
sort arrays B and C recursively, then merge the two sorted halves back into A as follows:
compare the first elements in the remaining unprocessed portions of the arrays
copy the smaller of the two into A, while incrementing the index indicating the
unprocessed portion of that array
Once all elements in one of the arrays are processed, copy the remaining
unprocessed elements from the other array into A.
Algorithm: MergeSort (A[0..n−1])
//This algorithm sorts array A[0..n−1] by recursive mergesort.
//Input: An array A[0..n−1] of orderable elements.
//Output: Array A[0..n−1] sorted in non-decreasing order.
{
  if n > 1 {
    copy A[0..⌊n/2⌋−1] to B[0..⌊n/2⌋−1]
    copy A[⌊n/2⌋..n−1] to C[0..⌈n/2⌉−1]
    MergeSort (B[0..⌊n/2⌋−1])
    MergeSort (C[0..⌈n/2⌉−1])
    Merge (B, C, A)
  }
}

Algorithm: Merge (B[0..p−1], C[0..q−1], A[0..p+q−1])
//Merges two sorted arrays into one sorted array.
//Input: Arrays B[0..p−1] and C[0..q−1], both sorted.
//Output: Sorted array A[0..p+q−1] of the elements of B and C.
{
  i ← 0; j ← 0; k ← 0
  while i < p and j < q do {
    if B[i] ≤ C[j]
      A[k] ← B[i]; i ← i + 1
    else
      A[k] ← C[j]; j ← j + 1
    k ← k + 1
  }
  if i = p
    copy C[j..q−1] to A[k..p+q−1]
  else
    copy B[i..p−1] to A[k..p+q−1]
}
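The same two routines in runnable C, as a sketch assuming int keys and C99 variable-length arrays for the temporary halves (the function names merge and merge_sort are mine):

#include <stdio.h>
#include <string.h>

/* Merge sorted halves b[0..p-1] and c[0..q-1] into a[0..p+q-1]. */
static void merge(const int b[], int p, const int c[], int q, int a[]) {
    int i = 0, j = 0, k = 0;
    while (i < p && j < q) {
        if (b[i] <= c[j]) a[k++] = b[i++];
        else              a[k++] = c[j++];
    }
    if (i == p)
        memcpy(a + k, c + j, (size_t)(q - j) * sizeof(int));   /* rest of c */
    else
        memcpy(a + k, b + i, (size_t)(p - i) * sizeof(int));   /* rest of b */
}

/* Recursive mergesort of a[0..n-1], mirroring the pseudocode above. */
static void merge_sort(int a[], int n) {
    if (n > 1) {
        int p = n / 2, q = n - n / 2;
        int b[p], c[q];                       /* copies of the two halves */
        memcpy(b, a, (size_t)p * sizeof(int));
        memcpy(c, a + p, (size_t)q * sizeof(int));
        merge_sort(b, p);
        merge_sort(c, q);
        merge(b, p, c, q, a);
    }
}

int main(void) {
    int a[] = {8, 3, 2, 9, 7, 1, 5, 4};
    merge_sort(a, 8);
    for (int i = 0; i < 8; i++) printf("%d ", a[i]);   /* 1 2 3 4 5 7 8 9 */
    printf("\n");
    return 0;
}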
Steps for Merge Sort
• Step 1: Create duplicate copies of sub-arrays to be sorted
• Step 2: While neither copy is exhausted, compare the current elements of both arrays and copy the smaller one back into the original array.
• Step 3: Once one copy is exhausted, copy all remaining elements of the non-empty array.
Efficiency of Mergesort
• Assuming that n is a power of 2, the recurrence relation for the number of key comparisons C(n) is
C(n) = 2C(n/2) + C_merge(n) for n > 1, C(1) = 0.
• In the worst case, C_merge(n) = n − 1, so the recurrence is
C_worst(n) = 2C_worst(n/2) + n − 1 for n > 1, C_worst(1) = 0,
whose solution (worked out after this list) is C_worst(n) = n log2 n − n + 1, which is
close to the theoretical minimum ⌈log2 n!⌉ ≈ n log2 n − 1.44n that any general comparison-based sorting algorithm
can have.
• For large n, the number of comparisons made by this algorithm in the average case turns out to be about 0.25n less than in the worst case and hence is also in Θ(n log n).
• The principal disadvantage of mergesort is the linear amount of extra storage the algorithm requires.
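A worked solution of the worst-case recurrence by backward substitution for n = 2^k, confirming the count quoted above:

\begin{align*}
C_{worst}(n) &= 2C_{worst}(n/2) + n - 1
              = 2^{k}C_{worst}(1) + \sum_{i=0}^{k-1} 2^{i}\!\left(\frac{n}{2^{i}} - 1\right) \\
             &= \sum_{i=0}^{k-1} (n - 2^{i}) = kn - (2^{k} - 1) = n\log_2 n - n + 1 \in \Theta(n \log n).
\end{align*}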
• There are two main ideas leading to several variations of mergesort.
• First, the algorithm can be implemented bottom-up: merge pairs of the array’s elements, then merge the sorted pairs, and so on, until the whole array is sorted. This avoids the time and space overhead of using a stack to handle recursive calls.
• Second, we can divide a list to be sorted into more than two parts, sort each recursively, and then merge them together (multiway mergesort).
• In a partition of the array, all the elements to the left of some element A[s] are less than or equal to A[s],
• and all the elements to the right of A[s] are greater than or equal to it (A[s] is the pivot element).
Quicksort
Step 1 − Choose the value at the highest index as the pivot.
Step 2 − Take two variables to point left and right of the list, excluding the pivot.
Step 3 − Left points to the low index.
Step 4 − Right points to the high index.
Step 5 − While the value at left is less than the pivot, move right.
Step 6 − While the value at right is greater than the pivot, move left.
Step 7 − If both step 5 and step 6 do not match, swap left and right.
Step 8 − If left ≥ right, the point where they met is the new pivot position.
Quicksort
Step 1 - Consider the first element of the list as pivot
(i.e., Element at first position in the list).
Step 2 - Define two variables i and j.
Set i and j to first and last elements of the list respectively.
Step 3 - Increment i until list[i] > pivot then stop.
Step 4 - Decrement j until list[j] < pivot then stop.
Step 5 - If i < j then exchange list[i] and list[j].
Step 6 - Repeat steps 3,4 & 5 until i > j.
Step 7 - Exchange the pivot element with list[j] element.
Quicksort
• After a partition is achieved, A[s] will be in its final position in the
sorted array, and we can continue sorting the two subarrays to the left and
to the right of A[s] independently.
• In quicksort, the entire work happens in the division stage, with no
work required to combine the solutions to the subproblems.
ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort, using HoarePartition to find the split position
ALGORITHM HoarePartition(A[l..r])
//Partitions a subarray by Hoare’s algorithm, using the first element as the pivot
Quicksort
• To partition A[0..n − 1], select a pivot p (here, the subarray’s first element) and
scan the subarray from both ends, comparing the subarray’s elements to
the pivot.
• The left-to-right scan, denoted below by index pointer i, starts with the
second element.
• Since we want elements smaller than the pivot to be in the left part of the
subarray, this scan skips over elements that are smaller than the pivot and
stops upon encountering the first element greater than or equal to the pivot.
• The right-to-left scan, denoted below by index pointer j, starts with the last
element of the subarray.
• Since we want elements larger than the pivot to be in the right part of the
subarray, this scan skips over elements that are larger than the pivot and
stops on encountering the first element smaller than or equal to the pivot.
Quicksort
After both scans stop, three situations may arise, depending on whether or not
the scanning indices have crossed.
If i < j (the indices have not crossed), we exchange A[i] and A[j] and resume the scans.
If i > j (the indices have crossed over), the subarray is partitioned after exchanging
the pivot with A[j].
Finally, if the scanning indices stop while pointing to the same element,
i.e., i = j, the value they are pointing to must be equal to p (why?).
Thus, we have the subarray partitioned, with the split position s = i = j.
We can combine the last case with the case of crossed-over indices
(i > j) by exchanging the pivot with A[j] whenever i ≥ j (a runnable sketch follows).
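A runnable C sketch consistent with this description, using the first element of the subarray as the pivot (the bounds check i <= r and the swap-only-when-not-crossed detail are my implementation choices, not necessarily identical to the HoarePartition pseudocode):

#include <stdio.h>

static void swap(int a[], int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

/* Partition a[l..r] around the pivot p = a[l]; returns the split position s
 * such that a[l..s-1] <= a[s] <= a[s+1..r]. */
static int hoare_partition(int a[], int l, int r) {
    int p = a[l];
    int i = l, j = r + 1;
    do {
        do { i++; } while (i <= r && a[i] < p);   /* stop on element >= pivot */
        do { j--; } while (a[j] > p);             /* stop on element <= pivot */
        if (i < j) swap(a, i, j);                 /* indices not crossed yet  */
    } while (i < j);
    swap(a, l, j);                                /* place pivot at position j */
    return j;
}

/* Sort subarray a[l..r] by recursive quicksort. */
static void quicksort(int a[], int l, int r) {
    if (l < r) {
        int s = hoare_partition(a, l, r);
        quicksort(a, l, s - 1);
        quicksort(a, s + 1, r);
    }
}

int main(void) {
    int a[] = {5, 3, 1, 9, 8, 2, 4, 7};
    quicksort(a, 0, 7);
    for (int i = 0; i < 8; i++) printf("%d ", a[i]);   /* 1 2 3 4 5 7 8 9 */
    printf("\n");
    return 0;
}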
Quicksort
• In the worst case, all the splits will be skewed to the extreme: one of the two
subarrays will be empty, and the size of the other will be just 1 less than the
size of the subarray being partitioned. This happens, in particular, for an already
sorted (increasing) array with A[0] as the pivot: the left-to-right scan will stop on A[1]
while the right-to-left scan will go all the way back to A[0], and the running time degrades to Θ(n^2).
• Several improvements have been suggested: better pivot selection (for example, the
median-of-three rule, which uses the median of the leftmost, rightmost, and middle
elements of the array), switching to insertion sort on small subarrays (or not sorting
small subarrays at all and finishing the algorithm with insertion sort applied to the
entire, nearly sorted array), and modifications of the partitioning algorithm.
Binary Tree Traversals and Related Properties
• A binary tree is either empty or consists of a root and two disjoint binary trees (its left and right subtrees); algorithms on binary trees are natural applications of the divide-and-conquer technique.
• For example, the height of a binary tree, defined as the length of the longest path from the
root to a leaf, can be computed recursively as Height(T) = max{Height(T_left), Height(T_right)} + 1, with the height of the empty tree taken to be −1.
• For the empty tree, the comparison T = ∅ is executed once but there
are no additions, and for a single-node tree, the comparison and
addition numbers are 3 and 1, respectively.
• The extra nodes that replace the empty subtrees in the tree’s extension are called external; the original nodes are called internal.
• The classic traversals (preorder, inorder, and postorder) visit all nodes of a binary tree recursively,
i.e., by visiting the tree’s root and its left and right subtrees; they differ only in the timing of the root’s visit.
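A small runnable C sketch of the recursive height computation and a preorder traversal (the Node struct and helper names are illustrative assumptions):

#include <stdio.h>
#include <stdlib.h>

/* Minimal binary tree node. */
typedef struct Node {
    int key;
    struct Node *left, *right;
} Node;

static Node *make(int key, Node *left, Node *right) {
    Node *n = malloc(sizeof *n);
    n->key = key; n->left = left; n->right = right;
    return n;
}

/* Height(T) = max{Height(T_left), Height(T_right)} + 1, Height(empty) = -1. */
static int height(const Node *t) {
    if (t == NULL) return -1;
    int hl = height(t->left), hr = height(t->right);
    return (hl > hr ? hl : hr) + 1;
}

/* Preorder traversal: visit the root, then the left subtree, then the right. */
static void preorder(const Node *t) {
    if (t == NULL) return;
    printf("%d ", t->key);
    preorder(t->left);
    preorder(t->right);
}

int main(void) {
    /* Root 1 with left child 2 (which has left child 4) and right child 3. */
    Node *t = make(1, make(2, make(4, NULL, NULL), NULL), make(3, NULL, NULL));
    preorder(t);                            /* prints: 1 2 4 3 */
    printf("\nheight = %d\n", height(t));   /* height = 2 */
    return 0;
}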
Multiplication of Large Integers
• Consider the problem of multiplying two large n-digit integers.
• (If one of the numbers has fewer digits than the other, we can pad the
shorter number with leading zeros to equalize their lengths.)
• For two-digit numbers a = a1a0 and b = b1b0,
c = a ∗ b = c2·10^2 + c1·10^1 + c0,
where c2 = a1 ∗ b1, c0 = a0 ∗ b0, and c1 = (a1 + a0) ∗ (b1 + b0) − (c2 + c0),
so only three digit multiplications are needed instead of four.
• The same idea extends to n-digit numbers (n even) by the divide-and-conquer
technique: let a1 and a0 denote the first and second halves of a’s digits, and b1 and b0
the corresponding halves of b’s digits.
• a = a1a0 implies that a = a1·10^(n/2) + a0, and
b = b1b0 implies that b = b1·10^(n/2) + b0.
• Therefore, taking advantage of the same trick we used for two-digit numbers,
we get
c = a ∗ b = (a1·10^(n/2) + a0) ∗ (b1·10^(n/2) + b0)
  = (a1 ∗ b1)·10^n + (a1 ∗ b0 + a0 ∗ b1)·10^(n/2) + (a0 ∗ b0)
  = c2·10^n + c1·10^(n/2) + c0,
where c2 = a1 ∗ b1 is the product of the first halves, c0 = a0 ∗ b0 is the product of the
second halves, and c1 = (a1 + a0) ∗ (b1 + b0) − (c2 + c0)
is the product of the sum of the a’s halves and the sum of the b’s
halves minus the sum of c2 and c0.
• If n/2 is even, we can apply the same method for computing the products
c2, c0, and c1.
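A runnable C sketch of the same three-multiplication scheme on machine integers, splitting by a power of 10 (the function name karatsuba and the use of long long are my own choices; real large-integer code would operate on digit arrays):

#include <stdio.h>

/* Multiply non-negative integers with three recursive multiplications:
 * c2 = a1*b1, c0 = a0*b0, c1 = (a1+a0)*(b1+b0) - (c2+c0). */
static long long karatsuba(long long a, long long b) {
    if (a < 10 || b < 10)                     /* single-digit operand: multiply directly */
        return a * b;

    long long big = a > b ? a : b;            /* pick the split point from the longer operand */
    int digits = 0;
    for (long long t = big; t > 0; t /= 10) digits++;
    int half = digits / 2;

    long long p = 1;                          /* p = 10^half */
    for (int i = 0; i < half; i++) p *= 10;

    long long a1 = a / p, a0 = a % p;         /* a = a1*10^half + a0 */
    long long b1 = b / p, b0 = b % p;         /* b = b1*10^half + b0 */

    long long c2 = karatsuba(a1, b1);                         /* first halves  */
    long long c0 = karatsuba(a0, b0);                         /* second halves */
    long long c1 = karatsuba(a1 + a0, b1 + b0) - (c2 + c0);   /* middle term   */

    return c2 * p * p + c1 * p + c0;
}

int main(void) {
    printf("%lld\n", karatsuba(2135, 4014));   /* prints 8569890 */
    return 0;
}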