Daa C2

The document discusses various divide and conquer algorithms including binary search, maximum contiguous subarray problem, merge sort, and quick sort. It provides pseudocode for the algorithms and analyzes their time complexities, with merge sort having overall time complexity of O(n log n) and quick sort having worst case time complexity of O(n^2). The document also discusses the general divide and conquer approach of dividing problems into subproblems, conquering the subproblems recursively, and combining the solutions.


Design and Analysis of Algorithms
CHAPTER TWO: DIVIDE AND CONQUER

DAC
 Many useful algorithms are recursive in structure: to solve a given problem, they call themselves recursively one or more times to deal with closely related subproblems.
 These algorithms typically follow a divide-and-conquer approach:
 Divide: break the problem into several subproblems that are similar to the original problem but smaller in size (input size).
 Conquer: solve the subproblems recursively. If the subproblem sizes are small enough, however, just solve the subproblems directly (apply brute force).
 Combine: combine the solutions of the subproblems into a solution for the original problem.
Cont…
 When the subproblems are large enough to solve recursively, we call that the recursive case.
 Once the subproblems become small enough that we no longer recurse, we say that the recursion “bottoms out” and that we have reached the base case.
 Sometimes, in addition to subproblems that are smaller instances of the same problem, we have to solve subproblems that are not quite the same as the original problem; we consider solving such subproblems as part of the combine step.
Abstraction of DAC
 The general DAC design strategy can be expressed using a control abstraction as follows:

Algorithm DAC(P)
{
    if (small(P)) {
        return Solution(P)      // solve P directly (brute force)
    } else {
        divide P into smaller instances p1, p2, ..., pk
        apply DAC to each of these subproblems
        return Combine(DAC(p1), DAC(p2), ..., DAC(pk))
    }
}
 Exercise: characterize the time complexity of DAC at this abstraction level.
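At this level of abstraction, if splitting an instance of size n yields k subproblems of sizes n1, ..., nk, the cost of dividing and combining is g(n), and the cost of solving a small instance directly is f(n), the running time can be sketched as the recurrence:

```latex
T(n) =
\begin{cases}
f(n) & \text{if } n \text{ is small,}\\
T(n_1) + T(n_2) + \cdots + T(n_k) + g(n) & \text{otherwise.}
\end{cases}
```

For merge sort, for instance, k = 2, n1 = n2 = n/2, and g(n) = Θ(n), which gives T(n) = 2T(n/2) + Θ(n) = Θ(n log n).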
Binary search
 The binary search algorithm does not follow the complete divide-and-conquer strategy, since it does not involve a combine step.
 But binary search is the simplest algorithm with which to start the class on the divide-and-conquer strategy.
 The first thing to recall about binary search is that it works only on sorted lists. It then follows these steps:
1. Locate the midpoint of the array to search.
2. Determine whether the target is in the lower half or the upper half of the array.
 If in the lower half, make that half the array to search.
 If in the upper half, make that half the array to search.
3. Loop back to step 1 until the array to search has size one; if that single element does not match, return -1.
Binary search
 All divide-and-conquer algorithms can be implemented recursively, but not all recursively implemented algorithms are divide and conquer.
 Example: a recursive version of factorial and a recursive version of binary search:

int fact(int n) {
    if (n <= 1) {
        return 1;
    } else {
        return n * fact(n - 1);
    }
}

int binarySearch(int arr[], int left, int right, int elem) {
    int mid = (left + right) / 2;
    if (left > right) {
        return -1;
    } else {
        if (elem == arr[mid]) {
            return mid;
        } else if (elem < arr[mid]) {
            return binarySearch(arr, left, mid - 1, elem);
        } else {
            return binarySearch(arr, mid + 1, right, elem);
        }
    }
}

 But we discussed why binary search is not a pure divide-and-conquer algorithm. Likewise, the recursive version of factorial above is not DAC.


Max Contiguous Subarray
 In the maximum contiguous subarray problem, the goal is to find the maximum sum over all contiguous runs of elements of the array.
 In some cases we may be asked to also provide the starting and ending indices of the contiguous run of elements that yields the maximum sum.
 The maximum contiguous subarray can be found by brute force in O(n^2) time, but using a DAC strategy the problem can be solved in O(n log n) time.
 How can we deal with the maximum (or minimum) subarray problem using the divide-and-conquer design strategy?
Cont…
Recursive max contiguous subarray:

FMS(A, low, high)
1   if high == low
2       return A[low]                  // base case
3   else mid = (low + high)/2
4       M1 = FMS(A, low, mid)
5       M2 = FMS(A, mid + 1, high)
6       M3 = FMCS(A, low, mid, high)
7       return MAX(M1, M2, M3)

Find the maximum sum crossing between the subarrays:

FMCS(A, low, mid, high)
1   left-sum = -∞
2   sum = 0
3   for i = mid down to low
4       sum = sum + A[i]
5       if sum > left-sum
6           left-sum = sum
7   right-sum = -∞
8   sum = 0
9   for j = mid + 1 to high
10      sum = sum + A[j]
11      if sum > right-sum
12          right-sum = sum
13  return left-sum + right-sum        // both halves contribute to a crossing sum

 Characterize the recurrence function and analyze the time complexity of the MCS algorithm.
Merge sort
 The merge sort algorithm follows the divide-and-conquer paradigm. Intuitively, it operates as follows:
 Divide: divide the n-element sequence to be sorted into two subsequences of n/2 elements each.
 Conquer: sort the two subsequences recursively using merge sort.
 Combine: merge the two sorted subsequences to produce the sorted answer.
 We note that the recursion “bottoms out” when the sequence to be sorted has length 1, in which case there is no work to be done, since every sequence of length 1 is already in sorted order.
Cont…
 The key operation of the merge sort algorithm is the merging of two sorted sequences in the “combine” step.
 To perform the merging we use an auxiliary procedure MERGE(A, p, q, r), where A is an array and p, q and r are indices into the array such that p ≤ q ≤ r. The procedure assumes that the subarrays A[p .. q] and A[q + 1 .. r] are each already sorted.
 We can now use the MERGE procedure as a subroutine in the merge sort algorithm. The procedure MERGE-SORT(A, p, r) sorts the elements in the subarray A[p .. r]. If p ≥ r, the subarray has at most one element and is therefore already sorted.
 Otherwise, the divide step simply computes an index q that partitions A[p .. r] into two subarrays: A[p .. q], containing ⌈n/2⌉ elements, and A[q + 1 .. r], containing ⌊n/2⌋ elements.
Cont…
Recursive merge sort algorithm:

mergeSort(arr, left, right)
1   if left < right
2       mid = ⌊(left + right)/2⌋
3       mergeSort(arr, left, mid)
4       mergeSort(arr, mid + 1, right)
5       mergeCombine(arr, left, mid, right)

Merging (or combining) procedure:

mergeCombine(arr, left, mid, right)
1   n1 = mid - left + 1
2   n2 = right - mid
3   create arrays Larr[0 .. n1] and Rarr[0 .. n2]
4   for i = 0 to n1 - 1
5       Larr[i] = arr[left + i]
6   for j = 0 to n2 - 1
7       Rarr[j] = arr[mid + 1 + j]
8   Larr[n1] = ∞                 // sentinels: neither run is ever exhausted
9   Rarr[n2] = ∞
10  i = 0, j = 0
11  for k = left to right
12      if Larr[i] ≤ Rarr[j]
13          arr[k] = Larr[i]
14          i = i + 1
15      else arr[k] = Rarr[j]
16          j = j + 1

 Characterize the recurrence function and analyze the time complexity of merge sort.
Quick sort
 Quicksort is another sorting algorithm based on the divide-and-conquer paradigm. As opposed to merge sort and heap sort, quicksort has a relatively bad worst-case running time of O(n^2).
 Nevertheless, quicksort has proved to be very fast in practice, often faster than other sorting algorithms, hence the name. Theoretical evidence for this behavior can be provided by an average-case analysis.
 Quicksort is often the best practical choice for sorting because it is remarkably efficient on average: its expected running time is Θ(n log n), and it also has the advantage of sorting in place.
Cont…
Recursive quick sort algorithm:

quickSort(arr, left, right)
1   if left < right:
2       iop = Partition(arr, left, right)
3       quickSort(arr, left, iop - 1)
4       quickSort(arr, iop + 1, right)

Partition procedure:

Partition(arr, left, right)
1   pivot = arr[right]
2   i = left - 1
3   for j = left to right - 1:
4       if arr[j] < pivot:
5           i = i + 1
6           swap arr[i] with arr[j]
7   swap arr[i + 1] with arr[right]    // put the pivot in its final place
8   return i + 1

 Characterize the recurrence function and analyze the time complexity of quick sort. Pay attention to the worst, best, and average cases!
