
UNIT III ALGORITHM DESIGN TECHNIQUES

Divide and Conquer methodology: Finding maximum and minimum - Merge sort - Quick sort.
Dynamic programming: Elements of dynamic programming - Matrix-chain multiplication -
Multistage graph - Optimal Binary Search Trees. Greedy Technique: Elements of the greedy
strategy - Activity-selection problem - Optimal Merge pattern - Huffman Trees.

******************************************************************************

DIVIDE AND CONQUER ALGORITHM DEFINITION:

A Divide and Conquer algorithm breaks a larger problem into smaller subproblems, solves them
independently, and then combines their solutions to solve the original problem. The basic idea is
to recursively divide the problem into smaller subproblems until they become simple enough to be
solved directly. Once the solutions to the subproblems are obtained, they are combined to produce
the overall solution (a minimal sketch follows the three steps below).

1. Divide:

• Break down the original problem into smaller subproblems.
• Each subproblem should represent a part of the overall problem.
• The goal is to keep dividing until the subproblems are small enough to be solved directly.

2. Conquer:

• Solve each of the smaller subproblems individually.
• If a subproblem is small enough (often referred to as the “base case”), solve it directly
without further recursion.
• The goal is to find solutions for these subproblems independently.

3. Merge:

• Combine the solutions of the subproblems to get the final solution of the whole problem.
• As the recursion unwinds, the solutions of the smaller subproblems are combined to form the
solution of the larger problem.
• The goal is to formulate a solution for the original problem by merging the results from
the subproblems.
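
To make the three phases concrete, here is a minimal sketch in Python (an illustrative example,
not part of the original notes; the problem of summing an array and the name dc_sum are ours).
The same skeleton applies to the problems discussed below.

# Minimal divide-and-conquer sketch (illustrative problem: array sum).
def dc_sum(arr, low, high):
    # Conquer (base case): a single element is simple enough to solve directly.
    if low == high:
        return arr[low]
    # Divide: split the index range into two halves.
    mid = (low + high) // 2
    # Conquer: solve each half recursively.
    left_sum = dc_sum(arr, low, mid)
    right_sum = dc_sum(arr, mid + 1, high)
    # Merge: combine the two partial results.
    return left_sum + right_sum

# Example: dc_sum([3, 1, 4, 1, 5], 0, 4) returns 14.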

FINDING MAXIMUM AND MINIMUM

1. Finding the maximum element in the array:

We can use the Divide and Conquer approach to find the maximum element in the array by dividing
the array into two equal-sized subarrays and recursively finding the maximum of each half, again
dividing each half into two smaller halves. This continues until we reach subarrays of size 1,
whose single element is returned directly. The results are then combined by returning the larger
of the two maxima at each level.
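
In Python this can be written as follows (a sketch; the function name find_max is illustrative,
not from the notes):

# Maximum element by divide and conquer.
def find_max(arr, low, high):
    # Base case: a subarray of size 1 returns its only element.
    if low == high:
        return arr[low]
    # Divide the subarray at its midpoint.
    mid = (low + high) // 2
    left_max = find_max(arr, low, mid)        # maximum of the left half
    right_max = find_max(arr, mid + 1, high)  # maximum of the right half
    # Combine: return the larger of the two maxima.
    return left_max if left_max > right_max else right_max

# Example: find_max([7, 2, 9, 4], 0, 3) returns 9.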

2. Finding the minimum element in the array:

Similarly, we can use the Divide and Conquer approach to find the minimum element in the array by
dividing the array into two equal-sized subarrays and recursively finding the minimum of each
half. This continues until we reach subarrays of size 1, and the results are combined by
returning the smaller of the two minima at each level.

3. Finding the maximum and minimum element in the array:

In the Divide and Conquer approach (a combined sketch follows the steps):

Step 1: Find the mid of the array.

Step 2: Find the maximum and minimum of the left sub array recursively.

Step 3: Find the maximum and minimum of the right sub array recursively.

Step 4: Compare the results of Step 2 and Step 3.

Step 5: Return the minimum and maximum.
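
The steps above can be combined into one recursive function that returns both values. The sketch
below (the function name find_min_max is ours) is one possible way to write it in Python:

# Minimum and maximum together by divide and conquer.
def find_min_max(arr, low, high):
    # Base case: one element is both the minimum and the maximum.
    if low == high:
        return arr[low], arr[low]
    # Step 1: find the mid of the array.
    mid = (low + high) // 2
    # Steps 2 and 3: solve the left and right subarrays recursively.
    left_min, left_max = find_min_max(arr, low, mid)
    right_min, right_max = find_min_max(arr, mid + 1, high)
    # Steps 4 and 5: compare the two results and return the overall minimum and maximum.
    return min(left_min, right_min), max(left_max, right_max)

# Example: find_min_max([7, 2, 9, 4], 0, 3) returns (2, 9).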

MERGE SORT:

Merge sort is a popular sorting algorithm known for its efficiency and stability. It follows the
divide-and-conquer approach to sort a given array of elements.

Here’s a step-by-step explanation of how merge sort works:

Divide: Recursively divide the list or array into two halves until each subarray contains only one element.

Conquer: Each subarray is sorted individually using the merge sort algorithm.

Merge: The sorted subarrays are merged back together in sorted order. The process continues
until all elements from both subarrays have been merged.
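
The following Python sketch follows these three steps; it returns a new sorted list rather than
sorting in place, which is one possible implementation, not the only one:

# Merge sort: divide, conquer, and merge.
def merge_sort(arr):
    # Base case: a list of zero or one elements is already sorted.
    if len(arr) <= 1:
        return arr
    # Divide: split the list into two halves.
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # Conquer: sort the left half.
    right = merge_sort(arr[mid:])   # Conquer: sort the right half.
    # Merge: combine the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:     # <= keeps equal elements in order (stability)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

# Example: merge_sort([5, 2, 4, 7, 1]) returns [1, 2, 4, 5, 7].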

Complexity Analysis of Merge Sort

Time Complexity:

Best Case: O(n log n), when the array is already sorted or nearly sorted.

Average Case: O(n log n), when the array is randomly ordered.

Worst Case: O(n log n), when the array is sorted in reverse order.

Auxiliary Space: O(n), additional space is required for the temporary array used during merging.

QUICK SORT:

Quick Sort works on the principle of divide and conquer, breaking down the problem into
smaller sub-problems.

There are mainly three steps in the algorithm:

Choose a Pivot: Select an element from the array as the pivot. The choice of pivot can vary (e.g.,
first element, last element, random element, or median).

Partition the Array: Rearrange the array around the pivot. After partitioning, all elements
smaller than the pivot will be on its left, and all elements greater than the pivot will be on its
right. The pivot is then in its correct position, and we obtain the index of the pivot.

Recursively Call: Recursively apply the same process to the two partitioned sub-arrays (left and
right of the pivot).

Base Case: The recursion stops when there is only one element left in the sub-array, as a single
element is already sorted.
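
A minimal in-place Python sketch using the last element as the pivot (a Lomuto-style partition;
other pivot choices and partition schemes mentioned above are equally valid):

# Quick sort with last-element pivot.
def quick_sort(arr, low, high):
    # Base case: a subarray with zero or one element is already sorted.
    if low >= high:
        return
    # Partition: move elements smaller than the pivot to its left.
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    p = i + 1  # final index of the pivot
    # Recursively sort the sub-arrays on either side of the pivot.
    quick_sort(arr, low, p - 1)
    quick_sort(arr, p + 1, high)

# Example: data = [9, 3, 7, 1]; quick_sort(data, 0, len(data) - 1) leaves data as [1, 3, 7, 9].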

Complexity Analysis of Quick Sort

Time Complexity:

Best Case: Ω(n log n), occurs when the pivot divides the array into two roughly equal halves.

Average Case: Θ(n log n), on average the pivot splits the array into two parts that need not be equal.

Worst Case: O(n²), occurs when the smallest or largest element is always chosen as the pivot
(e.g., already sorted arrays).

Auxiliary Space: O(n) in the worst case due to the recursion call stack (O(log n) on average).
