DAA Module 4

The document discusses sorting algorithms, specifically Merge Sort and Quick Sort, detailing their methodologies, algorithms, and time complexities. It also covers NP-Hard and NP-Complete problems, explaining the classifications of computational problems based on their solvability in polynomial time. The document provides definitions and examples to illustrate the concepts of tractable and intractable problems, along with the relationship between NP-Hard and NP-Complete problems.

MODULE IV

1. MERGE SORT

2. QUICK SORT

3. NP-HARD AND NP-COMPLETE PROBLEMS

PraveenKumar P K, UIT Kollam


1. Merge Sort
Merge sort is a sorting technique based on the divide and conquer technique. With a worst-case
time complexity of O(n log n), it is one of the most respected algorithms. Merge sort
first divides the array into equal halves and then combines them in a sorted manner. It
includes these steps:
Step 1: Divide – The whole list is divided into two sublists of n/2 elements each.
Step 2: Conquer – Sort each sublist recursively.
Step 3: Combine – Merge the two sorted sublists to produce the sorted answer.
Merge Sort Working
To understand merge sort, consider the unsorted array {14, 33, 27, 10, 35, 19, 42, 44}.

Merge sort first divides the whole array iteratively into equal halves until atomic values
are reached. Here an array of 8 items is divided into two arrays of size 4.

This does not change the order of appearance of items in the original array. Now we divide
these two arrays into halves.

We divide these arrays further until we reach atomic values, which can no longer be divided.

Now we combine them in exactly the same manner as they were broken down.
We first compare the elements of each list and then combine them into another list in a
sorted manner. We see that 14 and 33 are in sorted positions. We compare 27 and 10, and in
the target list of 2 values we put 10 first, followed by 27. We change the order of 19 and
35, whereas 42 and 44 are placed sequentially.

In the next iteration of the combining phase, we compare lists of two data values and
merge them into lists of four data values, placing all in sorted order.



After the final merging, the whole list is sorted.

The complete merge sort process for an example array {38, 27, 43, 3, 9, 82, 10} works as
follows: the array is recursively divided into two halves till the size becomes 1. Once the
size becomes 1, the merge process comes into action and starts merging arrays back till the
complete array is merged.

Algorithm
Merge sort keeps on dividing the list into equal halves until it can no longer be divided.
By definition, if there is only one element in the list, it is sorted. Then merge sort
combines the smaller sorted lists, keeping the new list sorted too.
Step 1 − If there is only one element in the list, it is already sorted; return.
Step 2 − Divide the list recursively into two halves until it can no longer be divided.
Step 3 − Merge the smaller lists into a new list in sorted order.
To accomplish the whole task we use two procedures, ‘MergeSort’ and ‘Merge’.



Procedure MergeSort(A, start, finish)
Array A contains n elements. Variables ‘length’ and ‘mid’ refer to the number of
elements in the current sublist and the position of the middle element of the sublist.
Step 1 Computation, size of current sublist
Set length <- finish – start + 1
Step 2 Condition checking, if length is one
if (length <= 1) then return
Step 3 Calculating middle point
Set mid <- start + (length / 2) - 1
Step 4 Solving first sublist, recursively
Call MergeSort(A, start, mid)
Step 5 Solving second sublist, recursively
Call MergeSort(A, mid + 1, finish)
Step 6 Merging two sublists, sorted
Call Merge(A, start, mid + 1, finish)
Step 7 Finish
return
Procedure Merge(A, first, second, third)
Step 1 Initialisation
Set n <- 0, f <- first, s <- second
Step 2 Comparison, copying the smaller element while both sublists are non-empty
repeat while (f < second and s <= third)
if (A[f] <= A[s]) then
set n <- n + 1
set temp[n] <- A[f]
set f <- f + 1
else
set n <- n + 1
set temp[n] <- A[s]
set s <- s + 1
end loop
Step 3 Copying remaining elements
if (f >= second) then
repeat while (s <= third)
set n <- n + 1
set temp[n] <- A[s]
set s <- s + 1
end loop
else
repeat while (f < second)
set n <- n + 1
set temp[n] <- A[f]
set f <- f + 1
end loop
Step 4 Copying elements back to the original array
Repeat for f <- 1, 2, …, n
A[first – 1 + f] <- temp[f]
End loop
Step 5 Finish
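The two procedures above can be sketched in C as follows. This is a hedged translation, not the original pseudocode verbatim: it uses 0-based array indices and a heap-allocated temporary buffer in place of the 1-based temp array.

```c
#include <stdlib.h>
#include <string.h>

/* Merge the two sorted runs a[first..mid] and a[mid+1..last]
   into a temporary buffer, then copy back (0-based indices). */
void merge(int a[], int first, int mid, int last)
{
    int n = last - first + 1;
    int *temp = malloc(n * sizeof *temp);
    int f = first, s = mid + 1, k = 0;

    while (f <= mid && s <= last)            /* pick the smaller head */
        temp[k++] = (a[f] <= a[s]) ? a[f++] : a[s++];
    while (f <= mid)  temp[k++] = a[f++];    /* copy any leftovers    */
    while (s <= last) temp[k++] = a[s++];

    memcpy(a + first, temp, n * sizeof *temp);  /* back to original */
    free(temp);
}

/* Divide a[start..finish] in half, sort each half, then merge. */
void merge_sort(int a[], int start, int finish)
{
    if (finish - start + 1 <= 1)
        return;                               /* one element: sorted */
    int mid = start + (finish - start) / 2;
    merge_sort(a, start, mid);
    merge_sort(a, mid + 1, finish);
    merge(a, start, mid, finish);
}
```

For example, calling merge_sort on the eight-element array from the walkthrough sorts it in place.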
Analysis
Let an array of size n be sorted using merge sort. Since we take elements in pairs and merge
each pair with another sorted pair, merge sort requires at most log2 n passes. In each pass,
the total number of comparisons can be at most n. Hence merge sort requires at most
n*log2 n comparisons, which is O(n log2 n). The main disadvantage of merge sort is its space
requirement: it requires extra space of O(n).
The time complexity of merge sort is O(n log n) in all 3 cases (worst, average and best), as
merge sort always divides the array into two halves and takes linear time to merge the two
halves.
2. Quick Sort
The quick sort technique is based on the Divide and Conquer design technique. At every step,
one element (the pivot) is placed at its proper position. It performs very well on longer
lists. It works recursively, by first selecting a pivot value from the list. It then
partitions the list into elements that are less than the pivot and elements that are greater
than the pivot.
Given an array Q[p…r], on the basis of Divide and Conquer quick sort works as follows:
 Divide: Q[p…r] is partitioned into Q[p….q] and Q[q+1….r], with q determined as part of
the division.
 Conquer: Q[p….q] and Q[q+1….r] are then sorted recursively.
 Combine: None, as the subarrays are sorted in place.
The process for sorting the elements through quick sort is:
1. Take the first element of the list as the pivot.
2. Place the pivot at its proper place in the list, so that one element of the list (the
pivot) is at its proper position.
3. Create two sublists, to the left and right of the pivot.
4. Repeat the same process until every element of the list is at its proper position.
For placing the pivot at its proper place we need the following process:
1. Compare elements one by one from right to left with the pivot, to find an element
whose value is less than the pivot.



2. Interchange that element with the pivot element.
3. Now the comparison starts from the interchanged element's position, moving left to
right, to find an element whose value is greater than the pivot.
4. Repeat the same process until the pivot is at its proper position.
Algorithm
Procedure QuickSort(Q, p, r)
This procedure sorts the given list of elements recursively. It makes a recursive call to
QuickSort. For partitioning the list it makes a call to the Partition procedure.
Step 1 Checking
if (p < r) then
Step 2 Calling Procedure Partition
Set q <- call to Partition(Q, p, r)
Step 3 First sublist
Call to QuickSort(Q, p, q)
Step 4 Second sublist
Call to QuickSort(Q, q+1, r)
Procedure Partition(Q, p, r)
Step 1 Initialization
Set x <- Q[p]
Set i <- p
Set j <- r
Step 2 Scanning
while (1)
while (Q[i] < x)
set i <- i + 1
while (Q[j] > x)
set j <- j - 1
if (i < j) then
set Q[i] <-> Q[j]
else
return j
End while (1)
Example. Consider the list (50, 40, 20, 60, 80, 100, 45, 70, 105, 30, 90, 75). We have to
sort the list using the quick sort technique.
Solution: Take the first element, 50, as the pivot.
50 40 20 60 80 100 45 70 105 30 90 75



Scanning from right to left, the first number visited that has value less than 50 is 30. Thus
exchange both of them.
30 40 20 60 80 100 45 70 105 50 90 75
Scanning from left to right, the first number visited that has value greater than 50 is 60.
Thus, exchange both of them.
30 40 20 50 80 100 45 70 105 60 90 75
Scanning from right to left, the first number visited that has value less than 50 is 45, so
exchange both of them.
30 40 20 45 80 100 50 70 105 60 90 75
Scanning from left to right, the first number visited that has value greater than 50 is 80,
exchange both of them.
30 40 20 45 50 100 80 70 105 60 90 75
After scanning, the number 50 is placed at its proper position and we get two sublists,
Sublist I and Sublist II. Sublist II has values greater than 50 while Sublist I has smaller
values. The whole process is repeated for both the sublists so obtained.
30 40 20 45 50 100 80 70 105 60 90 75
Sublist I Sublist II
Thus, after applying the same method again and again to the new sublists until we reach
sublists that cannot be divided further, the final list we get is the sorted list.
20 30 40 45 50 60 70 75 80 90 100 105
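The first partitioning pass above can be reproduced with the Partition procedure written in C (0-based indices; note that this variant assumes the pivot value occurs only once in the list, since swapping two elements both equal to the pivot would not advance the scans):

```c
/* Partition from the text: the pivot is the first element; scan i
   left-to-right for a value >= pivot and j right-to-left for a
   value <= pivot, swapping out-of-place pairs until the scans
   meet, then return the split point. */
int partition(int q[], int p, int r)
{
    int pivot = q[p];
    int i = p, j = r;
    while (1) {
        while (q[i] < pivot) i++;   /* left-to-right scan  */
        while (q[j] > pivot) j--;   /* right-to-left scan  */
        if (i < j) {                /* out-of-place pair: swap */
            int t = q[i];
            q[i] = q[j];
            q[j] = t;
        } else {
            return j;               /* pivot now sits at index j */
        }
    }
}
```

Running it on the example list places 50 at index 4 and leaves the array as 30 40 20 45 50 100 80 70 105 60 90 75, matching the trace above.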
Analysis
The time requirement of quick sort depends on the position of the pivot in the list, i.e.,
how the pivot divides the list into sublists. The division may be equal, or the list may not
divide at all.
Average Case: In the average case we assume that the list is divided equally: the list
splits into two sublists, these two into four sublists, and so on. The total number of
sublists at a particular level l will be 2^(l-1), so the total number of levels will be
log2 n. The number of comparisons at any level is at most n. So the running time of quick
sort will be O(n log n).
Worst Case: Suppose the list of elements is already in sorted order. The pivot is then the
first element, so partitioning produces only one sublist, which lies to the right of the
first element and starts from the second element. Similarly, each further sublist is created
only on the right side. The number of comparisons for the first element is n, the second
element requires n-1 comparisons, and so on. So the total number of comparisons will be
n + (n-1) + (n-2) + ⋯ + 3 + 2 + 1, which is O(n²).
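The worst-case behaviour can be observed empirically with an instrumented quick sort. The counting wrappers below are a hypothetical addition for illustration, not part of the original text; the sort itself follows the scheme above. On an already-sorted list of n elements it performs on the order of n²/2 element comparisons.

```c
/* Quick sort with a comparison counter, to observe the quadratic
   worst case on an already-sorted input. */
long cmp_count = 0;

int lt(int a, int b) { cmp_count++; return a < b; }
int gt(int a, int b) { cmp_count++; return a > b; }

int partition(int a[], int p, int r)
{
    int pivot = a[p];
    int i = p, j = r;
    while (1) {
        while (lt(a[i], pivot)) i++;   /* count each comparison */
        while (gt(a[j], pivot)) j--;
        if (i < j) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
        } else {
            return j;
        }
    }
}

void quicksort(int a[], int p, int r)
{
    if (p < r) {
        int q = partition(a, p, r);
        quicksort(a, p, q);
        quicksort(a, q + 1, r);
    }
}
```

Sorting the already-sorted list 0..99 drives the counter past n(n-1)/2 = 4950 comparisons, confirming the O(n²) worst case.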



3. NP-HARD AND NP-COMPLETE PROBLEMS
Basic concepts
For many of the problems we know and study, the best algorithms for their solution have
computing times that can be clustered into two groups:
1. Solutions bounded by a polynomial
 Problems whose solution time is bounded by a polynomial of small degree.
 These are also called “tractable” problems.
 Most searching and sorting algorithms are polynomial time algorithms. Examples:
ordered search O(log n), polynomial evaluation O(n), sorting O(n log n).
2. Solutions not bounded by a polynomial
 Problems with solution times not bounded by a polynomial (simply non-polynomial).
 These are hard or intractable problems.
 None of the problems in this group has been solved by any polynomial time
algorithm.
 Examples: Travelling Salesperson O(n² 2ⁿ), Knapsack O(2^(n/2)).
No one has been able to develop a polynomial time algorithm for any problem in the 2nd group.
Theory of NP-Completeness
The theory of NP-Completeness does not provide any method of obtaining polynomial
time algorithms for the problems of the second group. Rather, it shows that many of the
problems for which no polynomial time algorithm is available are computationally related.
There are two classes of non-polynomial time problems:
1. NP-Complete: these have the property that they can be solved in polynomial time if all
other NP-Complete problems can be solved in polynomial time.
Def: A problem that is NP-Complete can be solved in polynomial time if and only if (iff) all
other NP-Complete problems can also be solved in polynomial time.
2. NP-Hard: if an NP-Hard problem can be solved in polynomial time, then all NP-Complete
problems can be solved in polynomial time.

“All NP-Complete problems are NP-Hard, but not all NP-Hard problems are NP-Complete.”
NP-Complete problems are a subclass of NP-Hard problems.
[Figure: Relationship between NP-Hard and NP-Complete]

Nondeterministic Algorithms
Algorithms with the property that the result of every operation is uniquely defined are
termed deterministic algorithms. Such algorithms agree with the way programs are executed
on a computer. Algorithms which contain operations whose outcomes are not uniquely defined
but are limited to a specified set of possibilities are called nondeterministic algorithms.
To specify nondeterministic algorithms, there are 3 new functions:
 Choice(S) – arbitrarily chooses one of the elements of the set S.
 Failure() – signals an unsuccessful completion.
 Success() – signals a successful completion.
The assignment X = choice(1:n) could result in X being assigned any value from the integer
range [1..n]. There is no rule specifying how this value is chosen. The failure and success
signals are used to define a computation of the algorithm. These statements are equivalent
to a stop statement and cannot be used to effect a return.
Whenever there is a set of choices that leads to a successful completion, one such set of
choices is always made and the algorithm terminates successfully.
A nondeterministic algorithm terminates unsuccessfully if and only if (iff) there exists no
set of choices leading to a success signal.
Example 1 Consider the problem of searching for an element x in a given set of elements
A(1:n), n >= 1. We are required to determine an index j such that A(j) = x, or j = 0 if x is
not in A. A nondeterministic algorithm for this is:
j <- choice(1:n)
if A(j) = x then print(j); success
print('0'); failure
From the way a nondeterministic computation is defined, it follows that the number '0' can
be output if and only if there is no j such that A(j) = x. The above algorithm is of
nondeterministic complexity O(1).
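A deterministic machine can simulate this nondeterministic search by trying every possible outcome of choice(1:n) in turn. The C sketch below is a hypothetical illustration of that simulation, using 0-based indices and returning -1 instead of printing 0:

```c
/* Deterministic simulation of the nondeterministic search: the
   algorithm succeeds iff SOME outcome of choice leads to success,
   so we simply enumerate all n possible choices of j. */
int nd_search(const int A[], int n, int x)
{
    for (int j = 0; j < n; j++)   /* each j is one possible choice */
        if (A[j] == x)
            return j;             /* this choice reaches Success() */
    return -1;                    /* no choice succeeds: Failure() */
}
```

The nondeterministic algorithm is O(1) because a single lucky choice suffices; the deterministic simulation pays O(n) to try them all.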
Example 2 Nondeterministic Knapsack algorithm
Algorithm DKP(p, w, n, m, r, x)
// p: given profits; w: given weights; n: number of elements;
// m: weight limit of the bag; r: target profit;
// W: final weight; P: final profit
{
W := 0;
P := 0;
for i := 1 to n do
{
x[i] := choice(0, 1);
W := W + x[i]*w[i];
P := P + x[i]*p[i];
}
if ((W > m) or (P < r)) then Failure();
else Success();
}
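Deterministically, the n calls to choice(0, 1) span 2ⁿ possible assignment vectors x. The sketch below is a hypothetical simulation (reading r as the target profit, as the P < r test suggests) that reports Success if any assignment satisfies both bounds, making the exponential cost of the simulation explicit:

```c
/* Deterministic simulation of the nondeterministic knapsack check:
   enumerate every possible outcome of the n choice(0,1) calls and
   report success (1) if any assignment has weight <= m and profit
   >= r.  This takes O(2^n) time, exactly the blow-up that the
   nondeterministic formulation hides. */
int dkp_feasible(const int p[], const int w[], int n, int m, int r)
{
    for (unsigned long mask = 0; mask < (1UL << n); mask++) {
        int W = 0, P = 0;
        for (int i = 0; i < n; i++)
            if (mask & (1UL << i)) {   /* x[i] = 1 on this branch */
                W += w[i];
                P += p[i];
            }
        if (W <= m && P >= r)
            return 1;                  /* some branch calls Success() */
    }
    return 0;                          /* every branch calls Failure() */
}
```

For instance, with profits {60, 100, 120} and weights {10, 20, 30}, a bag limit of 50 and target profit 220 is feasible (take the last two items), while 221 is not.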



The Classes NP-Hard & NP-Complete
Decision problem / decision algorithm: Any problem for which the answer is either zero or
one is a decision problem. Any algorithm for a decision problem is termed a decision
algorithm.
Optimization problem / optimization algorithm: Any problem that involves the identification
of an optimal (either minimum or maximum) value of a given cost function is known as an
optimization problem. An optimization algorithm is used to solve an optimization problem.
 P  is the set of all decision problems solvable by deterministic algorithms in
polynomial time.
 NP  is the set of all decision problems solvable by nondeterministic algorithms in
polynomial time.
Since deterministic algorithms are just a special case of nondeterministic ones, we conclude
that P ⊆ NP.
The most famous unsolved problem in Computer Science is whether P = NP or P ≠ NP.
In considering this problem, S. Cook formulated the following question: is there any single
problem in NP such that, if we showed it to be in P, that would imply P = NP?
Cook answered this question with:
Theorem: Satisfiability is in P if and only if (iff) P = NP.
[Figure: Commonly believed relationship between P and NP]

α – Notation of Reducibility
Let L1 and L2 be problems. Problem L1 reduces to L2 (written L1 α L2) iff there is a way to
solve L1 by a deterministic polynomial time algorithm using a deterministic algorithm that
solves L2 in polynomial time.
This implies that if we have a polynomial time algorithm for L2, then we can solve L1 in
polynomial time.
Here α is a transitive relation, i.e., if L1 α L2 and L2 α L3 then L1 α L3.
[Figure: Commonly believed relationship among P, NP, NP-Complete and NP-Hard]

A problem L is NP-Hard if and only if (iff) satisfiability reduces to L, i.e.,
Satisfiability α L.
A problem L is NP-Complete if and only if (iff) L is NP-Hard and L ∈ NP.
Most natural problems in NP are either in P or NP-Complete.



Q) What is Stable Sorting?
A sorting algorithm is said to be stable if two objects with equal keys appear in the same
order in the sorted output as they appear in the unsorted input array. Some sorting
algorithms are stable by nature, like Insertion Sort, Merge Sort and Bubble Sort, and some
are not, like Heap Sort and Quick Sort.
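Stability can be demonstrated with a small C sketch: insertion sort shifts only elements with strictly greater keys, so records with equal keys keep their input order. The record type and tag values are illustrative, not from the text.

```c
/* A record pairs a sort key with a tag recording its identity,
   so we can check that equal keys keep their input order. */
struct rec { int key; char tag; };

void insertion_sort(struct rec a[], int n)
{
    for (int i = 1; i < n; i++) {
        struct rec v = a[i];
        int j = i - 1;
        /* strictly greater: an equal key is never jumped over,
           which is exactly what makes this sort stable */
        while (j >= 0 && a[j].key > v.key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = v;
    }
}
```

Sorting {(2,a), (1,b), (2,c), (1,d)} by key yields (1,b), (1,d), (2,a), (2,c): within each equal-key group the original order b-before-d and a-before-c survives.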

Quick Sort Program

int partition(int a[], int p, int r);   /* forward declaration */

void quicksort(int a[], int p, int r)
{
    if (p < r) {
        int q = partition(a, p, r);   /* place the pivot, split the list */
        quicksort(a, p, q);           /* sort the left sublist  */
        quicksort(a, q + 1, r);       /* sort the right sublist */
    }
}

int partition(int a[], int p, int r)
{
    int pivot = a[p];                 /* first element is the pivot */
    int i = p, j = r;
    while (1) {
        while (a[i] < pivot)          /* scan left to right  */
            i++;
        while (a[j] > pivot)          /* scan right to left  */
            j--;
        if (i < j) {                  /* out-of-place pair: swap */
            int temp = a[i];
            a[i] = a[j];
            a[j] = temp;
        } else {
            return j;                 /* scans crossed: split point */
        }
    }
}
