Daa Unit I
Uploaded by Harinath Eega

Design and Analysis of Algorithms

CSE III year II sem (R18)


UNIT I

Shumama Ansa
Formal Definition
• An Algorithm is a finite set of instructions that, if followed,
accomplishes a particular task.

• In addition, all algorithms should satisfy the following criteria.

– Input: Zero or more quantities are externally supplied.


– Output: At least one quantity is produced.
– Definiteness: Each instruction is clear and unambiguous.
– Finiteness: If we trace out the instructions of an algorithm, then
for all cases, the algorithm terminates after a finite number of
steps.
– Effectiveness: Every instruction must be very basic so that it can be
carried out, in principle, by a person using only pencil & paper.
Areas of study of Algorithm
•How to devise or design an algorithm:
It includes the study of various design techniques and
helps in writing algorithms using the existing design
techniques like divide and conquer.
• How to validate an algorithm:
After the algorithm is written it is necessary to check its
correctness, i.e. that for each input the correct output is
produced; this is known as algorithm validation. The second
phase, showing that a program implements the algorithm
correctly, is known as program proving or program verification.
• How to analyze an algorithm: This is known as analysis of
algorithms or performance analysis, and refers to the task of
calculating the time and space complexity of the algorithm.
• How to test a program: It consists of two
phases.
1. Debugging is the detection and correction of
errors.
2. Profiling or performance measurement is measuring
the actual amount of time required by the
program to compute the result.
Algorithm Specification:

• Algorithm can be described in three ways.


• Natural language like English
• Graphic representation called flowchart:
This method will work well when the algorithm is
small& simple.
• Pseudo-code Method:
In this method, algorithms are described in a program-like
notation that resembles languages such as Pascal & Algol.
Pseudo-Code for writing Algorithms:
1. Comments begin with // and continue until the end of line.
2. Blocks are indicated with matching braces { and }.
3. An identifier begins with a letter. The data types of variables
are not explicitly declared.
4. Compound data types can be formed with records. Here is
an example, node:
node = record
{
    data type - 1 data-1;
    .
    data type - n data-n;
    node *link;
}
Here link is a pointer to the record type node. Individual
data items of a record can be accessed with → and period.
5. Assignment of values to variables is done
using the assignment statement.
<Variable>:= <expression>;
6. There are two Boolean values TRUE and
FALSE.
7. Logical Operators AND, OR, NOT
Relational Operators <, <=,>,>=, =,!=
8. The following looping statements are employed: for, while and repeat-until.
While Loop:
while <condition> do
{
    <statement-1>
    .
    .
    <statement-n>
}
For Loop:
for variable := value-1 to value-2 step step do
{
    <statement-1>
    .
    .
    <statement-n>
}
Here the first step is a keyword; the second step is used for the increment or decrement.
repeat-until:
repeat
{
<statement-1>
.
.
<statement-n>
} until <condition>
9. A conditional statement has the following forms:
if <condition> then <statement>
if <condition> then <statement-1> else <statement-2>
Case statement:
case
{
    :<condition-1>: <statement-1>
    .
    .
    :<condition-n>: <statement-n>
    :else: <statement-n+1>
}
10. Input and output are done using the instructions read and write.
11. There is only one type of procedure: Algorithm. The heading takes the form
Algorithm Name(<parameter list>)
As an example, the following algorithm finds & returns
the maximum of n given numbers:
Algorithm Max(A, n)
// A is an array of size n
{
    Result := A[1];
    for i := 2 to n do
        if A[i] > Result then Result := A[i];
    return Result;
}
In this algorithm (named Max), A & n are procedure
parameters; Result & i are local variables.
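The pseudocode above translates almost line-for-line into a runnable language. The following is a minimal Python sketch (note Python lists are 0-based, where the pseudocode's arrays are 1-based):

```python
def find_max(a):
    """Return the maximum of the numbers in list a (len(a) >= 1)."""
    result = a[0]               # corresponds to Result := A[1]
    for i in range(1, len(a)):  # corresponds to for i := 2 to n do
        if a[i] > result:
            result = a[i]       # corresponds to Result := A[i]
    return result

print(find_max([3, 9, 2, 7]))   # prints 9
```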
Performance Analysis:
• Performance analysis of an algorithm is the process of making an
evaluative judgment about algorithms.
• Analyzing the performance of an algorithm means predicting the
resources the algorithm requires to perform its task.
• When we have multiple algorithms to solve a problem, we need
to select a suitable algorithm to solve that problem.
• We compare all algorithms that solve the same problem with
each other, to select the best algorithm.
• To compare algorithms, we use a set of parameters such as the
memory required by the algorithm, its execution speed, ease of
understanding, ease of implementation, etc.
• Generally, the performance of an algorithm depends
on the following elements...
 Does the algorithm provide the exact solution for the problem?
 Is it easy to understand?
 Is it easy to implement?
 How much space (memory) does it require to solve the problem?
 How much time does it take to solve the problem? Etc.
• When we want to analyze an algorithm, we consider only the
space and time required by that particular algorithm and we
ignore all remaining elements.
• Performance analysis of an algorithm is the process of
calculating space required by that algorithm and time required by
that algorithm.

• Performance analysis of an algorithm is performed by using
the following measures...
 Space required to complete the task of that algorithm
(Space Complexity). It includes program space and data space.
 Time required to complete the task of that algorithm
(Time Complexity).
• Performance evaluation can be divided into two major
phases.
1. Performance Analysis (machine independent)
• Space Complexity:
The space complexity of an algorithm is the amount of
memory it needs to run for completion.
• Time Complexity:
The time complexity of an algorithm is the amount of
computer time it needs to run to completion.
2 . Performance Measurement (machine dependent).
Space Complexity
• The space complexity of an algorithm is the amount of memory it needs
to run to completion.

• The space complexity of any algorithm P is given by
S(P) = C + SP(I), where C is a constant.
• Fixed Space Requirements (C)
Independent of the characteristics of the inputs and outputs
 It includes instruction space
 Space for simple variables, fixed-size structured variables, constants
• Variable Space Requirements (SP(I))
Depend on the instance characteristic I
 Number, size, and values of inputs and outputs associated with I
 Recursive stack space, formal parameters, local variables, return
address
• Algorithm 1: Simple arithmetic function
Algorithm abc(a, b, c)
{
    return a + b + b*c + (a + b - c)/(a + b) + 4.0;
}
• SP(I) = 0
• Hence S(P) = constant
• Algorithm 2: Iterative function to sum a list of numbers
Algorithm sum(list[], n)
{
    tempsum := 0;
    for i := 1 to n do
        tempsum := tempsum + list[i];
    return tempsum;
}
• In the above example, the space for list[] depends on n.
• Hence SP(I) = n.
• The remaining variables i, n and tempsum each require one
location. Hence S(P) = 3 + n.
Time Complexity
•The time complexity of an algorithm is the amount of computer
time it needs to run to completion.

• The time T(P) taken by a program P is the sum of the compile
time and the run (or execution) time. The compile time does not
depend on the instance characteristics.
• T(P) = C + TP(I)
• It is a combination of
- Compile time (C), independent of instance characteristics
- Run (execution) time TP, dependent on instance characteristics
• Time complexity is calculated in terms of program steps, as it is
difficult to know the complexities of individual operations.
• Algorithm 1: Finding the sum
Algorithm sum(list[], n)
{
    sum := 0;                     ------ 1
    for (i = 0; i < n; i++)       ------ n+1
    {
        sum := sum + list[i];     ------ n
    }
    return sum;                   ------ 1
}
Hence T(n) = 2n + 3
O(n)
Tabular method for computing Time Complexity:
 Complexity is determined using a table which includes
steps per execution (s/e), i.e. the amount by which the
count changes as a result of executing the statement.
 Frequency: the number of times a statement is executed.
Statement                      s/e   Frequency   Total steps
Algorithm sum(list[], n)        0     -           0
{                               0     -           0
    sum := 0;                   1     1           1
    for i := 0 to n do          1     n+1         n+1
        sum := sum + list[i];   1     n           n
    return sum;                 1     1           1
}                               0     -           0
Total                                             2n+3
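The 2n + 3 total from the table can be checked empirically by instrumenting the algorithm with a step counter. A Python sketch, where each counter increment mirrors a row of the s/e column above:

```python
def sum_with_step_count(lst):
    """Sum a list while counting program steps as in the tabular method."""
    steps = 0
    total = 0
    steps += 1          # sum := 0                      (1 step)
    for x in lst:
        steps += 1      # successful loop test          (n steps)
        total += x
        steps += 1      # assignment in the loop body   (n steps)
    steps += 1          # final loop test that fails    (1 step)
    steps += 1          # return statement              (1 step)
    return total, steps

total, steps = sum_with_step_count([4, 8, 15, 16, 23])
print(steps)            # prints 13, i.e. 2*5 + 3
```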
Example : Matrix addition
Statement                          s/e   Frequency   Total steps
Algorithm add(a, b)                 0     -           0
{                                   0     -           0
    for i := 1 to m do              1     m+1         m+1
        for j := 1 to n do          1     m(n+1)      mn+m
            c[i,j] := a[i,j] + b[i,j];  1  mn         mn
}                                   0     -           0
Total                                                 2mn+2m+1
Example : Matrix multiplication
Statement                          s/e   Frequency    Total steps
Algorithm mul(a, b, n)              0     -            0
{                                   0     -            0
    for i := 1 to n do              1     n+1          n+1
        for j := 1 to n do          1     n(n+1)       n^2+n
            c[i,j] := 0;            1     n*n          n^2
            for k := 1 to n do      1     n*n*(n+1)    n^3+n^2
                c[i,j] := c[i,j] + a[i,k]*b[k,j];  1   n*n*n   n^3
}                                   0     -            0
Total                                                  2n^3 + 3n^2 + 2n + 1
= O(n^3)
• The worst-case complexity of the algorithm is the
function defined by the maximum number of steps
taken on any instance of size n. It represents the curve
passing through the highest point of each column.
• The best-case complexity of the algorithm is the
function defined by the minimum number of steps taken
on any instance of size n. It represents the curve
passing through the lowest point of each column.
• Finally, the average-case complexity of the
algorithm is the function defined by the average
number of steps taken over all instances of size n.
Examples
for(i=0;i<n;i++) ------------(n+1)
{
Stmt; ---------------(n)
}
-----------------------------------------------------
2n+1
------------------------------------------------------
O(n)
Examples
for(i=n;i>0;i--) ------------(n+1)
{
Stmt; ---------------(n)
}
-----------------------------------------------------
2n+1
------------------------------------------------------
O(n)
Examples
for (i = 1; i < n; i = i + 2)  ------ (n/2)
{
    Stmt;
}
O(n)
(e.g. for n = 10, i takes the values 1, 3, 5, 7, 9)
Examples
for(i=0;i<n;i++)
{
for(j=0;j<n;j++)
{
Stmt; ---------------(n^2)
}
}
O(n^2)
Examples
for (i = 0; i < n; i++)
{
    for (j = 0; j < i; j++)
    {
        Stmt;
    }
}
For i = 0, 1, 2, ..., n-1 the inner loop runs 0, 1, 2, ..., n-1 times,
so Stmt executes 1 + 2 + 3 + ... + (n-1) = n(n-1)/2 times.
O(n^2)
Examples
p = 0;
for (i = 1; p <= n; i++)
{
    p = p + i;
}
After k iterations, p = 1 + 2 + 3 + ... + k = k(k+1)/2.
The loop stops when p > n, i.e. k(k+1)/2 > n, so k^2 ≈ n and k ≈ √n.
O(√n)
Examples
for (i = 1; i < n; i = i * 2)   // i takes the values 1, 2, 4, 8, 16, ...
{
    Stmt;
}
The loop stops when i >= n. After k iterations, i = 2^k,
so 2^k >= n gives k = log2 n.
O(log n)
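The iteration count of this doubling loop can be verified directly. A small Python check (the sample values of n are chosen only for illustration):

```python
import math

def doubling_iterations(n):
    """Count iterations of: for (i = 1; i < n; i = i * 2)."""
    count, i = 0, 1
    while i < n:
        i *= 2
        count += 1
    return count

for n in (8, 100, 1024):
    print(n, doubling_iterations(n), math.ceil(math.log2(n)))
```

The counts match ceil(log2 n) in each case, confirming the O(log n) bound.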
Examples
for (i = n; i >= 1; i = i / 2)   // i takes the values n, n/2, n/4, n/8, ...
{
    Stmt;
}
The loop stops when i < 1. After k iterations, i = n/2^k,
so n/2^k < 1 gives n = 2^k, i.e. k = log2 n.
O(log n)
Types of time functions
• O(1)  constant
• O(log n) logarithmic
• O(n)  linear
• O(n^2)  quadratic
• O(n^3)  cubic
• O(2^n)  exponential
• 1 < log n < √n < n < n log n < n^2 < n^3 < ... < 2^n < 3^n < ... < n^n
Asymptotic Notations
Following are the commonly used asymptotic notations to
describe the running time complexity of an algorithm:
 Ο notation
 Ω notation
 θ notation
Big oh notation: O
Definition
The function f(n) = O(g(n)) (read as “f of n is big oh of g of n”) iff
there exist positive constants C and n0 such that
f(n) ≤ C*g(n) for all n ≥ n0.
This notation gives an upper bound on the given f(n);
g(n) is an upper bound on the growth of f(n).
Example:
Consider the following f(n) and g(n)...
f(n) = 3n + 2 and g(n) = n
If we want to represent f(n) as O(g(n)) then we must find constants
C > 0 and n0 >= 1 such that f(n) <= C*g(n) for all n >= n0:
⇒ 3n + 2 <= C*n
The condition holds for C = 4 and n >= 2. By using Big-Oh notation
we can represent the time complexity as follows...
3n + 2 = O(n), since
3n + 2 ≤ 4n for all n ≥ 2
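The constants c = 4, n0 = 2 can also be sanity-checked mechanically; a quick Python verification:

```python
# Check 3n + 2 <= 4n for a large range of n >= n0 = 2
assert all(3 * n + 2 <= 4 * n for n in range(2, 10_000))

# The bound fails below n0: at n = 1, 3*1 + 2 = 5 > 4*1 = 4
assert 3 * 1 + 2 > 4 * 1
print("3n + 2 <= 4n holds for all n >= 2")
```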
Omega notation: Ω
Definition
The function f(n) = Ω(g(n)) (read as “f of n is Omega of g of n”) iff
there exist positive constants C and n0 such that f(n) ≥ C*g(n) for
all n ≥ n0.
This notation gives a lower bound on the given f(n);
g(n) is a lower bound on the growth of f(n).
Example:
Consider the following f(n) and g(n)...
f(n) = 3n + 2 and g(n) = n
If we want to represent f(n) as Ω(g(n)) then we must find constants
C > 0 and n0 >= 1 such that f(n) >= C*g(n) for all n >= n0:
⇒ 3n + 2 >= C*n
The condition holds for C = 3 and n >= 1. By using Big-Omega
notation we can represent the time complexity as follows...
3n + 2 = Ω(n), since 3n + 2 ≥ 3n for all n ≥ 1
Theta notation: θ
The function f(n) = θ(g(n)) (read as “f of n is theta of g of n”)
iff there exist positive constants c1, c2 and n0 such that
c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0.
Example:
3n + 2 = θ(n) as
3n + 2 ≥ 3n for all n ≥ 2, and
3n + 2 ≤ 4n for all n ≥ 2.
Here c1 = 3, c2 = 4 and n0 = 2.
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Θ(g(n)) then we must find constants
C1, C2 > 0 and n0 >= 1 such that C1*g(n) <= f(n) <= C2*g(n) for all n >= n0:
C1*n <= 3n + 2 <= C2*n
The condition holds for C1 = 1, C2 = 4 and n >= 2.
By using Big-Theta notation we can represent the time
complexity as follows...
3n + 2 = Θ(n)
Little oh: o
The function f(n) = o(g(n)) iff
lim (n→∞) f(n)/g(n) = 0.
Little omega: ω
The function f(n) = ω(g(n)) iff
lim (n→∞) g(n)/f(n) = 0.
DIVIDE AND CONQUER
GENERAL METHOD
• Given a function to compute on n inputs, the divide-and-
conquer strategy suggests splitting the inputs into k distinct
subsets, 1 < k <= n, yielding k subproblems.

• These subproblems must be solved, and then a method must
be found to combine the sub-solutions into a solution of the whole.
• If the subproblems are still relatively large, then the divide-
and-conquer strategy can possibly be reapplied. Often the sub-
problems resulting from a divide-and-conquer design are of the
same type as the original problem.
• The algorithm DAndC is initially invoked as DAndC(P), where P
is the problem to be solved.
• Small(P) is a Boolean-valued function that determines
whether the input size is small enough that the answer can be
computed without splitting. If so, the function S is invoked.
• Otherwise, the problem P is divided into smaller sub-
problems.
• These subproblems P1, P2, ..., Pk are solved by recursive
applications of DAndC.
• Combine is a function that determines the solution to P
using the solutions to the k subproblems. If the size of P is n
and the sizes of the k subproblems are n1, n2, ..., nk,
respectively, then the computing time of DAndC is described
by the recurrence relation
T(n) = g(n)                                   n small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n)     otherwise
where T(n) is the time for DAndC on any input of size n,
g(n) is the time to compute the answer directly for small inputs, and
f(n) is the time for dividing P & combining the solutions to the
subproblems.
Control Abstraction
Algorithm DAndC(P)
{
    if Small(P) then return S(P);
    else
    {
        divide P into smaller instances P1, P2, ..., Pk, k >= 1;
        apply DAndC to each of these subproblems;
        return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
    }
}
The complexity of many divide-and-conquer algorithms is
given by a recurrence relation of the form
T(n) = T(1)              n = 1
T(n) = aT(n/b) + f(n)    n > 1
where a & b are known constants.
We assume that T(1) is known & n is a power of b (i.e., n = b^k).
One of the methods for solving such a recurrence relation
is called the substitution method.
This method repeatedly substitutes the right-hand side for each
occurrence of the function T until all such occurrences disappear.
APPLICATIONS OF DIVIDE AND CONQUER
 Binary search
 Quick sort
 Merge sort
 Strassen’s matrix multiplication
BINARY SEARCH
• Given a list of n elements arranged in increasing order.
• The problem is to determine whether a given element x is
present in the list. If x is present, determine its
position; otherwise the position is zero.
• Divide and conquer is used to solve the problem.
• Small(P) is true if n = 1.
• S(P) = i if x = a[i], where a[] is the array; otherwise S(P) = 0.
• If P has more than one element, it can be
divided into subproblems as follows.
BINARY SEARCH
Choose an index j and compare x with a[j]. There are 3
possibilities:
(i) x = a[j]
(ii) x < a[j] (x is searched for in the list a[1]...a[j-1])
(iii) x > a[j] (x is searched for in the list a[j+1]...a[n]).
The same procedure is applied repeatedly until the
solution is found or the sublist becomes empty.
BINARY SEARCH
Algorithm BinSearch(a, n, x)
// Given an array a[1:n] of elements in non-decreasing
// order, n >= 0, determine whether x is present and,
// if so, return j such that x = a[j]; else return 0.
{
    low := 1; high := n;
    while (low <= high) do
    {
        mid := ⌊(low + high)/2⌋;
        if (x < a[mid]) then
            high := mid - 1;
        else if (x > a[mid]) then
            low := mid + 1;
        else return mid;
    }
    return 0;
} //end
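A direct Python transcription of BinSearch (0-based indices, and returning -1 instead of 0 for "not found", since 0 is a valid Python index):

```python
def binsearch(a, x):
    """Iterative binary search on a sorted list a.
    Returns the index of x, or -1 if x is absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if x < a[mid]:
            high = mid - 1      # search the left half
        elif x > a[mid]:
            low = mid + 1       # search the right half
        else:
            return mid          # found
    return -1                   # unsuccessful search

a = [-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151]
print(binsearch(a, 151), binsearch(a, -14), binsearch(a, 9))  # 13 -1 4
```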
Recursive version:
Algorithm RBinSearch(l, h, key)
{
    if (l > h) then return 0;   // empty range: key is not present
    else
    {
        mid := ⌊(l + h)/2⌋;
        if (key = A[mid]) then return mid;
        else if (key < A[mid]) then return RBinSearch(l, mid - 1, key);
        else return RBinSearch(mid + 1, h, key);
    }
}
Example
Let us select 14 entries:
-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151
• Place them in a[1:14], and simulate the steps BinSearch goes
through as it searches for different values of x.
• Only the variables low, high & mid need to be traced as we
simulate the algorithm.
• We try the following values for x: 151, -14 and 9,
giving 2 successful searches & 1 unsuccessful search.
• The tables below show the traces of BinSearch for these 3 searches.
Example
Array elements: -15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151
x = 151
low   high   mid
1     14     7
8     14     11
12    14     13
14    14     14   → found
Example
Array elements: -15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151
x = -14
low   high   mid
1     14     7
1     6      3
1     2      1
2     2      2
2     1      —    → not found
Example
Array elements: -15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151
x = 9
low   high   mid
1     14     7
1     6      3
4     6      5    → found
Time Complexity of Binary Search
The complexity of binary search for successful searches is:
• Worst case: θ(log n)
• Average case: θ(log n)
• Best case: θ(1)
For unsuccessful searches it is θ(log n) in all cases.
QUICKSORT
Algorithm QuickSort(low, high)
// Sorts the elements a[low], ..., a[high], which reside in the global
// array a[1:n], into ascending order; a[n+1] is considered to be
// defined and must be greater than all elements in a[1:n] (a[n+1] = +∞).
{
    if (low < high) then
    {
        j := Partition(a, low, high + 1);
        // j is the position of the partitioning element
        QuickSort(low, j - 1);
        QuickSort(j + 1, high);
    }
}
Algorithm Partition(a, m, p)
{
    v := a[m]; i := m; j := p;   // a[m] is the partition element
    repeat
    {
        repeat
            i := i + 1;
        until (a[i] >= v);
        repeat
            j := j - 1;
        until (a[j] <= v);
        if (i < j) then Interchange(a, i, j);
    } until (i >= j);
    a[m] := a[j];
    a[j] := v;
    return j;
}
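The two algorithms above can be rendered in Python as follows. This is a sketch that keeps the text's scheme (first element as pivot, i scanning right, j scanning left), with an explicit bound check standing in for the a[n+1] sentinel:

```python
def partition(a, m, p):
    """Partition a[m:p] around the pivot v = a[m];
    return the pivot's final index."""
    v = a[m]
    i, j = m, p
    while True:
        i += 1
        while i < p and a[i] <= v:   # advance i past elements <= pivot
            i += 1
        j -= 1
        while a[j] > v:              # retreat j past elements > pivot
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]  # out-of-place pair: swap
        else:
            break
    a[m], a[j] = a[j], a[m]          # put the pivot in its final slot
    return j

def quicksort(a, low=0, high=None):
    """Sort a[low:high+1] in place."""
    if high is None:
        high = len(a) - 1
    if low < high:
        j = partition(a, low, high + 1)
        quicksort(a, low, j - 1)
        quicksort(a, j + 1, high)

data = [40, 20, 10, 80, 60, 50, 7, 30, 100]
quicksort(data)
print(data)   # [7, 10, 20, 30, 40, 50, 60, 80, 100]
```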
Algorithm Interchange(a, i, j)
{
    p := a[i]; a[i] := a[j]; a[j] := p;
}
Example
We are given an array of n integers to sort:
40 20 10 80 60 50 7 30 100
Pick Pivot Element
There are a number of ways to pick the pivot element. In
this example, we will use the first element in the array:
40 20 10 80 60 50 7 30 100
Partitioning Array
Given a pivot, partition the elements of the array such that
the resulting array consists of:
1. One sub-array that contains elements <= pivot
2. Another sub-array that contains elements > pivot
The sub-arrays are stored in the original array a.
Partitioning loops through, swapping elements below/above the
pivot.
Example
pivot_index = 0;   40 20 10 80 60 50  7 30 100
                  [0] [1] [2] [3] [4] [5] [6] [7] [8]

Partition steps:
1. While a[i] <= a[pivot], ++i
2. While a[j] > a[pivot], --j
3. If i < j, swap a[i] and a[j]
4. While j > i, go to step 1
5. Swap a[j] and a[pivot_index]

Trace (i scans right from the pivot, j scans left from the end):
• i stops at [3] (80 > 40), j stops at [7] (30 <= 40); i < j, so swap:
  40 20 10 30 60 50 7 80 100
• i stops at [4] (60 > 40), j stops at [6] (7 <= 40); i < j, so swap:
  40 20 10 30 7 50 60 80 100
• i stops at [5] (50 > 40), j stops at [4] (7 <= 40); now j < i,
  so swap the pivot a[0] with a[j] = a[4]; pivot_index = 4:
  7 20 10 30 40 50 60 80 100

Partition Result
 7  20  10  30   40   50  60  80 100
[0] [1] [2] [3] [4]  [5] [6] [7] [8]
<= a[pivot]          > a[pivot]


Analysis of Quick Sort
Worst Case Analysis
The pivot is the smallest element every time. Then the partition
sizes are 0 and n - 1, and ignoring T(0) = 1, which is insignificant,
the recurrence is:
T(n) = T(n - 1) + cn,  n > 1        — (1)
Using equation (1) repeatedly:
T(n - 1) = T(n - 2) + c(n - 1)
T(n - 2) = T(n - 3) + c(n - 2)
- - - - - - - -
T(2) = T(1) + c(2)
Adding up all these equations yields
T(n) = T(1) + c(2 + 3 + ... + n) = O(n^2)        — (2)
Analysis of Quick Sort
Best Case Analysis
The pivot splits the array into two equal halves each time:
T(n) = 2T(n/2) + cn        — (3)
Expanding repeatedly:
T(n) = 2T(n/2) + cn = 4T(n/4) + 2cn = ... = 2^k T(n/2^k) + kcn
With n = 2^k (k = log n) this yields
T(n) = nT(1) + cn log n = O(n log n)
MERGE SORT
• Another example of divide and conquer.
• Given a sequence of n elements, the idea is to split it into two
halves, sort each half individually, and then merge the
resulting sorted sequences to produce a single sorted
sequence of n elements.
• To sort an array, recursively sort its left and right halves
separately and then merge them.
• The time complexity of merge sort in the best case, worst case
and average case is O(n log n).
MERGE SORT
Algorithm MergeSort(low, high)
{
    if (low < high) then
    {
        mid := ⌊(low + high)/2⌋;    // find where to split the set
        MergeSort(low, mid);        // sort one subset
        MergeSort(mid + 1, high);   // sort the other subset
        Merge(low, mid, high);      // combine the results
    }
}
MERGE SORT
Algorithm Merge(low, mid, high)
{
    h := low; i := low; j := mid + 1;
    while ((h <= mid) and (j <= high)) do
    {
        if (a[h] <= a[j]) then
        {
            b[i] := a[h]; h := h + 1;
        }
        else
        {
            b[i] := a[j]; j := j + 1;
        }
        i := i + 1;
    }
    if (h > mid) then
        for k := j to high do
        {
            b[i] := a[k]; i := i + 1;
        }
    else
        for k := h to mid do
        {
            b[i] := a[k]; i := i + 1;
        }
    for k := low to high do
        a[k] := b[k];
} //end Merge
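A compact Python rendering of the two procedures. Here the auxiliary array b is a local list, and the two leftover-copy loops are collapsed into slice extends (one of the two slices is always empty):

```python
def merge(a, low, mid, high):
    """Merge the sorted runs a[low..mid] and a[mid+1..high]."""
    b = []
    h, j = low, mid + 1
    while h <= mid and j <= high:
        if a[h] <= a[j]:
            b.append(a[h]); h += 1
        else:
            b.append(a[j]); j += 1
    b.extend(a[h:mid + 1])     # leftover of the left run, if any
    b.extend(a[j:high + 1])    # leftover of the right run, if any
    a[low:high + 1] = b        # copy back, as the final for-loop does

def merge_sort(a, low=0, high=None):
    """Recursively sort a[low..high] in place."""
    if high is None:
        high = len(a) - 1
    if low < high:
        mid = (low + high) // 2
        merge_sort(a, low, mid)        # sort one half
        merge_sort(a, mid + 1, high)   # sort the other half
        merge(a, low, mid, high)       # combine the results

data = [310, 285, 179, 652, 351, 423, 861, 254, 450, 520]
merge_sort(data)
print(data)   # prints the list in ascending order
```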
Analysis of Merge Sort
• For n = 1, the time to merge sort is constant, which we
denote by 1.
• Otherwise, the time to merge sort n numbers is equal to the
time to do two recursive merge sorts of size n/2, plus the
time to merge, which is linear.
The recurrence says this exactly:
T(1) = 1
T(n) = 2T(n/2) + n
Next we solve this recurrence relation by repeated substitution:
T(n) = 2T(n/2) + n
     = 4T(n/4) + n + n
     = 8T(n/8) + n + n + n
     = ......
     = 2^k T(n/2^k) + kn        (with n = 2^k, k = log n)
     = nT(1) + n log n
     = n + n log n
Hence the complexity of the MergeSort algorithm is O(n log n).


Strassen’s Matrix Multiplication
Basic Matrix Multiplication
• Suppose we want to multiply two matrices of size
N x N: for example A x B = C.

• C11 = a11b11 + a12b21


• C12 = a11b12 + a12b22
• C21 = a21b11 + a22b21
• C22 = a21b12 + a22b22
for(i=0;i<n;i++) --------------------n
{
for(j=0;j<n;j++)---------------n*n
{
c[i][j]=0; ----------- n*n
for(k=0;k<n;k++)-------n*n*n
{
c[i][j]+=a[i][k]*b[k][j]; ----------n*n*n
}
}
}
Time complexity is O(n^3)
A 4×4 matrix divided into 2×2 sub-matrices:
a11 a12 | a13 a14
a21 a22 | a23 a24
--------+--------
a31 a32 | a33 a34
a41 a42 | a43 a44
Using divide and conquer
A × B = R

[ A11 A12 ]   [ B11 B12 ]   [ A11B11 + A12B21   A11B12 + A12B22 ]
[ A21 A22 ] × [ B21 B22 ] = [ A21B11 + A22B21   A21B12 + A22B22 ]

Here a 2×2 matrix is considered small; hence a large matrix is divided into subproblems.
• Divide the matrices into sub-matrices A11, A12, A21, A22, etc.
• Use the blocked matrix multiply equations
• Recursively multiply the sub-matrices
Algorithm MM(A, B, n)
{
    if (n <= 2) then
        compute C directly with the 4 formulas (c11, c12, c21, c22);
    else
    {
        partition A and B into n/2 × n/2 sub-matrices;
        C11 := MM(A11, B11, n/2) + MM(A12, B21, n/2);
        C12 := MM(A11, B12, n/2) + MM(A12, B22, n/2);
        C21 := MM(A21, B11, n/2) + MM(A22, B21, n/2);
        C22 := MM(A21, B12, n/2) + MM(A22, B22, n/2);
    }
}
T(n) = 1                  n <= 2
T(n) = 8T(n/2) + n^2      n > 2
Time complexity is O(n^3)


• Strassen discovered a way to compute the Cij's using
only 7 matrix multiplications and 18
matrix additions and subtractions.
• This method computes seven n/2 × n/2
matrices P, Q, R, S, T, U and V.
• Then the Cij's are computed.
• P = (A11 + A22) (B11 + B22)
• Q= (A21 +A22)B11
• R = A11(B12 -B22)
• S= A22 (B21 - B11)
• T = (A11 +A12)B22
• U = (A21 – A11) (B11 + B12)
• V = (A12 – A22) (B21 + B22)
• C11 = P + S – T+V
• C12 = R + T
• C21 = Q +S
• C22 = P + R - Q +U
Time Analysis
T(n) = 1                  n <= 2
T(n) = 7T(n/2) + n^2      n > 2
Solving this recurrence relation gives the time
complexity O(n^(log2 7)) = O(n^2.81).
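The seven products and the four C formulas can be checked numerically for the base 2×2 case; a small Python sketch:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices (as nested tuples) using Strassen's
    7 multiplications instead of the usual 8."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    P = (a11 + a22) * (b11 + b22)
    Q = (a21 + a22) * b11
    R = a11 * (b12 - b22)
    S = a22 * (b21 - b11)
    T = (a11 + a12) * b22
    U = (a21 - a11) * (b11 + b12)
    V = (a12 - a22) * (b21 + b22)
    return ((P + S - T + V, R + T),
            (Q + S, P + R - Q + U))

print(strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8))))
# ((19, 22), (43, 50)) — agrees with the classical product
```

In the full algorithm, the scalar +, - and * above become matrix addition, subtraction and recursive Strassen multiplication on n/2 × n/2 blocks.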
