CH-2
Analysis of Algorithm
What is analysis of algorithms?
Algorithm-A (Existing Algorithm)    vs.    Algorithm-B (New Algorithm)
Factors to be considered:
1. Time Complexity    2. Space Complexity
Empirical Approach:
The empirical approach to choosing an algorithm consists of programming the competing techniques and trying them on different instances with the help of a computer.
Theoretical Approach:
The theoretical approach consists of determining mathematically the quantity of resources needed by each algorithm as a function of the size of the instances considered.
The advantage of the theoretical approach is that it depends on neither the computer being used, nor the programming language, nor even the skill of the programmer.
It saves both the time that would have been spent needlessly programming an inefficient algorithm and the machine time that would have been wasted testing it.
Hybrid Approach:
Here the form of the function describing the algorithm's efficiency is determined theoretically, and then any required numerical parameters are determined empirically for a particular program and machine, usually by some kind of regression.
Cont...
The efficiency of an algorithm can be specified using time complexity and space complexity.
Time Complexity:
Time complexity of an algorithm means the amount of time taken by the algorithm to run.
Space Complexity:
Space complexity of an algorithm means the amount of space taken by the algorithm to run.
Example:
Consider the following piece of code for obtaining the frequency count:
void Display()
{
    int a, b, c;
    a = 10;
    b = 20;
    c = a + b;
    printf("%d", c);
}
Analyzing Control Statements
While analyzing a given fragment of code, various programming constructs appear, such as:
1. Sequencing
2. For loop
3. While and repeat
4. Recursive calls
1. Sequencing:
void Display()
{
    int a, b, c;
    a = 10;
    b = 20;
    c = a + b;
    printf("%d", c);
}
Each statement executes once, so the fragment runs in O(1) time.
2. For loop:
for(i=1; i<=n; i++)
    for(j=1; j<=n; j++)
        x = x + 1;

CODE         Frequency Count
i = 1        1
i <= n       n + 1
i++          n
j = 1        n
j <= n       n(n + 1)
j++          n * n
x = x + 1    n * n
Total        3n² + 4n + 2
So the fragment runs in O(n²) time.
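As a rough check (my own addition, not from the slide), the following C program executes the nested loop while counting every elementary step and compares the total with the formula 3n² + 4n + 2:

#include <stdio.h>

/* Count how many times each statement of the nested loop executes and
   compare the total with the frequency-count formula 3n^2 + 4n + 2. */
int main(void)
{
    int n = 10;          /* sample input size */
    long count = 0;      /* total frequency count */
    int i, j, x = 0;

    i = 1;        count++;              /* i = 1      : 1 time       */
    while (1) {
        count++;                        /* i <= n     : n + 1 times  */
        if (!(i <= n)) break;
        j = 1;    count++;              /* j = 1      : n times      */
        while (1) {
            count++;                    /* j <= n     : n(n+1) times */
            if (!(j <= n)) break;
            x = x + 1;  count++;        /* x = x + 1  : n*n times    */
            j = j + 1;  count++;        /* j++        : n*n times    */
        }
        i = i + 1;  count++;            /* i++        : n times      */
    }
    printf("counted = %ld, formula = %d\n", count, 3*n*n + 4*n + 2);
    return 0;
}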
3. While and repeat:
i = 1;
while (i <= n)
{
    x = x + 1;
    i = i + 1;
}

CODE         Frequency Count
i = 1        1
i <= n       n + 1
x = x + 1    n
i = i + 1    n
Total        3n + 2
So the fragment runs in O(n) time.
Find the frequency count of:
1) for(i=n; i>0;i--)
{
statements
}
[Figure: example array, searching for the key 23]
[Figure: upper bound and lower bound on the running time f(n)]
Big-oh notation
• It is a method of representing the upper bound of an algorithm's running time.
• Using Big-oh notation we can give the longest amount of time taken by the algorithm to complete.
• Definition:
  Let f(n) and g(n) be two non-negative functions, let n0 be an integer denoting some value of the input size, and let c be a constant such that c > 0. If
      f(n) <= c * g(n)   for all n >= n0,
  then f(n) is big-oh of g(n), denoted f(n) ∈ O(g(n)).
  In other words, f(n) is bounded above by some constant multiple c of g(n).
Example
f(n) = n³ and g(n) = 500n²
To prove that g(n) is in the order of f(n) we have to prove that there exist a positive real constant c and an integer threshold n0 such that g(n) <= c*f(n) whenever n >= n0.
n = 1:    f(1) = 1³ = 1                g(1) = 500 * 1² = 500
n = 2:    f(2) = 2³ = 8                g(2) = 500 * 2² = 2000
n = 3:    f(3) = 3³ = 27               g(3) = 500 * 3² = 4500
...
n = 500:  f(500) = 500³ = 125000000    g(500) = 500 * 500² = 125000000
So we can write g(n) <= c * f(n) for every n >= n0, with n0 = 500 and c = 1; hence g(n) ∈ O(f(n)).
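As a quick numerical sketch (my own addition), the following C program checks g(n) <= c*f(n) for c = 1 at a few values of n at and above the threshold n0 = 500:

#include <stdio.h>

/* Verify g(n) = 500*n^2 <= c*f(n) = n^3 with c = 1 for several n >= 500. */
int main(void)
{
    long long n;
    for (n = 500; n <= 505; n++) {
        long long f = n * n * n;        /* f(n) = n^3      */
        long long g = 500 * n * n;      /* g(n) = 500*n^2  */
        printf("n=%lld  g(n)=%lld  f(n)=%lld  g<=f? %s\n",
               n, g, f, g <= f ? "yes" : "no");
    }
    return 0;
}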
Omega notation
• This notation is used to represent the lower bound of an algorithm's running time.
• Using omega notation we can denote the shortest amount of time taken by the algorithm.
• Definition:
  A function f(n) is said to be in Ω(g(n)) if f(n) is bounded below by some positive constant multiple of g(n), that is,
      f(n) >= c * g(n)   for all n >= n0.
• It is denoted as f(n) ∈ Ω(g(n)).
• In other words, f(n) is bounded below by some constant multiple c of g(n).
Theta notation
• By this method the running time is bounded between an upper bound and a lower bound.
• Definition:
  Let f(n) and g(n) be two non-negative functions. If there are two positive constants c1 and c2 such that
      c1 * g(n) <= f(n) <= c2 * g(n)   for all n >= n0,
  then f(n) ∈ Θ(g(n)).
Example
• Is the following statement true or false? Justify your answer.
  5n² - 6 ∈ Θ(n²)
Properties of Order of Growth
1. If f1(n) is in the order of g1(n) and f2(n) is in the order of g2(n), then
   f1(n) + f2(n) ∈ O(max(g1(n), g2(n))). This is also known as the maximum rule.
2. A polynomial of degree m is in Θ(n^m).
   That means the maximum degree is considered from the polynomial.
   Example: a1*n³ + a2*n² + a3*n + c has the order of growth Θ(n³).
3. O(1) < O(log n) < O(n) < O(n²) < O(2ⁿ).
4. Exponential functions aⁿ have different orders of growth for different values of a.
Rate of common computing time function
Recurrence Equation
A recurrence equation is an equation that defines a sequence recursively.
Substitution Method
a) Forward Substitution:
   This method makes use of the initial condition: starting from the initial term, the value of each next term is generated.
   This process is continued until some formula is guessed.
Example
Solve the recurrence relation T(n) = T(n-1) + n with initial condition T(0) = 0.
Answer: T(n) = n(n+1)/2
Forward substitution, starting from T(0) = 0:
    T(1) = T(0) + 1 = 1        check: 1(1+1)/2 = 2/2 = 1
    T(2) = T(1) + 2 = 3        check: 2(2+1)/2 = 6/2 = 3
    T(3) = T(2) + 3 = 6        check: 3(3+1)/2 = 12/2 = 6
so we guess T(n) = n(n+1)/2.
Now consider (backward substitution):
    T(n) = T(n-1) + n                        ... (1)
Replace n with n-1 in equation (1):
    T(n-1) = T(n-2) + (n-1)                  ... (2)
Putting this value in equation (1):
    T(n) = T(n-2) + (n-1) + n                ... (3)
Replace n with n-2 in equation (1):
    T(n-2) = T(n-3) + (n-2)                  ... (4)
Putting this value in equation (3):
    T(n) = T(n-3) + (n-2) + (n-1) + n
    ...
In general, after k substitutions:
    T(n) = T(n-k) + (n-k+1) + (n-k+2) + ... + n
If k = n:
    T(n) = T(0) + 1 + 2 + 3 + ... + n
         = 0 + 1 + 2 + 3 + ... + n
         = n(n+1)/2
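A small C sketch (my own addition) that evaluates the recurrence directly and compares it with the guessed closed form n(n+1)/2; the function name T is arbitrary:

#include <stdio.h>

/* Compute T(n) = T(n-1) + n with T(0) = 0 straight from the recurrence
   and compare with the closed form n(n+1)/2. */
long T(int n)
{
    if (n == 0)
        return 0;            /* initial condition T(0) = 0 */
    return T(n - 1) + n;     /* the recurrence itself      */
}

int main(void)
{
    int n;
    for (n = 0; n <= 10; n++)
        printf("n=%d  recurrence=%ld  closed form=%ld\n",
               n, T(n), (long)n * (n + 1) / 2);
    return 0;
}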
T(n) = T(n/2) + 1
Answer: T(n) = Θ(log n)

T(n) = 8T(n/2) + n²
Answer: T(n) = Θ(n³)

T(n) = 2T(n/2) + n log n
Answer: T(n) = Θ(n log² n)

T(n) = 4T(n/2) + log n
Answer: T(n) = Θ(n²)

T(n) = 9T(n/3) + n
Answer: T(n) = Θ(n²)
Sorting Techniques:
1) Insertion Sort
2) Selection Sort
3) Bubble Sort
4) Shell Sort
5) Heap Sort
Sorting in linear time:
6) Bucket Sort
7) Radix Sort and Counting Sort
Insertion Sort
Example
Consider an array of 5 elements, so we need at most 4 passes to sort the elements.
5, 4, 3, 1, 2
Insertion Sort Algorithm
Procedure insert(T[1..n])
    for i ← 2 to n do
        x ← T[i]
        j ← i – 1
        while j > 0 and x < T[j] do
            T[j+1] ← T[j]
            j ← j – 1
        T[j+1] ← x
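Below is a C version of the insertion-sort procedure (my own sketch; the pseudocode is 1-based, the C code is 0-based, and the function name insertion_sort is mine):

#include <stdio.h>

/* Insertion sort: insert each element into its place among the
   already-sorted elements to its left. */
void insertion_sort(int T[], int n)
{
    int i, j, x;
    for (i = 1; i < n; i++) {        /* corresponds to "for i <- 2 to n"   */
        x = T[i];                    /* element to be inserted             */
        j = i - 1;
        while (j >= 0 && x < T[j]) { /* shift larger elements to the right */
            T[j + 1] = T[j];
            j = j - 1;
        }
        T[j + 1] = x;                /* place x in its correct position    */
    }
}

int main(void)
{
    int a[] = {5, 4, 3, 1, 2};       /* the 5-element example above */
    int n = 5, i;
    insertion_sort(a, n);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}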
Selection Sort
Initial array: 52  8  27  3  1

Pass 1 (n-1 comparisons): find the minimum (1) and exchange it with the first element
    1  8  27  3  52
Pass 2 (n-2 comparisons): find the minimum of the remaining elements (3) and exchange it with the second element
    1  3  27  8  52
Pass 3 (n-3 comparisons): find the minimum of the remaining elements (8) and exchange it with the third element
    1  3  8  27  52
So the time complexity of Selection Sort is
T(n) = (n-1) + (n-2) + (n-3) + ... + 1
     = n(n-1)/2
     = O(n²)
Selection Sort Algorithm
C function for Selection Sort:
void SelectionSort(int A[], int n)
{
    int i, j, Min, temp;
    for(i = 0; i <= n-2; i++)
    {
        Min = i;
        for(j = i+1; j <= n-1; j++)
        {
            if(A[j] < A[Min])
            {
                Min = j;
            }
        }
        temp = A[i];
        A[i] = A[Min];
        A[Min] = temp;
    }
}
T(n) ∈ O(n²)
Bubble Sort Algorithm
There are n-1 passes in Bubble Sort; a number of passes are needed to sort the numbers into ascending order.
Time Complexity: O(n²)
C function for Bubble Sort:
void bubblesort(int a[], int n)
{
    int i, j, temp;
    for(i = 0; i < n-1; i++)
    {
        for(j = 0; j < n-i-1; j++)
        {
            if(a[j] > a[j+1])
            {
                temp = a[j];
                a[j] = a[j+1];
                a[j+1] = temp;
            }
        }
    }
}
Example
52, 1, 27, 85, 66, 23, 13, 57
14, 33, 27, 10, 35, 19, 42, 44
Radix Sort
The code for radix sort is straightforward. The following procedure assumes that each element in the n-element array A has d digits, where digit 1 is the lowest-order digit and digit d is the highest-order digit.
Algorithm:
1. Read the total number of elements in the array.
2. Store the unsorted elements in the array.
3. Now the simple procedure is to sort the elements digit by digit.
4. Sort the elements according to the last digit, then the second-last digit, and so on. Example: 123, 567, 841, 692
5. Thus the elements are sorted up to the most significant digit.
6. Store the sorted elements in the array and print them.
7. Stop.
A C sketch of this procedure is given below.
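A possible C sketch of LSD radix sort following the digit-by-digit procedure above (my own sketch; it uses a stable counting pass per digit, and the helper names max_of and counting_by_digit as well as the n <= 100 limit are my own assumptions):

#include <stdio.h>

/* Largest value in the array, used to know how many digit passes to make. */
static int max_of(int a[], int n)
{
    int i, m = a[0];
    for (i = 1; i < n; i++)
        if (a[i] > m) m = a[i];
    return m;
}

/* Stable counting sort of a[] by the digit selected by exp (1, 10, 100, ...). */
static void counting_by_digit(int a[], int n, int exp)
{
    int out[100], count[10] = {0};   /* assumes n <= 100 for this sketch */
    int i;
    for (i = 0; i < n; i++)
        count[(a[i] / exp) % 10]++;          /* frequency of each digit  */
    for (i = 1; i < 10; i++)
        count[i] += count[i - 1];            /* prefix sums = positions  */
    for (i = n - 1; i >= 0; i--)             /* stable: right to left    */
        out[--count[(a[i] / exp) % 10]] = a[i];
    for (i = 0; i < n; i++)
        a[i] = out[i];
}

void radix_sort(int a[], int n)
{
    int exp, m = max_of(a, n);
    for (exp = 1; m / exp > 0; exp *= 10)    /* one pass per digit       */
        counting_by_digit(a, n, exp);
}

int main(void)
{
    int a[] = {132, 45, 365, 3, 43, 72, 5};  /* example from the next slide */
    int n = 7, i;
    radix_sort(a, n);
    for (i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}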
Example - 1
132, 45, 365, 3, 43, 72, 5

Input   After 1st digit   After 2nd digit   After 3rd digit
132     132               003               003
045     072               005               005
365     003               132               043
003     043               043               045
043     045               045               072
072     365               365               132
005     005               072               365
Bucket Sort
Bucket sort runs in linear time when the input is drawn from a uniform distribution.
Algorithm:
1. Set up an array of initially empty buckets.
2. Put each element in its corresponding bucket.
3. Sort each non-empty bucket.
4. Visit the buckets in order, put all the elements into a sequence, and print them.
A C sketch of this procedure is given below.
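A possible C sketch of bucket sort for integers in the range 1 to 100, using the ten buckets of the example that follows (my own sketch; the bucket capacity and helper names are my own assumptions):

#include <stdio.h>

#define NBUCKETS 10
#define BUCKET_CAP 20

void bucket_sort(int a[], int n)
{
    int bucket[NBUCKETS][BUCKET_CAP];
    int size[NBUCKETS] = {0};
    int b, i, j, k = 0;

    for (i = 0; i < n; i++) {                 /* 2. put each element in its bucket   */
        b = (a[i] - 1) / 10;                  /* 1..10 -> bucket 0, 11..20 -> 1, ... */
        bucket[b][size[b]++] = a[i];
    }
    for (b = 0; b < NBUCKETS; b++) {          /* 3. sort each non-empty bucket       */
        for (i = 1; i < size[b]; i++) {       /*    (insertion sort)                 */
            int x = bucket[b][i];
            for (j = i - 1; j >= 0 && bucket[b][j] > x; j--)
                bucket[b][j + 1] = bucket[b][j];
            bucket[b][j + 1] = x;
        }
        for (i = 0; i < size[b]; i++)         /* 4. concatenate buckets in order     */
            a[k++] = bucket[b][i];
    }
}

int main(void)
{
    int a[] = {9, 32, 49, 4, 39, 7, 22, 16, 19, 42};  /* example below */
    int n = 10, i;
    bucket_sort(a, n);
    for (i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}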
Example
9, 32, 49, 4, 39, 7, 22, 16, 19, 42

Bucket   Range       Contents
1        1 to 10     9, 4, 7
2        11 to 20    16, 19
3        21 to 30    22
4        31 to 40    32, 39
5        41 to 50    49, 42
6        51 to 60    -
7        61 to 70    -
8        71 to 80    -
9        81 to 90    -
10       91 to 100   -
Sort each bucket using insertion sort:

Bucket   Range       Contents (sorted)
1        1 to 10     4, 7, 9
2        11 to 20    16, 19
3        21 to 30    22
4        31 to 40    32, 39
5        41 to 50    42, 49
6-10     51 to 100   -

Final output: 4 7 9 16 19 22 32 39 42 49
HEAP SORT
The heap data structure is an array object that can be
viewed as a nearly complete binary tree. Each node of the
tree corresponds to an element of the array that stores the
value in the node.
Properties:
Structure property: A heap is a completely filled binary
tree with the exception of the bottom row, which is filled
from left to right
[Figure: a max-heap drawn as a binary tree with root 16; array representation: 16 14 10 8 7 9 3 2 4 1]
Min-heap:
The min-heap property is that for every node i other than the root,
A[PARENT(i)] <= A[i].
The smallest element in a min-heap is at the root.
Maintaining the heap property
MAX-HEAPIFY(A, i)
1. l ← LEFT(i)
2. r ← RIGHT(i)
3. if l <= heap-size[A] and A[l] > A[i]
4.    then largest ← l
5.    else largest ← i
6. if r <= heap-size[A] and A[r] > A[largest]
7.    then largest ← r
8. if largest ≠ i
9.    then exchange A[i] ↔ A[largest]
10.        MAX-HEAPIFY(A, largest)
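A C translation of MAX-HEAPIFY (my own sketch; it uses 0-based indices, so LEFT(i) = 2i+1 and RIGHT(i) = 2i+2, and the heap size is passed as a parameter):

/* Float the value at index i down until the max-heap property holds. */
void max_heapify(int A[], int heap_size, int i)
{
    int l = 2 * i + 1;                 /* left child   */
    int r = 2 * i + 2;                 /* right child  */
    int largest;

    if (l < heap_size && A[l] > A[i])
        largest = l;
    else
        largest = i;
    if (r < heap_size && A[r] > A[largest])
        largest = r;
    if (largest != i) {
        int tmp = A[i];                /* exchange A[i] and A[largest] */
        A[i] = A[largest];
        A[largest] = tmp;
        max_heapify(A, heap_size, largest);  /* continue down the tree */
    }
}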
Example
The action of MAX-HEAPIFY(A, 2), where heap-size[A] = 10.
[Figures A-C: the value 4 at node 2 is exchanged with 14 and then with 8, restoring the max-heap 16 14 10 8 7 9 3 2 4 1]
Building a heap:
BUILD-MAX-HEAP(A)
1. heap-size[A] ← length[A]
2. for i ← ⌊length[A]/2⌋ downto 1
3.    do MAX-HEAPIFY(A, i)
Example: A = 4 1 3 2 16 9 10 14 8 7
[Figures: BUILD-MAX-HEAP calls MAX-HEAPIFY on nodes 5, 4, 3, 2, 1 in turn, transforming the array into the max-heap 16 14 10 8 7 9 3 2 4 1]
Heap Sort algorithm
HEAPSORT(A)
1. BUILD-MAX-HEAP(A)
2. for i ← length[A] downto 2
3.    do exchange A[1] ↔ A[i]
4.       heap-size[A] ← heap-size[A] – 1
5.       MAX-HEAPIFY(A, 1)
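A C sketch of BUILD-MAX-HEAP and HEAPSORT, built on the max_heapify function sketched earlier (my own translation; 0-based indices, and the function names are mine):

void max_heapify(int A[], int heap_size, int i);   /* from the earlier sketch */

/* Turn an arbitrary array into a max-heap. */
void build_max_heap(int A[], int n)
{
    int i;
    for (i = n / 2 - 1; i >= 0; i--)   /* last non-leaf node down to the root */
        max_heapify(A, n, i);
}

/* Repeatedly move the maximum to the end and shrink the heap. */
void heap_sort(int A[], int n)
{
    int i;
    build_max_heap(A, n);
    for (i = n - 1; i >= 1; i--) {
        int tmp = A[0];                /* exchange A[1] <-> A[i]              */
        A[0] = A[i];
        A[i] = tmp;
        max_heapify(A, i, 0);          /* heap size shrinks by one each pass  */
    }
}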
Example
[Figures: starting from the max-heap 16 14 10 8 7 9 3 2 4 1, HEAPSORT repeatedly exchanges A[1] with A[i], shrinks the heap by one, and calls MAX-HEAPIFY(A, 1)]
Final sorted array: 1 2 3 4 7 8 9 10 14 16
Counting Sort
Link: http://www.cs.miami.edu/home/burt/learning/Csc517.091/workbook/countingsort.html
Counting Sort
Counting sort assumes that each of the n input elements is an integer in the range 0 to k, for some integer k. When k = O(n), the sort runs in Θ(n) time.
In the general idea of the counting algorithm, the input is an array A[1…n], and thus length[A] = n. Two other arrays are used: the array B[1…n] holds the sorted output, and the array C[0…k] provides temporary working storage.
ALGORITHM:
1. for i ← 0 to k
2.    do C[i] ← 0
3. for j ← 1 to length[A]
4.    do C[A[j]] ← C[A[j]] + 1
5. for i ← 1 to k
6.    do C[i] ← C[i] + C[i – 1]
7. for j ← length[A] downto 1
8.    do B[C[A[j]]] ← A[j]
9.       C[A[j]] ← C[A[j]] – 1
Time Complexity: O(n + k)
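A C sketch of the counting-sort algorithm above (my own translation; it uses 0-based arrays, and the caller supplies k, the largest value that can appear in A):

#include <stdio.h>

/* Counting sort: B receives the sorted copy of A; C counts occurrences. */
void counting_sort(int A[], int B[], int n, int k)
{
    int C[k + 1];                      /* temporary working storage       */
    int i, j;
    for (i = 0; i <= k; i++)
        C[i] = 0;
    for (j = 0; j < n; j++)
        C[A[j]]++;                     /* C[i] = number of elements == i  */
    for (i = 1; i <= k; i++)
        C[i] += C[i - 1];              /* C[i] = number of elements <= i  */
    for (j = n - 1; j >= 0; j--) {     /* place elements stably, right to left */
        B[C[A[j]] - 1] = A[j];
        C[A[j]]--;
    }
}

int main(void)
{
    int A[] = {1, 3, 7, 8, 1, 1, 3};   /* example from the next slide */
    int B[7], i;
    counting_sort(A, B, 7, 8);
    for (i = 0; i < 7; i++) printf("%d ", B[i]);
    printf("\n");
    return 0;
}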
Example
1, 3, 7, 8, 1, 1, 3
Shell Sort
Algorithm
Step 1 − Initialize the value of the interval h
Step 2 − Divide the list into smaller sub-lists of equal interval h
Step 3 − Sort these sub-lists using insertion sort
Step 4 − Repeat until the complete list is sorted
Example: See video
Example
61 , 109 , 149 , 111 , 34 , 2 , 24 , 119 , 122 , 125 , 27 , 145
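A C sketch of shell sort on this example (my own addition; the slide leaves the choice of h open, so the common gap sequence h = n/2, n/4, ..., 1 is my assumption):

#include <stdio.h>

/* Shell sort: insertion sort applied to sub-lists of interval h,
   with h shrinking each pass until h = 1. */
void shell_sort(int a[], int n)
{
    int h, i, j, x;
    for (h = n / 2; h > 0; h /= 2) {        /* shrink the interval each pass    */
        for (i = h; i < n; i++) {           /* insertion sort on each h-sublist */
            x = a[i];
            for (j = i - h; j >= 0 && a[j] > x; j -= h)
                a[j + h] = a[j];
            a[j + h] = x;
        }
    }
}

int main(void)
{
    int a[] = {61, 109, 149, 111, 34, 2, 24, 119, 122, 125, 27, 145};
    int n = 12, i;
    shell_sort(a, n);
    for (i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}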
Thank You
GTU QUESTIONS