
Analysis and Design of Algorithms

CH-2: Analysis of Algorithm

What is analysis of algorithm?
Algorithm-A (Existing Algorithm)        Algorithm-B (New Algorithm)

Compare the new algorithm (Algorithm-B) with the existing algorithm (Algorithm-A): "is the new algorithm better than the existing algorithm or not?"

Factors to be considered:
1. Time Complexity
2. Space Complexity
3. Extra Resources
4. Easy to Understand
5. Easy to Implement, etc.
Different Approaches for Analysis

Empirical Approach:
The empirical approach to choosing an algorithm consists of programming the competing techniques and trying them on different instances with the help of a computer.

Theoretical Approach:
The theoretical approach consists of determining mathematically the quantity of resources needed by each algorithm as a function of the size of the instances considered.
The advantage of the theoretical approach is that it depends neither on the computer being used, nor on the programming language, nor even on the skill of the programmer.
It saves both the time that would have been spent needlessly programming an inefficient algorithm and the machine time that would have been wasted testing it.

Hybrid Approach:
Here the form of the function describing the algorithm's efficiency is determined theoretically, and then any required numerical parameters are determined empirically for a particular program and machine, usually by some kind of regression.
Cont...
The efficiency of an algorithm can be specified using time complexity and space complexity.
 Time Complexity:
The time complexity of an algorithm is the amount of time taken by the algorithm to run.
 Space Complexity:
The space complexity of an algorithm is the amount of space taken by the algorithm to run.

How to find the time complexity of an algorithm?
The time complexity of an algorithm can be computed using the frequency count.
The frequency count is a count that denotes how many times a particular statement is executed.
Cont...
 Frequency count:
The efficiency of a program is measured by inserting a count into the algorithm in order to count the number of times the basic operation is executed. This is a straightforward method of measuring the time complexity of a program.

Example:
Consider the following piece of code for obtaining the frequency count:

void Display()
{
    int a, b, c;
    a = 10;
    b = 20;
    c = a + b;
    printf("%d", c);
}
Analyzing Control Statements
 While analyzing a given fragment of code, various programming constructs appear, such as:
1. Sequencing
2. For loop
3. While and repeat
4. Recursive calls
1. Sequencing:
void Display()
{
    int a, b, c;
    a = 10;
    b = 20;
    c = a + b;
    printf("%d", c);
}

CODE                Frequency Count
a = 10              1
b = 20              1
c = a + b           1
printf("%d", c)     1
Total               4

O(1)
2. For Loop
for(i=1; i<=n; i++)
for(j=1; j<=n; j++)
x = x + 1;
CODE Frequency Count
i =1 1
i <=n n +1
i++ n
j =1 n
j <= n n (n+1)
j++ n*n
x = x +1 n*n
Total       3n² + 4n + 2

O(n²)
3. While and repeat
i = 1;
while (i <= n)
{
    x = x + 1;
    i = i + 1;
}

CODE        Frequency Count
i = 1       1
i <= n      n + 1
x = x + 1   n
i = i + 1   n
Total       3n + 2

O(n)
Find the frequency count of:

1) for(i=n; i>0; i--)
   {
       statements
   }

2) for(i=1; i<=n; i=i+2)
   {
       statements
   }
Best , Worst and Average Case Analysis
Best Case:
 If an algorithm takes the minimum amount of time to run to completion for a specific set of inputs, then it is called the Best Case Time Complexity.
 Example:
While searching for a particular element using Sequential Search, if we get the desired element at the first place itself, then it is the best case time complexity.
Example:
[Figure: a list scanned sequentially from left to right; Searching Key = 23]

[Figure: the same list; Searching Key = 8, found at the first position (Best Case)]
Best , Worst and Average Case Analysis
Worst Case:
 If an algorithm takes the maximum amount of time to run to completion for a specific set of inputs, then it is called the Worst Case Time Complexity.
 Example:
While searching for a particular element using the sequential search method, if the desired element is placed at the end of the list, then we get the Worst Case Time Complexity.

Example:
[Figure: the same list; Searching Key = 3, located at the end of the list (Worst Case)]


Cont...
Average Case Time Complexity:
 This type of complexity gives information about the behavior of an algorithm on random input. Let us understand some terminology required for computing the average case time complexity.
◦ Consider the algorithm for Sequential Search, and let
 P be the probability of a successful search, and
 n be the total number of elements in the list.
The first match of the element may occur at the ith location; the probability that the first match occurs at any particular ith element is P/n.
The probability of an unsuccessful search is (1 − P).
Now we can find the average case time complexity Cavg(n) as:
Cavg(n) = probability of successful search (for elements one to n in the list) + probability of unsuccessful search
Cavg(n) = [1·(P/n) + 2·(P/n) + ... + i·(P/n) + ... + n·(P/n)] + n·(1 − P)
        = (P/n)·[1 + 2 + ... + i + ... + n] + n·(1 − P)
        = (P/n)·(n(n+1)/2) + n·(1 − P)
        = P(n+1)/2 + n(1 − P)
Thus we obtain the general formula for computing the average case time complexity.
Suppose P = 0; that means there is no successful search, i.e., we have scanned the entire list of n elements and still have not found the desired element. In that situation,
Cavg(n) = 0·(n+1)/2 + n·(1 − 0) = n
Thus the average case running time becomes equal to n.
Suppose P = 1, i.e., we always get a successful search. Then
Cavg(n) = 1·(n+1)/2 + n·(1 − 1) = (n + 1)/2
That means the algorithm scans about half of the elements in the list.
Asymptotic notation (most imp)
Asymptotic Notation is used in Computer Science to describe
the performance or complexity of an algorithm.
There are different types of asymptotic notations:
1) The Order of (Big-Oh) "O" (Upper Bound)
2) Omega "Ω" (Lower Bound)
3) Theta "Θ" (Tight Bound)

[Figure: f(n) bounded above by the Upper Bound and below by the Lower Bound]
Big-Oh notation
• It is a method of representing the upper bound of an algorithm's running time.
• Using Big-Oh notation we can give the longest amount of time taken by the algorithm to complete.
• Definition:
  Let f(n) and g(n) be two non-negative functions. Let n0 be an integer denoting some value of the input, and let c be some constant such that c > 0. We say f(n) is big-oh of g(n), denoted f(n) ∈ O(g(n)), if
  f(n) ≤ c · g(n) for all n ≥ n0
• In other words, f(n) is less than g(n) once g(n) is multiplied by some constant c.
Example
f(n) = n³ and g(n) = 500·n²
 To prove that g(n) is in the order of f(n), we have to prove that there exist a positive real constant c and an integer threshold n0 such that g(n) ≤ c·f(n) whenever n ≥ n0.

n = 1:    f(1) = 1³ = 1               g(1) = 500·1² = 500
n = 2:    f(2) = 2³ = 8               g(2) = 500·2² = 2000
n = 3:    f(3) = 3³ = 27              g(3) = 500·3² = 4500
...
n = 500:  f(500) = 500³ = 125000000   g(500) = 500·500² = 125000000

So we can write: g(n) ≤ f(n) for every value of n with n ≥ n0 = 500 and c = 1, so g(n) ∈ O(f(n))
Omega notation
• This notation is used to represent the lower bound of an algorithm's running time.
• Using Omega notation we can denote the shortest amount of time taken by the algorithm.
• Definition:
  A function f(n) is said to be in Ω(g(n)) if f(n) is bounded below by some positive constant multiple of g(n), such that
  f(n) ≥ c · g(n) for all n ≥ n0
• It is denoted as f(n) ∈ Ω(g(n)).
• f(n) is greater than g(n) once g(n) is multiplied by some constant c.
Theta notation
• By this method the running time is between the upper bound and the lower bound.
• Definition:
  Let f(n) and g(n) be two non-negative functions. There exist two positive constants c1 and c2 such that
  c1 · g(n) ≤ f(n) ≤ c2 · g(n) for all n ≥ n0
• It is denoted as f(n) ∈ Θ(g(n)).

Example
• Is the following statement true or false? Justify your answer.
• 5n² − 6 ∈ Θ(n²)
Properties of Order of Growth
1. If f1(n) is of order g1(n) and f2(n) is of order g2(n), then
   f1(n) + f2(n) ∈ O(max(g1(n), g2(n))). This is also known as the Maximum Rule.
2. A polynomial of degree m is in Θ(n^m).
   That means the maximum degree of the polynomial is considered.
   Example: a1·n³ + a2·n² + a3·n + c has order of growth Θ(n³).
3. O(1) < O(log n) < O(n) < O(n²) < O(2ⁿ).
4. Exponential functions aⁿ have different orders of growth for different values of a.
Rate of common computing time function
Recurrence Equation
 The recurrence equation is the equation that defines a
sequence recursively.

 It is normally written in the following form:

T(n) = n · T(n−1)   for n ≥ 1
T(0) = 1

 Example: Find the factorial of any number in C using recursion.
Recursion Function
Find the factorial of any number in C using recursion:

#include<stdio.h>
#include<conio.h>

int fact(int n)
{
    if(n >= 1)
        return n * fact(n - 1);
    else
        return 1;
}

void main()
{
    int n, ans;
    clrscr();
    printf("Enter No");
    scanf("%d", &n);
    ans = fact(n);
    printf("%d", ans);   /* print the result */
}

Fact(n) = n · Fact(n−1)   for n ≥ 1
Fact(0) = 1

If we write T in place of Fact, then

T(n) = n · T(n−1)   for n ≥ 1
T(0) = 1
Cont...
Solving recurrence relations:
We have two methods:
1. Substitution Method
   a) Forward Substitution
   b) Backward Substitution
2. Master Method

1. Substitution Method
a) Forward Substitution:
This method makes use of the initial condition in the initial term, and the value for the next term is generated.
This process is continued until some formula can be guessed.
Example
 Solve the recurrence relation
  T(n) = T(n−1) + n
 with initial condition T(0) = 0.

Let T(n) = T(n−1) + n. Then
T(1) = T(0) + 1 = 1
T(2) = T(1) + 2 = 3
T(3) = T(2) + 3 = 6
...
By observing this sequence we can derive the formula
T(n) = n(n+1)/2

Check:
If n = 1 then T(1) = 1(1+1)/2 = 2/2 = 1
If n = 2 then T(2) = 2(2+1)/2 = 6/2 = 3
If n = 3 then T(3) = 3(3+1)/2 = 12/2 = 6

T(n) = n(n+1)/2 = n²/2 + n/2, so T(n) = O(n²)
Example: Backward Substitution Method
In this method, earlier values are substituted recursively in order to derive a formula.
 Solve the following recurrence relation:
  T(n) = T(n−1) + n, with initial condition T(0) = 0.

Now consider:
T(n) = T(n−1) + n                        (1)
Replace n with n−1 in equation (1):
T(n−1) = T(n−2) + (n−1)                  (2)
Putting this value in equation (1):
T(n) = T(n−2) + (n−1) + n                (3)
Replace n with n−2 in equation (1):
T(n−2) = T(n−3) + (n−2)                  (4)
Putting this value in equation (3):
T(n) = T(n−3) + (n−2) + (n−1) + n
.............
In general, after k substitutions:
T(n) = T(n−k) + (n−k+1) + (n−k+2) + ... + n
If k = n:
T(n) = T(0) + 1 + 2 + 3 + 4 + ... + n
     = 0 + 1 + 2 + 3 + 4 + ... + n

So, T(n) = n(n+1)/2 = n²/2 + n/2
T(n) = O(n²)
Master Method

T(n) = a·T(n/b) + Θ(n^k · log^p n)

where a ≥ 1, b > 1, k ≥ 0, and p is a real number.

 Example: T(n) = 4T(n/2) + n
The Master Method can be applied to this recurrence equation, because putting a = 4, b = 2, k = 1, p = 0 gives
T(n) = a·T(n/b) + Θ(n^k · log^p n) = 4T(n/2) + Θ(n¹ · log⁰ n) = 4T(n/2) + n

 Solve T(n) = 4T(n/2) + n
Here a = 4, b = 2, k = 1, p = 0.
Case 1 applies, since a > b^k (4 > 2¹):
T(n) = Θ(n^(log₂ 4)) = Θ(n²)

 Solve T(n) = T(n/2) + 1
Here a = 1, b = 2, k = 0, p = 0.
Case 2(a) applies, since a = b^k and p > −1 (0 > −1):
T(n) = Θ(n^(log₂ 1) · log^(p+1) n) = Θ(log n)

 Solve T(n) = 8T(n/2) + n²
Here a = 8, b = 2, k = 2, p = 0.
Case 1 applies, since a > b^k (8 > 2²):
T(n) = Θ(n^(log₂ 8)) = Θ(n³)
Examples
 Solve T(n) = 4T(n/2) + n          Answer: T(n) = Θ(n²)
 Solve T(n) = T(n/2) + 1           Answer: T(n) = Θ(log n)
 Solve T(n) = 8T(n/2) + n²         Answer: T(n) = Θ(n³)

Cont...
 T(n) = 2T(n/2) + n log n          Answer: T(n) = Θ(n log² n)
 T(n) = 4T(n/2) + log n            Answer: T(n) = Θ(n²)

 Explain the Master Theorem and solve the following recurrence equations with the Master Method:
1. T(n) = 9T(n/3) + n              Answer: T(n) = Θ(n²)
2. T(n) = 3T(n/4) + n lg n   (GTU Winter-2014, Marks-07)   Answer: T(n) = Θ(n lg n)
Amortized Analysis
 Amortized analysis means finding the average running time per operation over a worst-case sequence of operations.
 An amortized analysis indicates that the average cost of a single operation is small when the average over a sequence of operations is taken.
 An amortized analysis guarantees the average time per operation over a worst-case sequence.
There are three commonly used techniques:
1) Aggregate analysis
2) Accounting method
3) Potential method
Aggregate analysis
Accounting Method
 The accounting method is based on charges that are assigned to each operation. The idea of the accounting method is as follows:
◦ Assign different charges to different operations.
◦ The amount charged is called the amortized cost (ci').
◦ Then there is the actual cost (ci) of each operation. The amortized cost can be more or less than the actual cost.
◦ When amortized cost > actual cost, the difference is saved with specific objects as credits.
◦ When amortized cost < actual cost, the stored credits are used up.
◦ Thus in the accounting method we can say:
  Amortized cost = actual cost + credits (either deposited or used up)
Cont…
 Following are the conditions used in accounting method-
 Let Ci be the actual cost of ith operation in the
sequence, and Ci’ be the amortized cost of ith operation
in the sequence.
Accounting Method

Actual cost(ci):O(n) and amortized cost (ci’): O(n)


Potential Function method
• In this method there will not be any credit but there will be
some potential “energy” or “potential” which can be used to
pay for future operations.
• Instead of associating potential with specific object it is
associated with the data structure as a whole.
• Let D0 be the initial data structure. Hence for n operations D0,
D1, D2,..., Dn will be the data structure. Let c1,c2,...,cn denotes
the actual cost.
• Let Φ be a potential function that maps each data structure Di to a real number Φ(Di), called the potential of Di.
• Let ai be the amortized cost of the ith operation and ci its actual cost. Then
  ai = ci + Φ(Di) − Φ(Di−1)
• So, over n operations, the total amortized cost equals the total actual cost plus Φ(Dn) − Φ(D0).
• In this method it is required that Φ(Di) ≥ Φ(D0) for all i, to prove that the total amortized cost of n operations is an upper bound on the actual total cost.
• A typical way to do this is to define Φ(D0) = 0 and show that Φ(Di) ≥ 0.
• Over the course of the sequence of operations, the ith operation has a potential difference of Φ(Di) − Φ(Di−1).
• If this value is positive, then the amortized cost ai is an overcharge for this operation, and the potential energy of the data structure increases. If it is negative, it is an undercharge, and the potential energy of the data structure decreases.
• Let's look back at the modified stack. The potential function chosen is simply the number of items on the stack. Therefore, before the sequence of operations begins, Φ(D0) = 0 because there are no items on the stack. For all future operations it is clear that Φ(Di) ≥ 0, because there cannot be a negative number of items on a stack.
Potential Function method

Actual cost(ci):O(n) and amortized cost (ci’): O(n)


Sorting
 Sorting is a process of arranging the data in some specific order.
Ascending Order: If elements are arranged in increasing order then it
is called ascending order.
Descending Order: If elements are arranged in decreasing order then it
is called descending order.

Sorting Techniques:
1) Insertion Sort
2) Selection Sort
3) Bubble Sort
4) Shell Sort
5) Heap Sort
Sorting in linear time:
6) Bucket Sort
7) Radix Sort and Counting Sort
Insertion sort
Example
 Consider an array of 5 elements.
 So we need at most 4 passes to sort the elements.
5 , 4 , 3 , 1 , 2
Insertion Sort Algorithm

Procedure insert(T[1..n])
    for i ← 2 to n do
        x ← T[i]
        j ← i − 1
        while j > 0 and x < T[j] do
            T[j+1] ← T[j]
            j ← j − 1
        T[j+1] ← x
Selection Sort
Initial array: 52, 8, 27, 3, 1

Pass 1: find the minimum of all n elements (n−1 comparisons) and swap it with the first element:
        1, 8, 27, 3, 52
Pass 2: find the minimum of the remaining n−1 elements (n−2 comparisons) and swap it into the second position:
        1, 3, 27, 8, 52
Pass 3: find the minimum of the remaining n−2 elements (n−3 comparisons) and swap it into the third position:
        1, 3, 8, 27, 52
...

So the time complexity of Selection Sort is
T(n) = (n−1) + (n−2) + (n−3) + ... + 1
     = n(n−1)/2
     = O(n²)
Selection Sort Algorithm
C function for Selection Sort:

void Selectionsort(int A[], int n)   /* n = number of elements */
{
    int i, j, Min, temp;
    for(i = 0; i <= n-2; i++)
    {
        Min = i;
        for(j = i+1; j <= n-1; j++)
        {
            if(A[j] < A[Min])
            {
                Min = j;
            }
        }
        temp = A[i];
        A[i] = A[Min];
        A[Min] = temp;
    }
}

T(n) ∈ O(n²)
Bubble Sort Algorithm
 There are n−1 passes in Bubble Sort to sort the numbers into ascending order.

C function for Bubble Sort:

void bubblesort(int a[], int n)
{
    int i, j, temp;
    for(i = 0; i < n-1; i++)
    {
        for(j = 0; j < n-i-1; j++)
        {
            if(a[j] > a[j+1])
            {
                temp = a[j];
                a[j] = a[j+1];
                a[j+1] = temp;
            }
        }
    }
}

Time Complexity: O(n²)
Example
52, 1, 27, 85, 66, 23, 13, 57
14, 33, 27, 10, 35, 19, 42, 44
Radix Sort
 The code for radix sort is straightforward. The following procedure assumes that each element in the n-element array A has d digits, where digit 1 is the lowest-order digit and digit d is the highest-order digit.
Algorithm:
1. Read the total number of elements in the array.
2. Store the unsorted elements in the array.
3. Now the simple procedure is to sort the elements digit by digit.
4. Sort the elements according to the last digit, then the second-last digit, and so on. Example: 123, 567, 841, 692
5. Continue until the elements have been sorted on the most significant digit.
6. Store the sorted elements in the array and print them.
7. Stop.
Example - 1
Sort: 329, 457, 657, 839, 436, 720, 355

Input   Pass 1 (1s)   Pass 2 (10s)   Pass 3 (100s)
329     720           720            329
457     355           329            355
657     436           436            436
839     457           839            457
436     657           355            657
720     329           457            720
355     839           657            839
Example - 2
Sort: 132, 45, 365, 3, 43, 72, 5

Input   Pass 1 (1s)   Pass 2 (10s)   Pass 3 (100s)
132     132           003            003
045     072           005            005
365     003           132            043
003     043           043            045
043     045           045            072
072     365           365            132
005     005           072            365
Bucket sort
 Bucket sort runs in linear time when the input is drawn from a
uniform distribution.

 Bucket sort is a sorting technique in which the array is partitioned into buckets. Each bucket is then sorted individually, using some other sorting algorithm such as insertion sort.

 Algorithm:
1. Set up an array of initially empty buckets.
2. Put each element into its corresponding bucket.
3. Sort each non-empty bucket.
4. Visit the buckets in order, put all the elements back into one sequence, and print them.
Example
9, 32, 49, 4, 39, 7, 22, 16, 19, 42

Distribute into buckets by range:
Bucket 1 (1 to 10):    9, 4, 7
Bucket 2 (11 to 20):   16, 19
Bucket 3 (21 to 30):   22
Bucket 4 (31 to 40):   32, 39
Bucket 5 (41 to 50):   49, 42
Buckets 6-10 (51 to 100): empty

Sort each non-empty bucket using insertion sort:
Bucket 1: 4, 7, 9
Bucket 2: 16, 19
Bucket 3: 22
Bucket 4: 32, 39
Bucket 5: 42, 49

Concatenate the buckets in order:
4, 7, 9, 16, 19, 22, 32, 39, 42, 49
HEAP SORT
 The heap data structure is an array object that can be viewed as a nearly complete binary tree. Each node of the tree corresponds to an element of the array that stores the value in the node.

 Properties:
Structure property: A heap is a completely filled binary tree, with the possible exception of the bottom row, which is filled from left to right.

Heap order property: For every node x in the heap, the value of the parent of x is greater than or equal to the value of x (this is known as a max-heap).
Example

            16
          /    \
        14      10
       /  \    /  \
      8    7  9    3
     / \  /
    2   4 1

Binary Tree

16 14 10 8 7 9 3 2 4 1

The array from the tree
Cont...
Types of binary heap:
Max-heap: In a max-heap, the max-heap property is that for every node i other than the root,
A[PARENT(i)] >= A[i]
Thus the largest element in a max-heap is stored at the root.

Min-heap: The min-heap property is that for every node i other than the root,
A[PARENT(i)] <= A[i]
The smallest element in a min-heap is at the root.
Maintaining the heap property
MAX-HEAPIFY(A, i)
1.  l ← LEFT(i)
2.  r ← RIGHT(i)
3.  if l <= heap-size[A] and A[l] > A[i]
4.      then largest ← l
5.      else largest ← i
6.  if r <= heap-size[A] and A[r] > A[largest]
7.      then largest ← r
8.  if largest != i
9.      then exchange A[i] ↔ A[largest]
10.          MAX-HEAPIFY(A, largest)
Example
The action of MAX-HEAPIFY(A, 2), where heap-size[A] = 10:

Fig. A: 16, 4, 10, 14, 7, 9, 3, 2, 8, 1   (A[2] = 4 violates the max-heap property)
Fig. B: 16, 14, 10, 4, 7, 9, 3, 2, 8, 1   (4 exchanged with its larger child 14)
Fig. C: 16, 14, 10, 8, 7, 9, 3, 2, 4, 1   (4 exchanged with its larger child 8; heap restored)
Building a heap
BUILD-MAX-HEAP(A)
1. heap-size[A] ← length[A]
2. for i ← ⌊length[A]/2⌋ downto 1
3.     do MAX-HEAPIFY(A, i)

Example: A = 4, 1, 3, 2, 16, 9, 10, 14, 8, 7

MAX-HEAPIFY is called for i = 5, 4, 3, 2, 1 in turn:
i = 5: 4, 1, 3, 2, 16, 9, 10, 14, 8, 7   (16 is already larger than its child 7)
i = 4: 4, 1, 3, 14, 16, 9, 10, 2, 8, 7   (2 exchanged with 14)
i = 3: 4, 1, 10, 14, 16, 9, 3, 2, 8, 7   (3 exchanged with 10)
i = 2: 4, 16, 10, 14, 7, 9, 3, 2, 8, 1   (1 sinks past 16 and then 7)
i = 1: 16, 14, 10, 8, 7, 9, 3, 2, 4, 1   (4 sinks to a leaf; the max-heap is built)
Heap Sort algorithm
HEAPSORT(A)
1. BUILD-MAX-HEAP(A)
2. for i ← length[A] downto 2
3.     do exchange A[1] ↔ A[i]
4.        heap-size[A] ← heap-size[A] − 1
5.        MAX-HEAPIFY(A, 1)

Example
Starting from the max-heap 16, 14, 10, 8, 7, 9, 3, 2, 4, 1, repeatedly exchange the root with the last element of the heap, shrink the heap by one, and call MAX-HEAPIFY on the root (the part after "|" is already sorted):

16, 14, 10, 8, 7, 9, 3, 2, 4, 1
14, 8, 10, 4, 7, 9, 3, 2, 1 | 16
10, 8, 9, 4, 7, 1, 3, 2 | 14, 16
9, 8, 3, 4, 7, 1, 2 | 10, 14, 16
8, 7, 3, 4, 2, 1 | 9, 10, 14, 16
7, 4, 3, 1, 2 | 8, 9, 10, 14, 16
4, 2, 3, 1 | 7, 8, 9, 10, 14, 16
3, 2, 1 | 4, 7, 8, 9, 10, 14, 16
2, 1 | 3, 4, 7, 8, 9, 10, 14, 16
1 | 2, 3, 4, 7, 8, 9, 10, 14, 16

Sorted array: 1, 2, 3, 4, 7, 8, 9, 10, 14, 16
Counting Sort
 Link:
http://www.cs.miami.edu/home/burt/learning/Csc517.091/workbook/countingsort.html
Counting Sort
 Counting sort assumes that each of the n input elements is an integer in the range 0 to k, for some integer k. When k = O(n), the sort runs in Θ(n) time.
 In the general idea of the counting algorithm, the input is an array A[1…n], and thus length[A] = n. Two other arrays are used: the array B[1…n] holds the sorted output, and the array C[0…k] provides temporary working storage.
 ALGORITHM:
1. for i ← 0 to k
2.     do C[i] ← 0
3. for j ← 1 to length[A]
4.     do C[A[j]] ← C[A[j]] + 1
5. for i ← 1 to k
6.     do C[i] ← C[i] + C[i − 1]
7. for j ← length[A] downto 1
8.     do B[C[A[j]]] ← A[j]
9.        C[A[j]] ← C[A[j]] − 1
Time Complexity: O(n + k)
Example
1, 3, 7, 8, 1, 1, 3
Shell Sort Algorithm
 Step 1 − Initialize the value of the interval h
 Step 2 − Divide the list into smaller sub-lists of elements h positions apart
 Step 3 − Sort these sub-lists using insertion sort
 Step 4 − Halve h and repeat until the complete list is sorted

Interval: h = floor(h / 2) at each step, starting from floor(n / 2)

Example: see video
Example
61 , 109 , 149 , 111 , 34 , 2 , 24 , 119 , 122 , 125 , 27 , 145
Thank You
GTU QUESTIONS
 Explain why analysis of algorithms is important. Explain Worst Case, Best Case and Average Case complexity with suitable examples. (4 marks)
 Define Algorithm, Time Complexity and Space Complexity. (3 marks)
 Write the Master Theorem. Solve the following recurrences using it. (7 marks)
   (i) T(n) = T(n/2) + 1
   (ii) T(n) = 2T(n/2) + n log n
 OR
 (c) Solve the following recurrence relation using the iterative method: T(n) = T(n − 1) + 1 with T(0) = 0 as the initial condition. Also find its big-oh notation.
 What is asymptotic notation? Find the big-oh notation of f(n) = 3n² + 5n + 10. (4 marks)