Analysis of Algorithm-Mmr1

Dr. M. M. Raghuwanshi
Ph.D. (Computer Sc.), VNIT, Nagpur
M.Tech. (Computer Sc. & DP), IIT, Kharagpur
Director and Professor, Symbiosis Institute of Technology, Nagpur
Email: [email protected]
FUNDAMENTALS OF ALGORITHMIC PROBLEM SOLVING

• Understanding the problem
  Ask questions, work a few examples by hand, think about special cases, etc.
• Deciding on
  Exact vs. approximate problem solving
  An appropriate data structure
• Designing an algorithm: efficient and effective
• Proving correctness
• Analyzing the algorithm
  Time efficiency: how fast the algorithm runs
  Space efficiency: how much extra memory the algorithm needs
  Simplicity and generality
• Coding the algorithm
• Testing the algorithm
THE CLASSIC MULTIPLICATION ALGORITHM

Multiplication à la russe: repeatedly halve the first number (discarding remainders) and double the second; then add up the doubled values in the rows where the halved value is odd. For 981 × 1234:

  981    1234    1234
  490    2468
  245    4936    4936
  122    9872
   61   19744   19744
   30   39488
   15   78976   78976
    7  157952  157952
    3  315904  315904
    1  631808  631808
              -------
              1210554

This algorithm, called multiplication à la russe, resembles the one used in the hardware of a binary computer.
THE CLASSIC MULTIPLICATION ALGORITHM
Multiplication, the American way (process the digits of the multiplier from the least significant end):

int american() {
    int n1 = 981, n2 = 1234, prod = 0;
    int i = 1, d;
    while (n2) {
        d = n2 % 10;            /* current least significant digit */
        n2 = n2 / 10;
        prod += n1 * d * i;     /* partial product shifted by i */
        i *= 10;
    }
    return prod;
}

Multiplication, the English way (process the digits of the multiplier from the most significant end):

int english() {
    int n1 = 981, n2 = 1234, prod = 0;
    int n = n2, i = 1, d;
    while (n) {                 /* compute i = 10^(number of digits) */
        n = n / 10;
        i *= 10;
    }
    while (i != 1) {
        i = i / 10;
        d = n2 / i;             /* current most significant digit */
        prod = prod + d * n1 * i;
        n2 = n2 % i;
    }
    return prod;
}
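The à la russe scheme from the earlier slide can be coded in the same style. This is a sketch, not from the slides; the function name russe and the hard-coded operands 981 and 1234 mirror the worked example:

```c
#include <stdio.h>

int russe() {
    int m = 981, n = 1234, prod = 0;
    while (m >= 1) {
        if (m % 2 == 1)     /* keep the rows whose left column is odd */
            prod += n;
        m = m / 2;          /* halve, discarding the remainder */
        n = n * 2;          /* double */
    }
    return prod;
}
```

Replacing the halving, parity test and doubling with bit operations (m >> 1, m & 1, n << 1) shows why this resembles how binary hardware multiplies.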
ALGORITHM

The word algorithm is named after the ninth-century scholar Abu Ja'far Muhammad ibn Musa al-Khwarizmi:

"A great Iranian mathematician, geographer and astronomer. He introduced the zero, negative numbers, algebra, and the decimal system to the West. He also invented mathematical programming using a set of instructions to perform complex calculations. The term algorithm is named after a variation of his name, Algoritmi."

Roughly speaking, an algorithm can be defined as:

• A set of rules for carrying out a calculation, either by hand or on a machine.
• A finite step-by-step procedure to achieve a required result.
• A sequence of computational steps that transform the input into the output.
• A sequence of operations performed on data that have to be organized in data structures.
• An abstraction of a program to be executed on a physical machine (model of computation).
ALGORITHM

An algorithm must satisfy the following criteria:

Input: Zero or more quantities are externally supplied.

Output: At least one quantity is produced.

Definiteness: Each instruction is clear and unambiguous (e.g., "add salt to taste" is ambiguous; "add 1 teaspoon of salt" is definite).

Finiteness: If we trace out the instructions of an algorithm, then for all cases the algorithm terminates after a finite number of steps. (A program may or may not be finite.)

Effectiveness: Every instruction must be very basic so that it can be carried out, in principle, by a person using only pencil and paper.
PERFORMANCE ANALYSIS OF ALGORITHM

Two important ways to characterize the effectiveness of an algorithm:

 Space complexity: the amount of memory it needs to run to completion.

 Time complexity: the amount of computer time it needs to run to completion.

Memory and processor are the two important parts of a computer system.
REASON FOR STUDY OF ALGORITHM

Suppose computers were infinitely fast and computer memory was free. Would you have any reason to study algorithms?

Answer: YES

We would still need to demonstrate the
definiteness,
finiteness and
effectiveness
of offered solutions, that is, of algorithms.

Computing time and memory space are bounded resources.

A wise use of these resources makes an algorithm more efficient.

Total system performance depends as much on choosing efficient algorithms as on choosing fast and rich hardware.
ANALYZING ALGORITHMS

Analysing an algorithm means predicting the resources that the algorithm requires.

Unlike programs, algorithms are not dependent on a particular programming language, machine, system, or compiler.

They are mathematical entities, which can be thought of as running on some sort of idealized computer with an infinite random-access memory and an unlimited word size (with sufficient computation power).

Once the model of computation has been defined, an algorithm can be described using a simple language (or pseudo-language) whose syntax is close to programming languages such as C or Java.
RAM MODEL

In the RAM (Random Access Machine) model:

• all instructions operate in serial (no concurrency, no parallel computation);
• all atomic operations (addition, multiplication, subtraction, read, compare, store, etc.) take unit time; and
• all atomic data (chars, ints, doubles, pointers, etc.) take up unit space.

In other words, the machine
• has one processor,
• executes one instruction at a time,
• takes "unit time" for each instruction,
• has fixed-size operands, and
• has fixed-size storage (RAM and disk).
RUNNING TIME OF AN ALGORITHM

The most natural measure of instance size is the number of items in the instance, but the best measure is the total number of bits needed to represent the instance (e.g., instead of talking about the multiplication of two numbers we should talk about the number of digits in the numbers).

The running time of an algorithm on a particular instance is the number of primitive operations or "steps" executed (e.g., additions and multiplications in our case).

It is convenient to define the notion of a step so that it is as machine-independent as possible.

The number of steps any program statement is assigned depends on the kind of statement:

 comments count as zero steps
 a simple assignment statement counts as one step
 in an iterative statement such as a for, while or do-while statement, we count steps only for the control part (the condition)
 the exact time required for a step is unimportant
 each step is assumed to execute at unit cost
FOR EXAMPLE

Algorithm to find the area and circumference (perimeter) of a circle:

Statement                         Steps  Frequency  Total steps
Read dia                          1      1          1
area = 3.14*(dia/2)*(dia/2)       1      1          1
perimeter = 2*3.14*(dia/2)        1      1          1
Print area, perimeter             1      1          1
Step count                                          4

For a sequential algorithm (no loops), the total number of steps is a constant integer value. Time complexity: constant, or O(1).

#include <stdio.h>
#define pi 3.14
int main()
{
    float dia, area, perimeter;
    printf("Enter diameter of a circle:");
    scanf("%f", &dia);
    area = pi * (dia/2) * (dia/2);
    perimeter = 2 * pi * (dia/2);
    printf("Area:%5.2f\tPerimeter:%5.2f\n", area, perimeter);
    return 0;
}
FOR EXAMPLE
Algorithm to sum the members of a list:

Statement                     s/e  Frequency  Total steps
1. Algorithm Sum(list, n)     0    --         0
2. {                          0    --         0
3.   sum = 0.0;               1    1          1
4.   for i = 1 to n do        1    n+1        n+1
5.     sum = sum + list[i];   1    n          n
6.   return sum;              1    1          1
7. }                          0    --         0
Step count                                    2n+3

The polynomial equation for the step count is 2n+3. Execution time depends on n. Time complexity: O(n).

#include <stdio.h>
#define N 5
int list[N] = {3, 6, 8, 2, 9};
int main()
{
    int sum = 0, i;
    for (i = 0; i < N; i++) {
        sum += list[i];
    }
    printf("Sum of numbers in the list:%d\n", sum);
    return 0;
}
FOR EXAMPLE
Algorithm for the addition of two matrices:

Statement                           s/e  Frequency  Total steps
1. Algorithm Add(a, b, c, n, n)     0    --         0
2. {                                0    --         0
3.   for i = 1 to n do              1    n+1        n+1
4.     for j = 1 to n do            1    n(n+1)     n²+n
5.       c[i,j] = a[i,j] + b[i,j];  1    n²         n²
6. }                                0    --         0
Step count                                          2n²+2n+1

The polynomial equation for the step count is 2n²+2n+1. Execution time depends on n. Time complexity: O(n²).

/* matA, matB, matC are N×N int arrays and Print_mat() prints one;
   their definitions are omitted on the slide */
int main()
{
    int i, j;
    printf("Matrix A:\n");
    Print_mat(matA);
    printf("Matrix B:\n");
    Print_mat(matB);
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            matC[i][j] = matA[i][j] + matB[i][j];
    printf("Matrix C:\n");
    Print_mat(matC);
    return 0;
}
LINEAR SEARCH
Let (23, 60, 5, 45, 70) be an unordered linear list of elements.
Search whether the value 45 is in the list or not.
Search whether the value 55 is in the list or not.
LINEAR SEARCH

 Best case: the element to be searched is at the first position (minimum number of steps executed): O(1).

 Worst case: the element to be searched is at the last position or not in the list (maximum number of steps executed): O(n).

 Average case: the element to be searched is in the list (average number of steps executed over instances): O(n).

Statement                            s/e  Frequency  Total steps
1. Algorithm LinSrch(list, val, n)   0    --         0
2. {                                 0    --         0
3.   for i = 1 to n do               1    n+1        n+1 (1)
4.     if (list[i] == val)           1    n          n (1)
5.       return 1;                   1    1          1
6.   return 0;                       1    1          1
7. }                                 0    --         0
Step count (best case): 3
Step count (worst case): 2n+2

#define N 5
int list[N] = {7, 3, 6, 8, 2};
int main() {
    int num, pos;
    printf("Enter number to be searched:");
    scanf("%d", &num);
    for (pos = 0; pos < N; pos++) {
        if (num == list[pos]) {
            printf("%d is at %d\n", num, pos);
            return pos;
        }
    }
    printf("%d is not in list\n", num);
    return -1;
}

Worst-case analysis is appropriate for an algorithm whose response time is critical.
BINARY SEARCH
Let (23, 36, 45, 51, 55, 57, 61, 70, 82) be a linear list of elements sorted in ascending order.
Search whether the value 45 is in the list or not.
Search whether the value 65 is in the list or not.
BINARY SEARCH

 Best case: the element to be searched is at the middle position (minimum number of steps executed): O(1).

 Worst case: the element to be searched is at the extreme ends or not in the list (maximum number of steps executed): O(log2n).

 Average case: the element to be searched is in the list (average number of steps executed over instances): O(log2n).

#define N 5
int list[N] = {2, 4, 6, 8, 11};
int main() {
    int num, mid, lb = 0, ub = N-1;
    printf("Enter number to be searched:");
    scanf("%d", &num);
    while (lb <= ub) {
        mid = (ub + lb) / 2;        /* probe the middle of the current range */
        if (num == list[mid]) {
            printf("%d is at position %d\n", num, mid);
            return mid;
        }
        else if (num < list[mid])
            ub = mid - 1;           /* continue in the left half */
        else
            lb = mid + 1;           /* continue in the right half */
    }
    printf("%d is not in list\n", num);
    return -1;
}
LOGARITHMIC ALGORITHMS
#define n 64
int main() {
    int i = 1, count = 0;
    while (i < n) {         /* i doubles each pass, so count ≈ log2(n) */
        i = i * 2;
        count++;
        printf("n=%d\ti=%d\tcount=%d\n", n, i, count);
    }
    return 0;
}

int main() {
    int i, n = 10, j, count;
    for (j = 1; j <= n; j++) {
        i = 1;
        count = 0;
        while (i < j*8) {   /* count ≈ log2(8j) */
            i = i * 2;
            count++;
        }
        printf("j=%d\ti=%d\tcount=%d\n", j, j*8, count);
    }
    return 0;
}
RUNNING TIME OF AN ALGORITHM
Consider the classic multiplication of two N-digit numbers:

  98 × 12 = 1176
  2398 × 1245 = 2985510
  87654321 × 12345678 = 1082152022374638

N                2    4    8    n
Multiplications  4    16   64   n²      quadratic growth
Additions        3    7    15   2n−1    linear growth

The running time of an algorithm typically grows with the instance size.
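The quadratic count of digit multiplications can be checked with a small instrumented routine. This is a sketch, not from the slides; storing digits least significant first and the global counter mults are choices of this sketch:

```c
#include <stdio.h>

long mults = 0;     /* counts single-digit multiplications */

/* multiply two n-digit numbers a and b (digits least significant first);
   result must have room for 2n digits */
void multiply(const int a[], const int b[], int n, int result[]) {
    int i, j, k;
    for (k = 0; k < 2*n; k++)
        result[k] = 0;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++) {
            result[i+j] += a[i] * b[j];   /* one single-digit multiplication */
            mults++;
        }
    for (k = 0; k < 2*n - 1; k++) {       /* propagate the carries */
        result[k+1] += result[k] / 10;
        result[k] %= 10;
    }
}
```

For n = 2 (98 × 12) this performs exactly 4 = n² digit multiplications, matching the table above.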
ORDER OF GROWTH OF ALGORITHM
To analyse the efficiency of an algorithm we are interested in how the running time increases as the input size increases.

Step count for summing the elements of a list: 2n+3
Step count for matrix addition: 2n²+2n+1

For n=5:  2n+3 = 13 and 2n²+2n+1 = 61
For n=10: 2n+3 = 23 and 2n²+2n+1 = 221

n      log n    n log n   n²     n³     2ⁿ
1      0        0         1      1      2
2      0.30103  0.60206   4      8      4
10     1        10        100    1000   1024
100    2        200       10⁴    10⁶    1.27E+30
1000   3        3000      10⁶    10⁹    1.1E+301
10000  4        40000     10⁸    10¹²   #NUM! (overflow)
(log is taken to base 10 here)

Functions in order of increasing growth rate:
FUNCTION   NAME
c          Constant
log n      Logarithmic
n          Linear
n log n    n log n
n²         Quadratic
n³         Cubic
2ⁿ         Exponential

When two algorithms are compared with respect to their behaviour (logarithmic, linear, quadratic, cubic, exponential, etc.) for large input sizes, a very useful measure is the order of growth.
GROWTH OF RUNNING TIME
[Figure: T(n) versus n on logarithmic axes (1 to 1E+30), showing the linear, quadratic and cubic growth curves]

Complexity classes
[Figure: f(n) curves in increasing order of growth: log n, n (linear time), n log n, n², n³, 2ⁿ, n!]
NEED FOR NOTATIONS

Problems with raw step counts:

1. The formulas derived from the step count of an algorithm may often be quite complex.

2. The main purpose of analysing an algorithm is to get a sense of the trend in the algorithm's running time. (An exact analysis is probably best done by implementing the algorithm and measuring CPU seconds.)

3. We need a language (notation) that allows us to say that the computing time, as a function of n, grows "on the order of n³", or "at most as fast as n³", etc.

Asymptotic notations are a simple way of representing complex functions that captures their essential growth-rate properties.
ASYMPTOTIC NOTATIONS

Asymptotic notations are commonly used to characterize the complexity of an algorithm.

The principle of invariance says that the ratio of the running times of two different implementations of the same algorithm is always bounded above and below by fixed constants.

Let f(n) and g(n) be the running times of two algorithms on inputs of size n. We say that f(n) is in the order of g(n) if f(n) is bounded above by a positive real multiple of g(n) for all sufficiently large n.

Maximum rule: the time taken by an algorithm is logically the sum of the times taken by its disjoint parts, but it is in the order of the time taken by its most time-consuming part.

Asymptotic notation provides a way to simplify the functions that arise in analysing running times by
• ignoring constant factors, and
• concentrating on the trends for large values of n.
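The informal phrase "bounded above by a positive real multiple of g(n) for all sufficiently large n" is exactly the standard formal definition of big-O (standard textbook material, not stated on the slide):

```latex
f(n) \in O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ge 0 \ \text{such that}\ 0 \le f(n) \le c\,g(n) \ \text{for all } n \ge n_0
```

The constants c and n_0 are exactly the values tabulated in the example on the next slide.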
EXAMPLE
n   2n+3   n²   n³    n⁴     2ⁿ
0   3      0    0     0      1
1   5      1    1     1      2
2   7      4    8     16     4
3   9      9    27    81     8
4   11     16   64    256    16
5   13     25   125   625    32
6   15     36   216   1296   64
7   17     49   343   2401   128
8   19     64   512   4096   256

For f(n) = 2n+3:
f(n) = O(n²) for c=1 & n0=3
f(n) = O(n³) for c=1 & n0=2
f(n) = O(n⁴) for c=1 & n0=2
f(n) = O(2ⁿ) for c=1 & n0=4

[Figure: plot of these functions for 0 ≤ n ≤ 8, y-axis from 0 to 1000]
RECURRENCE

When an algorithm contains a repetitive call to itself (or to some other algorithm), its running time can be described by a recurrence, or recurrence relation.

A recurrence is a recursive description of an algorithm, i.e. a description of the algorithm in terms of itself.

A recurrence is an equation or inequality that describes a function in terms of its value on smaller inputs.

Example: recursive definition of the factorial:
(1) 0! = 1
(2) n! = n · (n − 1)!  for n > 0

The recursive algorithm to compute the factorial of n is:
Function factorial(n)
    if (n == 0)
        return 1;
    else
        return n * factorial(n-1);

The recurrence relation is:
T(n) = Θ(1)           if n = 0
T(n) = T(n−1) + 1     if n > 0
RECURRENCE- EXAMPLES
Example: recursive definition of the power aⁿ:
(1) a⁰ = 1
(2) aⁿ = a · aⁿ⁻¹  for n > 0

The recurrence relation is:
T(n) = Θ(1)           if n = 0
T(n) = T(n−1) + 1     if n > 0

Example: the nth Fibonacci number of the sequence:
function Fibrec(n)
    if n < 2 then return n
    else return Fibrec(n-1) + Fibrec(n-2)

The recurrence relation is:
T(n) = Θ(1)                  if n = 0, 1
T(n) = T(n−1) + T(n−2)       if n ≥ 2

Example: merge sort:
MergeSort(A[1 .. n]):
    if (n > 1)
        m = n/2
        MergeSort(A[1 .. m])
        MergeSort(A[m+1 .. n])
        Merge(A[1 .. n], m)

The recurrence relation is:
T(n) = Θ(1)                  if n = 1
T(n) = 2T(n/2) + Θ(n)        if n > 1

Example: selection sort:
Procedure SelectionSort(A[1..n])
    for i = 1 to n-1
        min = Select(A[i..n])
        Swap(A[i], A[min])

This gives us the recurrence T(n) = Θ(n) + T(n − 1).
PLAN FOR ANALYSIS OF RECURSIVE ALGORITHMS

• Decide on a parameter indicating an input's size.

• Identify the algorithm's basic operation.

• Check whether the number of times the basic operation is executed may vary on different inputs of the same size.
(If it may, the worst, average, and best cases must be investigated separately.)

• Set up a recurrence relation, with an appropriate initial condition, expressing the number of times the basic operation is executed.

• Solve the recurrence (or, at the very least, establish its solution's order of growth).
METHODS TO SOLVE RECURRENCE

In all the examples we have:

1. A basis, where the function is explicitly evaluated for one or more values of its argument.
2. A recursive step, stating how to compute the function from its previous values.

T(n) = Θ(1)           if n = 0
T(n) = T(n−1) + 1     if n > 0

There are several techniques for solving recurrence relations:
• Substitution method (guess plus induction hypothesis)
• Recursion-tree method
• Iteration method (expanding the iterations)
• Master method

T(n) = Θ(1)           if n = 1
T(n) = 2T(n/2) + Θ(n) if n > 1

Types of recurrence:
 Homogeneous linear
 Inhomogeneous linear
 Homogeneous non-linear
 Inhomogeneous non-linear
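As a worked instance of the iteration method, the merge-sort recurrence above can be expanded (assuming n is a power of 2 and writing the Θ(n) term simply as n):

```latex
\begin{aligned}
T(n) &= 2T(n/2) + n \\
     &= 4T(n/4) + 2n \\
     &= 8T(n/8) + 3n \;=\; \cdots \;=\; 2^k\,T(n/2^k) + k\,n \\
     &= n\,T(1) + n\log_2 n = \Theta(n \log n) \qquad (\text{at } k = \log_2 n)
\end{aligned}
```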
THE SORTING PROBLEM
Input: a sequence of n numbers (a1, a2, …, an).
Output: a permutation (reordering) (a'1, a'2, …, a'n) of the input sequence such that a'1 ≤ a'2 ≤ … ≤ a'n.
Selection sort:
Logic: find the largest element in the list and its position and swap it with the first element; then find the second largest element and its position and swap it with the second element; and so on.
(With "largest first", the traces on the following slides sort in descending order.)
SELECTION SORT
Logic: find the largest element in the list and its position and swap it with the first element; then find the second largest element and its position and swap it with the second element; and so on.

1 4 5 7 9   find the largest element & its position; swap it with the first element.
9 4 5 7 1   Total: 4 comparisons and 1 swap

9 4 5 7 1   find the 2nd largest element & its position; swap it with the 2nd element.
9 7 5 4 1   Total: 3 comparisons and 1 swap

9 7 5 4 1   find the 3rd largest element & its position; swap it with the 3rd element.
9 7 5 4 1   Total: 2 comparisons and 1 swap

9 7 5 4 1   find the 4th largest element & its position; swap it with the 4th element.
9 7 5 4 1   Total: 1 comparison and 1 swap

There are 10 comparisons and 4 swaps for sorting the list.

Two properties of sorting algorithms:
1. Stability: a sorting algorithm is called stable if it preserves the relative order of any two equal elements in its input.
2. In place: a sorting algorithm is in place if it does not require extra memory, except possibly for a few memory units.
SELECTION SORT
void selection_sort() {                 /* step counts                    */
    int big, pos, i, j;
    for (j = 0; j < N-1; j++) {         /* n                              */
        big = list[j];                  /* n-1                            */
        pos = j;                        /* n-1                            */
        for (i = j+1; i < N; i++) {     /* (n-1)+(n-2)+…+2+1 = n(n-1)/2   */
            if (list[i] > big) {
                big = list[i];
                pos = i;
            }
        }
        list[pos] = list[j];            /* n-1                            */
        list[j] = big;                  /* n-1                            */
    }
}

Comparisons ≈ n²
Swaps: 1 per pass (n−1 in total)
Time complexity: O(n²) (best case & worst case)

Two properties of sorting algorithms:
1. Stability: a sorting algorithm is called stable if it preserves the relative order of any two equal elements in its input. (Not stable)
2. In place: a sorting algorithm is in place if it does not require extra memory, except possibly for a few memory units. (Yes)
SELECTION SORT
#include <stdio.h>
#define N 5
//int list[N] = {1, 2, 3, 4, 5};
int list[N] = {5, 4, 3, 2, 1};

void print_list() {
    int i;
    for (i = 0; i < N; i++)
        printf("%d\t", list[i]);
    printf("\n");
}

void selection_sort() {
    int temp, pos, i, j;
    for (j = 0; j < N-1; j++) {
        pos = j;
        for (i = j+1; i < N; i++)
            if (list[i] > list[pos])
                pos = i;
        temp = list[pos];
        list[pos] = list[j];
        list[j] = temp;
        //print_list();
    }
}

int main() {
    printf("Unsorted list:\t");
    print_list();
    selection_sort();
    //bubble_sort();
    //insertion_sort();
    printf("Sorted list:\t");
    print_list();
    return 0;
}

Comparisons ≈ n²
Swaps: 1 per pass
Time complexity: O(n²) (best case & worst case)
SELECTION SORT
void selection_sort1() {                /* step counts                    */
    int temp, i, j;
    for (j = 0; j < N-1; j++) {         /* n                              */
        for (i = j+1; i < N; i++) {     /* (n-1)+(n-2)+…+2+1 = n(n-1)/2   */
            if (list[i] > list[j]) {    /* ≈ n² comparisons               */
                temp = list[j];         /* swap on every hit: ≈ n² swaps  */
                list[j] = list[i];
                list[i] = temp;
            }
        }
    }
}

Comparisons ≈ n²
Swaps ≈ n²
Time complexity: O(n²) (best case & worst case)

Two properties of sorting algorithms:
1. Stability: a sorting algorithm is called stable if it preserves the relative order of any two equal elements in its input. (Not stable)
2. In place: a sorting algorithm is in place if it does not require extra memory, except possibly for a few memory units. (Yes)
THE SORTING PROBLEM
Bubble Sort:
Logic: shift or push (bubble out) the smallest element to the last position, the second smallest element to the last-but-one position, and so on.

14579 Comparison, swap


41579 Comparison, swap
45179 Comparison, swap
45719 Comparison, swap
45791 Total 4 Comparisons and 4 swaps
45791 Comparison, swap
54791 Comparison, swap
57491 Comparison, swap
57941 Total 3 Comparisons and 3 swaps
57941 Comparison, swap
75941 Comparison, swap
79541 Total 2 Comparisons and 2 swaps
79541 Comparison, swap
97541 Total 1 Comparison and 1 swap
There are 10 comparisons and 10 swaps for sorting the list.
THE SORTING PROBLEM
Bubble sort:
Logic: shift or push (bubble out) the smallest element to the last position, the second smallest element to the last-but-one position, and so on.

void bubble_sort() {                     /* step counts                    */
    int i, j, t;
    for (i = 0; i < N-1; i++) {          /* n                              */
        for (j = 0; j < N-i-1; j++) {    /* (n-1)+(n-2)+…+2+1 = n(n-1)/2   */
            if (list[j] < list[j+1]) {   /* descending order               */
                t = list[j];
                list[j] = list[j+1];
                list[j+1] = t;
            }
        }
    }
}

Time complexity: O(n²) (best case & worst case)

With an early-exit flag it is possible to have a best-case time complexity of O(n):

void bubble_sort1() {
    int i, j, t, swapped;
    for (i = 0; i < N-1; i++) {
        swapped = 0;
        for (j = 0; j < N-i-1; j++) {
            if (list[j] < list[j+1]) {
                t = list[j];
                list[j] = list[j+1];
                list[j+1] = t;
                swapped = 1;
            }
        }
        if (!swapped)       /* no swap in a full pass: the list is sorted */
            break;
        //print_list();
    }
}
INSERTION SORT
Logic: Select the key and make space and place it at proper position in the
sorted list of previous elements.

14579 key is 4, Compare (1, 4), shift


_1579 Place key (4) at blank position.
41579 Total 1 Comparison and 1 shift
41579 key is 5, Compare (1, 5), shift
4_179 Compare (4, 5), shift
_4179 Place key (5) at blank position.
54179 Total 2 Comparisons and 2 shifts
54179 key is 7, Compare (1, 7), shift
54_19 Compare (4, 7), shift
5_419 Compare (5, 7), shift
_5419 Place key (7) at blank position.
75419 Total 3 Comparisons and 3 shifts
75419 key is 9, Compare (1, 9), shift
754_1 Compare (4, 9), shift There are
75_41 Compare (5, 9), shift 10 comparisons
7_541 Compare (7, 9), shift and
_7541 Place key (9) at blank position. 10 shifts
97541 Total 4 Comparisons and 4 shifts for sorting list.
INSERTION SORT
Logic: Select the key and make space and place it at proper position in the
sorted list of previous elements.
void insertion_sort() {                  /* step counts                   */
    int i, j, t;
    for (i = 1; i < N; i++) {            /* n                             */
        t = list[i];                     /* n-1 (t is the key)            */
        for (j = i-1; j >= 0; j--) {     /* 1+2+…+(n-2)+(n-1) = n(n-1)/2  */
            if (t > list[j])
                list[j+1] = list[j];     /* shift                         */
            else
                break;
        }
        list[j+1] = t;                   /* place the key (works for j == -1 too) */
        //print_list();
    }
}

Worst-case time complexity: O(n²)
Best-case time complexity: O(n)
COMPARISON SORTING ALGORITHMS

Table. Comparison of insertion sort with other sorting methods

Parameter                   Insertion sort       Selection sort       Bubble sort
                            Best       Worst     Best       Worst     Best       Worst
No. of comparisons          O(n)       O(n²)     O(n²)      O(n²)     O(n)       O(n²)
No. of swaps and/or shifts  0          O(n²)     O(1)       O(n²)     0          O(n²)
Memory usage                O(1)       O(1)      O(1)       O(1)      O(1)       O(1)
Recursion                   No                   No                   No
Stability                   Yes                  No                   Yes
MERGE SORT
Bottom-up approach
Logic:
• Treat every individual element in a list as a separate list; then there are n sub-lists in the list.
• A list with a single element is always sorted.
• Hence there are n sorted sub-lists, each having 1 element.
• Merge two adjacent sorted sub-lists to build a sorted sub-list of 2 elements, then use two such sorted sub-lists of 2 elements to build a sorted sub-list of 4 elements, and so on.
• The process of building larger sorted lists from smaller sorted sub-lists continues until one gets the final sorted version of the input list.
MERGE SORT
Top-down approach
Logic: divide the list to be sorted into two nearly equal sub-lists. Sort both sub-lists and finally combine them in such a way that the final merged list is sorted. To start with, keep on dividing the list into two smaller sub-lists until one gets a list with 0 or 1 element; a null list or a list with a single element is already sorted.

Algorithm: MergeSort(list, beg, end)
    if (list contains 2 or more elements)
        mid = (beg + end) / 2
        MergeSort(list, beg, mid)
        MergeSort(list, mid+1, end)
        Merge(list, beg, end)
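The slides call Merge(list, beg, end) but do not show it. Below is a minimal sketch of one common implementation; the temporary array temp and the recomputation of mid inside Merge are choices of this sketch, not taken from the slides:

```c
#include <stdio.h>

/* merge the two sorted halves list[beg..mid] and list[mid+1..end] */
void Merge(int list[], int beg, int end) {
    int mid = (beg + end) / 2;
    int temp[end - beg + 1];              /* scratch space: O(n) extra memory */
    int i = beg, j = mid + 1, k = 0;
    while (i <= mid && j <= end)          /* take the smaller head element */
        temp[k++] = (list[i] <= list[j]) ? list[i++] : list[j++];
    while (i <= mid) temp[k++] = list[i++];   /* copy any leftovers */
    while (j <= end) temp[k++] = list[j++];
    for (k = 0; k <= end - beg; k++)          /* copy back into place */
        list[beg + k] = temp[k];
}

void MergeSort(int list[], int beg, int end) {
    if (beg < end) {                      /* 2 or more elements */
        int mid = (beg + end) / 2;
        MergeSort(list, beg, mid);
        MergeSort(list, mid + 1, end);
        Merge(list, beg, end);
    }
}
```

Because <= prefers the left half on ties, equal elements are never reordered across the halves, which is why merge sort is stable.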
QUICK SORT
Logic:
1. Pick an element, called a pivot, from the list.
2. Reorder the list so that all elements less than the pivot come before the pivot and all elements greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation; it creates a left sub-list (the elements with smaller values) and a right sub-list (the elements with greater values) that still need to be sorted.
3. Recursively sort the sub-list of lesser elements (left) and the sub-list of greater elements (right).
QUICK SORT

Algorithm: Quicksort(list, beg, end)
    if (list contains 2 or more elements)
        p = Partition(list, beg, end)
        Quicksort(list, beg, p-1)
        Quicksort(list, p+1, end)
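Partition(list, beg, end) is not given on the slides. One common choice is the Lomuto scheme sketched below; taking the last element as the pivot is an assumption of this sketch:

```c
#include <stdio.h>

/* Lomuto partition: place list[end] (the pivot) into its final
   position and return that position */
int Partition(int list[], int beg, int end) {
    int pivot = list[end];
    int i = beg - 1, j, t;
    for (j = beg; j < end; j++) {
        if (list[j] < pivot) {            /* grow the "smaller than pivot" zone */
            i++;
            t = list[i]; list[i] = list[j]; list[j] = t;
        }
    }
    t = list[i+1]; list[i+1] = list[end]; list[end] = t;
    return i + 1;                         /* the pivot's final position */
}

void Quicksort(int list[], int beg, int end) {
    if (beg < end) {                      /* 2 or more elements */
        int p = Partition(list, beg, end);
        Quicksort(list, beg, p - 1);
        Quicksort(list, p + 1, end);
    }
}
```

With this pivot choice an already-sorted input makes every partition maximally unbalanced, which is the source of quick sort's O(n²) worst case.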

Table. Comparison of quick sort with insertion sort and merge sort

Parameter                   Insertion sort       Quick sort           Merge sort
                            Best       Worst     Average    Worst     Best       Worst
No. of comparisons          O(n)       O(n²)     O(n lg n)  O(n²)     O(n lg n)  O(n lg n)
No. of swaps and/or shifts  0          O(n²)     O(n)       0         O(n lg n)  O(n lg n)
Memory usage                O(1)       O(1)      O(lg n)    O(n)      O(n)       O(n)
Recursion                   No                   Yes                  No/Yes
Stability                   Yes                  No                   Yes
Past is Experience...!
Present is Experiments...!
Future is Expectations...!

Use your Experience in your Experiments to achieve your Expectations...!!!
