1.1 Algorithm:
An algorithm is a finite set of instructions that can be used to perform a certain task. More precisely, an algorithm is a collection of unambiguous instructions occurring in a specific sequence, and it should produce an output for a given set of inputs in a finite amount of time.
1.2 Correctness of an Algorithm:
1.2.1 Partial Correctness: Partial correctness means that for every legal input, if the algorithm terminates, the result produced is valid. Such an algorithm is said to be only partially correct because it may not halt (terminate) for some inputs.
1.2.2 Total Correctness: Total correctness means that for every legal input, the algorithm halts and the output produced is valid.
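As an illustration (not from the notes), the difference can be seen in two small Python searches: the first may fail to halt for some inputs, so it can be at best partially correct, while the second always halts and is totally correct.

def unbounded_search(pred):
    # Returns the smallest n >= 0 with pred(n) True. If no such n exists,
    # the loop never halts: whenever it terminates the answer is valid,
    # but termination is not guaranteed, so it is only partially correct.
    n = 0
    while not pred(n):
        n += 1
    return n

def bounded_search(items, target):
    # Returns the index of target in items, or -1 if absent. The loop runs
    # at most len(items) times, so it always halts with a valid result:
    # the algorithm is totally correct.
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

print(unbounded_search(lambda n: n * n > 50))   # halts here: prints 8
print(bounded_search([4, 2, 7], 7))             # always halts: prints 2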
There are many algorithms that can solve a given problem, and they will have different characteristics that determine how efficiently each operates. When we analyze an algorithm, we first have to show that it properly solves the problem, because if it does not, its efficiency is not important.
Analyzing an algorithm determines the amount of "time" the algorithm takes to execute. This is not really a number of seconds or any other clock measurement, but rather an approximation of the number of operations that the algorithm performs. The number of operations is related to the execution time, so we will sometimes use the word time to describe an algorithm's computational complexity.
The analysis will determine an equation that relates the number of operations that a
particular algorithm does to the size of the input. We can then compare two algorithms by
comparing the rate at which their equations grow.
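As a small sketch (not part of the original notes), the idea of counting operations rather than seconds can be made concrete by counting the comparisons a simple maximum-finding loop performs: the count depends only on the input size n, not on the machine.

def count_comparisons(n):
    # Count the comparisons made while finding the maximum of n values.
    values = list(range(n))
    comparisons = 0
    best = values[0]
    for x in values[1:]:
        comparisons += 1          # one comparison per remaining element
        if x > best:
            best = x
    return comparisons

for n in (10, 100, 1000):
    print(n, count_comparisons(n))   # 9, 99, 999: the count grows linearly with n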
1.4 Complexity of an Algorithm:
1.4.1 Space Complexity: The space complexity of an algorithm is the amount of memory it needs to run to completion.
1.4.2 Time Complexity: The time complexity of an algorithm is the amount of computer time it needs to run to completion. The time complexity of an algorithm is analyzed in three ways:
i. Best Case Time Complexity
ii. Worst Case Time Complexity
iii. Average Case Time Complexity
1.4.2.1 Best Case Time Complexity: The Best-Case time complexity of an algorithm is the
minimum amount of computer time it needs to run to completion.
1.4.2.2 Worst Case Time Complexity: The Worst-Case time complexity of an algorithm is the
maximum amount of computer time it needs to run to completion.
1.4.2.3 Average Case Time Complexity: The Average-Case time complexity of an algorithm is
the average amount of computer time it needs to run to completion.
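For a concrete (illustrative, not from the notes) example of the three cases, consider linear search: the best case occurs when the key is at the first position, the worst case when the key is absent, and the average case lies in between.

def linear_search(a, key):
    # Return the index of key in a, or -1 if key is not present.
    for i, x in enumerate(a):
        if x == key:
            return i              # best case: key at index 0 -> 1 comparison
    return -1                     # worst case: key absent -> n comparisons

a = [5, 3, 9, 1]
print(linear_search(a, 5))        # best case
print(linear_search(a, 7))        # worst case
# Average case: if the key is equally likely to be at any position,
# about (n + 1) / 2 comparisons are made.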
1.5 Algorithm Design Approaches: An algorithm can be designed using two common approaches:
1. Incremental Approach
2. Divide and Conquer Approach
1.5.1 Incremental Approach: This is a simple algorithm design approach in which the problem is solved using conditional statements and loop statements. It is a non-recursive approach. Ex. Insertion Sort.
1.5.1.1 Loop Invariant: A loop invariant is a condition that is necessarily true immediately before and immediately after each iteration of a loop. A loop invariant is used to help us understand why an algorithm is correct. We must show three things about a loop invariant:
Initialization: It is true prior to the first iteration of the loop.
Maintenance: If it is true before an iteration of the loop, it remains true before the next iteration.
Termination: When the loop terminates, the invariant gives us a useful property that helps show
that the algorithm is correct.
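A minimal sketch (not from the notes) of checking a loop invariant in code: for a running-sum loop, the invariant "total equals the sum of the elements processed so far" is true before the first iteration, is maintained by every iteration, and at termination tells us the result is correct.

def running_sum(a):
    total = 0
    # Initialization: before the first iteration, total == sum(a[:0]) == 0.
    for i in range(len(a)):
        assert total == sum(a[:i])   # the invariant holds before iteration i
        total += a[i]
        # Maintenance: after the body, total == sum(a[:i + 1]), so the
        # invariant holds again before the next iteration.
    # Termination: the loop has processed every index, so total == sum(a),
    # which is exactly what we wanted to compute.
    return total

print(running_sum([3, 1, 4, 1, 5]))   # 14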
INSERTION-SORT(A), where A is an array of n numbers:
1. for j ← 2 to n
2.     do key ← A[j]
3.        // Insert A[j] into the sorted sequence A[1 . . j − 1]
4.        i ← j − 1
5.        while i > 0 and A[i] > key
6.            do A[i + 1] ← A[i]
7.               i ← i − 1
8.        A[i + 1] ← key
A constant amount of time is required to execute each line of our pseudo-code. One line
may take a different amount of time than another line, but we shall assume that each
execution of the ith line takes time ci, where ci is a constant.
Let tj denote the number of times the while-loop test in line 5 is executed for that value of j. The cost and the number of times each line is executed are:

Line   Cost   Times
1      c1     n
2      c2     n − 1
4      c4     n − 1
5      c5     ∑j=2..n tj
6      c6     ∑j=2..n (tj − 1)
7      c7     ∑j=2..n (tj − 1)
8      c8     n − 1

(Line 3 is a comment and contributes no cost.)
Running time of insertion sort: The running time of the algorithm is the sum of the running times of each statement executed:
T(n) = c1·n + c2(n − 1) + c4(n − 1) + c5 ∑j=2..n tj + c6 ∑j=2..n (tj − 1) + c7 ∑j=2..n (tj − 1) + c8(n − 1)
Best Case:
In the best case, the array is already sorted. For each j = 2, 3, ..., n, we then find A[i] ≤ key in line 5 when i has its initial value of j − 1. Thus tj = 1 for j = 2, 3, ..., n, and the best-case running time is
T(n) = c1·n + c2(n − 1) + c4(n − 1) + c5(n − 1) + c8(n − 1)
     = (c1 + c2 + c4 + c5 + c8)n − (c2 + c4 + c5 + c8)
This running time can be expressed as an + b for constants a and b. It is a linear function of n.
T(n) = O(n)
Worst Case:
In the worst case, the array is reverse sorted. We must compare each element A[j] with every element of the entire sorted sub-array A[1 . . j − 1], so tj = j for j = 2, 3, ..., n. Then
∑j=2..n tj = n(n + 1)/2 − 1 and ∑j=2..n (tj − 1) = n(n − 1)/2, which gives
T(n) = c1·n + c2(n − 1) + c4(n − 1) + c5(n(n + 1)/2 − 1) + c6·n(n − 1)/2 + c7·n(n − 1)/2 + c8(n − 1)
This worst-case running time can be expressed as an² + bn + c for constants a, b, and c. It is a quadratic function of n.
T(n) = O(n²)
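To make the pseudo-code and the case analysis concrete, here is a minimal Python sketch of insertion sort (illustrative only; it uses 0-based indexing instead of the 1-based indexing of the pseudo-code).

def insertion_sort(a):
    # Sort list a in place, mirroring INSERTION-SORT(A).
    for j in range(1, len(a)):        # pseudo-code: for j <- 2 to n
        key = a[j]
        i = j - 1
        # Shift elements of the sorted prefix a[0..j-1] that are greater
        # than key one position to the right.
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]
# Already-sorted input (best case) does no shifting: O(n) time.
# Reverse-sorted input (worst case) shifts every element: O(n²) time.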
1.5.2 Divide and Conquer Approach: In this approach, the problem is divided into smaller sub-problems, the sub-problems are solved recursively, and their solutions are combined to obtain the solution of the original problem. It is a recursive approach. Ex. Merge Sort.
Let T(n) be the running time on a problem of size n. If the problem size is small enough (i.e. n ≤ c for some constant c), the straightforward solution takes constant time, which we write θ(1). Suppose our problem is divided into a sub-problems, each of which is 1/b the size of the original. If we take D(n) time to divide the problem into sub-problems and C(n) time to combine the solutions to the sub-problems into the solution of the original problem, we get the recurrence
T(n) = { θ(1)                        if n ≤ c
       { a·T(n/b) + D(n) + C(n)      otherwise
MERGE-SORT(A, p, r )
1. if p < r
2. then q ← ⌊(p + r)/2⌋ //Divide
3. MERGE-SORT(A, p, q) // Conquer
4. MERGE-SORT(A, q + 1, r ) //Conquer
5. MERGE(A, p, q, r ) // Combine
MERGE(A, p, q, r )
1. n1 ← q − p + 1
2. n2 ←r − q
3. create arrays L[1 . . n1 + 1] and R[1 . . n2 + 1]
4. for i ← 1 to n1
5. do L[i ] ← A[p + i − 1]
6. for j ← 1 to n2
7. do R[ j ] ← A[q + j ]
8. L[n1 + 1]←∞
9. R[n2 + 1]←∞
10. i ← 1
11. j ← 1
12. for k ← p to r
13. do if L[i ] ≤ R[ j ]
14. then A[k] ← L[i ]
15. i ← i + 1
16. else A[k] ← R[ j ]
17. j ← j + 1
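An illustrative Python translation of MERGE-SORT and MERGE (not part of the original notes; it uses 0-based indexing and slicing in place of the ∞ sentinels).

def merge_sort(a, p, r):
    # Sort a[p..r] in place by divide and conquer.
    if p < r:
        q = (p + r) // 2          # Divide
        merge_sort(a, p, q)       # Conquer
        merge_sort(a, q + 1, r)   # Conquer
        merge(a, p, q, r)         # Combine

def merge(a, p, q, r):
    # Merge the sorted sub-arrays a[p..q] and a[q+1..r].
    left, right = a[p:q + 1], a[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1

a = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(a, 0, len(a) - 1)
print(a)   # [1, 2, 2, 3, 4, 5, 6, 7]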
For merge sort, dividing takes constant time, D(n) = θ(1), and combining (the MERGE procedure) takes C(n) = θ(n), so the recurrence becomes
T(n) = { θ(1)               if n = 1
       { 2T(n/2) + θ(n)     if n > 1
Solving this recurrence (it is case 2 of the master theorem, with a = 2 and b = 2, so n^(log_b a) = n^(log_2 2) = n), we get
T(n) = θ(n^(log_b a) · lg n)
Therefore,
T(n) = θ(n lg n)
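As a quick sanity check (illustrative, not from the notes), the recurrence can be evaluated numerically assuming a base case T(1) = 1; the ratio T(n)/(n lg n) stays close to a constant, consistent with T(n) = θ(n lg n).

import math

def T(n):
    # Evaluate T(n) = 2*T(n/2) + n with T(1) = 1, for n a power of two.
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

for k in (5, 10, 15):
    n = 2 ** k
    print(n, T(n) / (n * math.log2(n)))   # ratio = 1 + 1/lg n, tending to 1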
2. Growth of Function:
The order of growth of the running time of an algorithm gives a simple characterization of the
algorithm's efficiency and also allows us to compare the relative performance of alternative
algorithms.
Approximate values of g(n) for different input sizes n:

n        lg n    n       n lg n    n²      n³      2ⁿ
5        3       5       15        25      125     32
10       4       10      40        100     10³     10³
100      7       100     700       10⁴     10⁶     10³⁰
1000     10      10³     10⁴       10⁶     10⁹     10³⁰⁰
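The functions in the table can be evaluated with a short script (an illustrative check, not part of the original notes); note that the table rounds lg n upward, so the smallest entries differ slightly.

import math

for n in (5, 10, 100, 1000):
    lg = math.log2(n)
    print("n=%-5d lg n=%-5.2f n lg n=%-9.0f n^2=%-8d n^3=%-11d 2^n has %d digits"
          % (n, lg, n * lg, n**2, n**3, len(str(2**n))))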
2.1 Asymptotic Notation:
The notations we use to describe the asymptotic running time of an algorithm are defined in
terms of functions whose domains are the set of natural numbers N = {0, 1, 2, ...}.
For a given function g(n), we denote by O(g(n)) (pronounced "big-oh of g of n" or sometimes
just "oh of g of n") the set of functions
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}.
Example: Show that f(n) = 6n² + 5n + 4 = O(n²). We need positive constants c and n0 such that
6n² + 5n + 4 ≤ c·n² for all n ≥ n0, i.e.
6 + 5/n + 4/n² ≤ c
For n = 1, 6 + 5 + 4 = 15; for n = 2, 6 + 2.5 + 1 = 9.5; for n = 3, 6 + 1.67 + 0.44 ≈ 8.1.
As n increases, the left-hand side decreases, so its maximum value is 15, attained at n = 1. Choosing c = 15 and n0 = 1 therefore satisfies the definition, and
f(n) = O(n²)
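A quick numeric check (illustrative, not from the notes) that the chosen constants satisfy the definition with c = 15 and n0 = 1:

def f(n):
    return 6 * n**2 + 5 * n + 4

c, n0 = 15, 1
# f(n) <= c * n^2 must hold for every n >= n0.
print(all(f(n) <= c * n**2 for n in range(n0, 10000)))   # True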
For a given function g(n), we denote by θ(g(n)) the set of functions
θ(g(n)) = {f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for
all n ≥ n0}.
Example: Show that f(n) = 6n² + 5n + 4 = θ(n²). We need positive constants c1, c2, and n0 such that
c1·n² ≤ 6n² + 5n + 4 ≤ c2·n² for all n ≥ n0, i.e.
c1 ≤ 6 + 5/n + 4/n² ≤ c2
For n = 1 the middle expression is 15; for n = 2 it is 9.5; for n = 3 it is about 8.1. As n increases it decreases towards 6 but always stays above 6, so we can choose c1 = 6, c2 = 15, and n0 = 1. Hence
f(n) = θ(n²)
For a given function g(n), we denote by Ω(g(n)) the set of functions
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0}.
2.2 Asymptotic Notation in Equations and Inequalities:
o-notation:
The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound 2n² = O(n²) is asymptotically tight, but the bound 2n = O(n²) is not. We use o-notation to denote an upper bound that is not asymptotically tight. We formally define o(g(n)) (“little-oh of g of n”):
o(g(n)) = {f(n): for any positive constant c>0, there exists a constant n0 >0 such that
0 ≤ f(n) < cg(n) for all n ≥ n0}.
ω-notation:
ω (g(n)) = {f(n): for any positive constant c>0, there exists a constant n0 >0 such that
0 ≤ cg(n) < f(n) for all n ≥ n0}.
E.g. n²/2 = ω(n), but n²/2 ≠ ω(n²). The relation f(n) = ω(g(n)) implies that lim (n→∞) f(n)/g(n) = ∞, if this limit exists; that is, f(n) becomes arbitrarily large relative to g(n) as n approaches infinity.
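Illustrative check (not from the notes): for f(n) = n²/2 and g(n) = n, the ratio f(n)/g(n) grows without bound, consistent with n²/2 = ω(n); against g(n) = n² the ratio stays at the constant 1/2, so n²/2 ≠ ω(n²).

def f(n):
    return n * n / 2              # f(n) = n²/2

for n in (10, 1000, 100000):
    print(n, f(n) / n, f(n) / (n * n))   # first ratio grows without bound; second stays 0.5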