Notes 1

The document provides an overview of algorithm analysis, focusing on time and space complexity, and the importance of understanding algorithm performance for design decisions. It discusses various complexity classes, including best, worst, and average case scenarios, and introduces the Big O notation for describing algorithm efficiency. Additionally, it covers methods for solving recurrence equations and analyzing specific algorithms like insertion sort and linear search.


CS302 Design and Analysis of Algorithms

Module-I

Introduction to Algorithm Analysis – Time and Space Complexity – Elementary Operations and Computation of Time Complexity – Best, Worst and Average Case Complexities – Complexity Calculation of Simple Algorithms – Recurrence Equations: Solution of Recurrence Equations – Iteration Method and Recursion Tree Methods
Analysis of Algorithm
The practical goal of algorithm analysis is to predict the performance of different
algorithms in order to guide design decisions.
Algorithm analysis is an important part of computational complexity theory, which
provides a theoretical estimate of the resources an algorithm requires to solve a
specific computational problem. Most algorithms are designed to work with
inputs of arbitrary length. Analysis of algorithms is the determination of the
amount of time and space resources required to execute them.
The relative performance of the algorithms might depend on
 characteristics of the hardware
 the details of the dataset
 the size of the problem
The time taken by an algorithm grows with the size of its input, so the running
time of a program is a function of the input size. The running time of an algorithm
on a particular input is the number of primitive operations, or steps, executed.
Analysis of an algorithm is the process of evaluating its problem-solving capability
in terms of the time and space required (the amount of memory needed during
execution). However, the main concern of analysis of algorithms is the required
time, or performance. Generally, we perform the following types of analysis −
 Worst case − the maximum number of steps taken on any instance of size n.
 Best case − the minimum number of steps taken on any instance of size n.
 Average case − the average number of steps taken over all instances of size n.
Complexity analysis is an essential aspect of algorithm design. Algorithmic
complexity is concerned with performance: it describes the efficiency of an
algorithm in terms of the amount of memory required to process the data and the
processing time.
Complexity of an algorithm is analyzed in two perspectives: Time and Space.
Time Complexity
 It’s a function describing the amount of time required to run an algorithm in
terms of the size of the input. "Time" can mean the number of memory
accesses performed, the number of comparisons between integers, the
number of times some inner loop is executed, or some other natural unit
related to the amount of real time the algorithm will take.
Space Complexity
 It’s a function describing the amount of memory an algorithm takes in terms
of the size of the input to the algorithm. We often speak of "extra" memory
needed, not counting the memory needed to store the input itself.

Random Access Machine model

Algorithms can be measured in a machine-independent way using the Random
Access Machine (RAM) model. This model assumes a single processor. In the
RAM model, instructions are executed one after the other, with no concurrent
operations. This model of computation is an abstraction that allows us to compare
algorithms on the basis of performance. The assumptions made in the RAM model
to accomplish this are:

 Each simple operation takes one time step.
 Loops and subroutines are not simple operations.
 Each memory access takes one time step, and there is no shortage of
memory.

For any given problem the running time of an algorithm is assumed to be the
number of time steps. The space used by an algorithm is assumed to be the number
of RAM memory cells.

Order of Growth

Order of growth describes how the computation time of an algorithm increases as
the input size increases. It matters most when the input size is very large.
Order of growth provides only a crude description of the behavior of a process. It
considers the leading term and ignores lower-order terms, since they are
insignificant for large inputs. It also ignores the constant coefficient of the
leading term.

Big O notation is used in Computer Science to describe the performance or
complexity of an algorithm. Big O specifically describes the worst-case scenario,
and can be used to describe the execution time required or the space used (e.g. in
memory or on disk) by an algorithm.

Order of Growth    Big O notation
Constant           O(1)
Logarithmic        O(log n)
Linear             O(n)
Loglinear          O(n log n)
Quadratic          O(n²)
Cubic              O(n³)
Exponential        O(2ⁿ), O(10ⁿ)

According to their orders of growth, the Big O notations can be arranged in
increasing order as:
O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(10ⁿ)
Analysis of algorithm

1) O(1): The time complexity of a function (or set of statements) is O(1) if it
contains no loop, no recursion, and no call to any other non-constant-time
function.

// set of non-recursive and non-loop statements

A loop or recursion that runs a constant number of times is also O(1). For
example, the following loop is O(1).

// Here c is a constant
for (int i = 1; i <= c; i++) {
// some O(1) expressions
}

2) O(n): The time complexity of a loop is O(n) if the loop variable is incremented
or decremented by a constant amount. For example, the following loops have O(n)
time complexity.

// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
// some O(1) expressions
}
for (int i = n; i > 0; i -= c) {
// some O(1) expressions
}

3) O(nᶜ): The time complexity of nested loops is proportional to the number of
times the innermost statement is executed. For example, the following sample
loops have O(n²) time complexity.

for (int i = 1; i <= n; i += c) {
for (int j = 1; j <= n; j += c) {
// some O(1) expressions
}
}
for (int i = n; i > 0; i -= c) {
for (int j = i+1; j <= n; j += c) {
// some O(1) expressions
}
}
For example, Selection Sort and Insertion Sort have O(n²) time complexity.

4) O(log n): The time complexity of a loop is O(log n) if the loop variable is
divided or multiplied by a constant amount.

// Here c is a constant greater than 1
for (int i = 1; i <= n; i *= c) {
// some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
// some O(1) expressions
}

For example, Binary Search has O(log n) time complexity.

5) O(log log n): The time complexity of a loop is O(log log n) if the loop variable
is increased or decreased exponentially, i.e. raised to a constant power on each
iteration.

// Here c is a constant greater than 1
for (int i = 2; i <= n; i = pow(i, c)) {
// some O(1) expressions
}
// Here fun is sqrt, cbrt, or any other constant-root function
for (int i = n; i > 0; i = fun(i)) {
// some O(1) expressions
}

Time complexities of consecutive loops

When there are consecutive loops, we calculate the time complexity as the sum of
the time complexities of the individual loops.
for (int i = 1; i <=m; i += c) {
// some O(1) expressions
}
for (int i = 1; i <=n; i += c) {
// some O(1) expressions
}
Time complexity of above code is O(m) + O(n) which is O(m+n)
If m == n, the time complexity becomes O(2n) which is O(n).

One example of time complexity analysis

void fun(int n)
{
for (int i = 1; i <= n; i++)
{
for (int j = 1; j < n; j += i)
{
// Some O(1) task
}
}
}

Analysis:

For i = 1, the inner loop is executed n times.
For i = 2, the inner loop is executed approximately n/2 times.
For i = 3, the inner loop is executed approximately n/3 times.
For i = 4, the inner loop is executed approximately n/4 times.
...
For i = n, the inner loop is executed approximately n/n times.
So the total time complexity of the above algorithm is (n + n/2 + n/3 + … + n/n),
which becomes n * (1/1 + 1/2 + 1/3 + … + 1/n).
The important point is that the harmonic series (1/1 + 1/2 + 1/3 + … + 1/n) is
O(log n).
So the time complexity of the above code is O(n log n).

Best, worst and Average Case Complexities

We can have three cases to analyze an algorithm:
1) Worst case – the running time of an algorithm is the function defined by the
maximum number of steps taken on any instance of size n.
2) Average case – the running time of an algorithm is the function defined by the
average number of steps taken on any instance of size n.
3) Best case – the running time of an algorithm is the function defined by the
minimum number of steps taken on any instance of size n.
Let us consider the following implementation of Linear Search, a sentinel search
that scans from the end of the array.
Function SeqSearch(a: array of elements, n: number of elements, x: the value to
search for)
Begin
i = n
a[0] = x // sentinel: guarantees the loop terminates
while (a[i] <> x) do
i = i - 1
end while
return i // i = 0 means x was not found
End

Solution: In this example, the search process begins from the end of a.
Successful searches:
Best case: x is found in the last position, a[n], so the loop body never executes.
The best-case time complexity is:
TB SeqSearch(n) = 4 = Θ(1)
Worst case: x is found in the first position, a[1], so:
TW SeqSearch(n) = 2 + 2n = Θ(n)
Average case: it is the average of the complexities over all cases.
Failed searches: when x does not exist in the array, the sentinel a[0] stops the
loop:
TF SeqSearch(n) = n + 1 = Θ(n)

Insertion sort

Insertion sort works just like the way you arrange a pack of playing cards in your
hand.
1. Start with an empty hand and a pack of cards (face down) on the table.
2. Pick one card at a time from the table and insert it into the correct position in
your hand.
3. To find the correct position of the card, compare it with each card in the hand
from right to left.
(Notice that the cards in the hand are always sorted.)
Algorithmically:
Insertion-Sort(A)

1. for i = 2 to length(A)       // start with the 2nd element: the first element is
trivially sorted
2.     x = A[i]                 // x is the element to insert into the already sorted
prefix A[1..i-1]
3.     j = i - 1                // last index of the sorted prefix; that's where the
comparisons start
4.     while j > 0 and A[j] > x // whenever there is an element greater than x
5.         A[j+1] = A[j]        // shift it to the right
6.         j = j - 1
7.     end while
8.     A[j+1] = x               // the correct position to insert x
9. end for

Analyzing the time complexity:

Best case:
1. The outer for loop runs (n-1) times.
4. The inner while loop exits on the very first comparison. // e.g. sorted set: 2
4 5 7, element to insert: 9
1 + 1 + 1 + ... ((n-1) times) => Ω(n)

Average case:
1. The outer for loop runs (n-1) times.
4. On average the inner while loop runs over half of the sorted prefix. // e.g.
sorted set: 2 4 10 12, element to insert: 9
1/2 + 2/2 + 3/2 + ... + (n-1)/2 = (1+2+3+...+(n-1))/2
= ((n-1)*n)/4 => Θ(n²)

Worst case:
1. The outer for loop runs (n-1) times.
4. In the worst case the inner while loop runs over the entire sorted prefix. //
e.g. sorted set: 10 11 12 13, element to insert: 9
1 + 2 + 3 + ... + (n-1) = ((n-1)*n)/2 => O(n²)

Maximum element in a one-dimensional array:

Function ArrayMax(a: array of elements, n: number of elements)
Begin
1. max = a(1)
2. for i = 2 to n
3.     if max < a(i) then
4.         max = a(i)
5.     end if
6. end for
7. return max
End
The instance characteristic of this problem is n.

Solution:
The successful cases:
Best case: a(1) is the maximum, so the if block is never entered (the condition
is never true):
Line 1: 1
Line 2: (n-2+1)+1 = n
Line 3: n-1
Line 4: 0
Line 7: 1
TB ArrayMax(n) = 2n + 1 = Θ(n)
Worst case: the array a is sorted in increasing order, so the maximum is a(n) and
the if condition is true on every iteration:
Line 1: 1
Line 2: (n-2+1)+1 = n
Line 3: n-1
Line 4: n-1
Line 7: 1
TW ArrayMax(n) = 3n = Θ(n)
Average case: it is the average of the complexities over all cases, from the best
to the worst.

Recurrence Equations

Many algorithms, particularly divide-and-conquer algorithms, have time
complexities that are naturally modeled by recurrence relations. A recurrence
relation is an equation defined in terms of itself.

A recurrence equation has a general (recursive) case, which breaks the problem
into smaller and smaller pieces, and a base case (initial or boundary condition)
that terminates the recursion.
Solution of Recurrence Equations

There are three methods by which we can solve recurrence equations:

 Iteration method
 Recursion tree method
 Master’s theorem

Iteration Method

In the iteration method we iteratively “unfold” the recurrence until we “see the
pattern”. The idea is to expand (iterate) the recurrence and express it as a
summation of terms dependent only on n and the initial conditions. Techniques for
evaluating summations can then be used to provide bounds on the solution. The
iteration method does not require making a good guess like the substitution method
(but it is often more involved than using induction).
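As a worked example (the recurrence is a standard one, not taken from these notes), consider T(n) = 2T(n/2) + n with T(1) = 1 and n a power of 2. Iterating the recurrence:

```latex
\begin{aligned}
T(n) &= 2\,T(n/2) + n \\
     &= 2\bigl(2\,T(n/4) + n/2\bigr) + n = 4\,T(n/4) + 2n \\
     &= 4\bigl(2\,T(n/8) + n/4\bigr) + 2n = 8\,T(n/8) + 3n \\
     &\;\;\vdots \\
     &= 2^{k}\,T(n/2^{k}) + kn .
\end{aligned}
```

The expansion stops when n/2^k = 1, i.e. k = log₂ n, giving T(n) = n·T(1) + n log₂ n = O(n log n).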
Recursion Tree Method

A recursion tree is a tree in which each node represents the cost of a certain
recursive subproblem. In this method, we draw a recursion tree and calculate the
time taken by every level of the tree. Finally, we sum the work done at all levels.
To draw the recursion tree, we start from the given recurrence and keep expanding
until we find a pattern among the levels. The pattern is typically an arithmetic or
geometric series.
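To illustrate with the recurrence T(n) = 2T(n/2) + n (my example, with n a power of 2): level i of the tree has 2^i nodes, each costing n/2^i, so every level contributes exactly n, and there are log₂ n + 1 levels:

```latex
T(n) = \underbrace{n}_{\text{level }0}
     + \underbrace{2\cdot\tfrac{n}{2}}_{\text{level }1}
     + \underbrace{4\cdot\tfrac{n}{4}}_{\text{level }2}
     + \cdots
     = \sum_{i=0}^{\log_2 n} 2^{i}\cdot\frac{n}{2^{i}}
     = n\,(\log_2 n + 1) = O(n \log n).
```

The per-level costs here form the constant series n, n, n, …; for other recurrences they form an arithmetic or geometric series that is summed the same way.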
