DAA PPT-1: Worst Case and Average Case Analysis, Asymptotic Notations

The document provides information about algorithms and their analysis. It discusses key concepts like asymptotic analysis using Big-O, Omega, and Theta notations to determine the time complexity of algorithms. Common complexities like constant, logarithmic, linear, quadratic, and exponential are described. Algorithm design techniques such as divide-and-conquer, greedy, dynamic programming, branch-and-bound, backtracking, and randomized algorithms are covered. Examples of applying asymptotic notations to functions are provided to calculate upper bounds, lower bounds, and tight bounds.

Uploaded by sudhanshu

Unit-1

DAA
Unit-1 Syllabus
• Mathematical foundations: summation of arithmetic and geometric series, Σn, Σn², bounding summations using integration; analysis of algorithms, analyzing control structures, worst case and average case analysis, asymptotic notations; analysis of sorting algorithms such as selection sort, insertion sort, bubble sort, heap sort; external sorting; lower bound proofs.
Course Outcomes
After completion of the course, student will be able to:
CO1 : Remember the concepts of algorithms.
CO2 : Understand time requirements of an algorithm and
mathematical techniques used in analysis of algorithms.
CO3 : Analyze the Complexities of different algorithms
for a wide variety of foundational problems occurring in
computer science applications.
CO4 : Apply the knowledge of different algorithms with
discussions on complexity.
CO5 : Evaluate the knowledge of algorithms with
Complexity and NP-completeness.
Index
• Algorithm
• Characteristics of Algorithms
• Analysis of algorithm
• Asymptotic Analysis
• How to approximate the time taken by the
Algorithm?
• Typical Complexities of an Algorithm
• Example of Asymptotic Notation
What is an Algorithm?
• An algorithm is a finite set of instructions that specifies a sequence of operations to be carried out in order to solve a specific problem or class of problems.
• An algorithm is a procedure used for solving a problem or performing a computation. Algorithms act as an exact list of instructions that carry out specified actions step by step in either hardware- or software-based routines.
How do algorithms work?
• Algorithms can be expressed as natural languages, programming languages, pseudocode, flowcharts and control tables. Natural-language expressions are rare, as they are ambiguous. Programming languages are normally used for expressing algorithms executed by a computer.
Characteristics of Algorithms
• Input: It should externally supply zero or more quantities.
• Output: It results in at least one quantity.
• Definiteness: Each instruction should be clear and unambiguous.
• Finiteness: An algorithm should terminate after executing a
finite number of steps.
• Effectiveness: Every instruction should be basic enough to be carried out, in principle, by a person using only pen and paper, and should be feasible with the available resources.
• Unambiguous − Algorithm should be clear and unambiguous.
Each of its steps (or phases), and their inputs/outputs should
be clear and must lead to only one meaning.
• Independent − An algorithm should have step-by-step
directions, which should be independent of any programming
code.
Analysis of algorithm
The analysis is a process of estimating the efficiency of an algorithm. There are two fundamental parameters based on which we can analyze an algorithm:
• Space Complexity: The space complexity can be
understood as the amount of space required by
an algorithm to run to completion.
• Time Complexity: Time complexity is a function
of input size n that refers to the amount of time
needed by an algorithm to run to completion.

Asymptotic Analysis
• Big O Notation: The Big O notation defines an upper bound of an algorithm; it bounds a function only from above. It is used to calculate the worst-case time complexity. Mathematically, if f(n) describes the running time of an algorithm, then f(n) is O(g(n)) if and only if there exist positive constants c and n₀ such that:
• f(n) = O(g(n))
if f(n) <= c·g(n) for all n >= n₀, where c is a constant.
• Here, n is the input size, and g(n) is any complexity function, e.g. n, n², etc. (It is used to give an upper bound on a function.)
Asymptotic Analysis
• Ω Notation: Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an asymptotic lower bound. It is used to calculate the best-case time complexity. Let f(n) define the running time of an algorithm; f(n) is said to be Ω(g(n)) if and only if there exist positive constants c and n₀ such that:
• f(n) = Ω(g(n))
if f(n) >= c·g(n) for all n >= n₀, where c is a constant.
Asymptotic Analysis
• Θ Notation: The theta notation bounds a function from above and below, so it defines exact asymptotic behaviour. It is a tight bound, used to calculate the average-case time complexity. f(n) is said to be Θ(g(n)) if f(n) is O(g(n)) and f(n) is Ω(g(n)).
• Mathematically,
• f(n) = Θ(g(n))
if c1·g(n) <= f(n) <= c2·g(n) for all n >= n₀, where c1 and c2 are constants.
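The three notations above can be summarized formally (writing n₀ for the threshold that appears as n° in the slides):

```latex
\begin{aligned}
f(n) = O(g(n)) &\iff \exists\, c > 0,\ n_0 > 0 \ \text{such that}\ f(n) \le c\,g(n)\ \text{for all}\ n \ge n_0 \\
f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 > 0 \ \text{such that}\ f(n) \ge c\,g(n)\ \text{for all}\ n \ge n_0 \\
f(n) = \Theta(g(n)) &\iff \exists\, c_1, c_2 > 0,\ n_0 > 0 \ \text{such that}\ c_1\,g(n) \le f(n) \le c_2\,g(n)\ \text{for all}\ n \ge n_0
\end{aligned}
```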
How to approximate the time taken
by the Algorithm?
There are two types of algorithms:
• Iterative Algorithm: In the iterative approach, the function
repeatedly runs until the condition is met or it fails. It
involves the looping construct.
• Example:
A()
{
    int i;
    for (i = 1 to n)
        printf("Edward");
}
• Time complexity: O(n)
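The pseudocode above can be turned into a runnable C sketch (the return value counting iterations is an addition for illustration, not part of the slides):

```c
#include <stdio.h>

/* Runnable version of the iterative pseudocode: the loop body
   executes exactly n times, so the running time is O(n).
   Returns the number of iterations performed for illustration. */
int A(int n)
{
    int i, steps = 0;
    for (i = 1; i <= n; i++) {
        printf("Edward\n");
        steps++;          /* one unit of work per iteration */
    }
    return steps;
}
```

Calling A(n) prints the string n times, so the operation count grows linearly with the input size.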
How to approximate the time taken by the
Algorithm?
• Recursive Algorithm: In the recursive approach, the
function calls itself until the condition is met. It
integrates the branching structure.
• Example 1:
A(n)
{
    if (n > 1)
        return A(n - 1);
}
• Time complexity: T(n) = 1 + T(n-1), which unrolls to O(n).
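A runnable C sketch of the recursion above, modified (as an assumption for illustration) to return its own call count so the recurrence T(n) = 1 + T(n-1) can be checked directly:

```c
/* Each call does constant work and makes one call on n-1, so
   T(n) = 1 + T(n-1), which unrolls to T(n) = n, i.e. O(n). */
int A(int n)
{
    if (n > 1)
        return 1 + A(n - 1);   /* T(n) = 1 + T(n-1) */
    return 1;                  /* base case: T(1) = 1 */
}
```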
Typical Complexities of an Algorithm
• Constant Complexity:
It imposes a complexity of O(1). It undergoes an execution of
a constant number of steps like 1, 5, 10, etc. for solving a
given problem. The count of operations is independent of the
input data size.
• Logarithmic Complexity:
It imposes a complexity of O(log(N)). It undergoes the
execution of the order of log(N) steps. To perform operations
on N elements, it often takes the logarithmic base as 2.
• Linear Complexity:
– It imposes a complexity of O(N). It encompasses the same number of steps as the total number of elements to implement an operation on N elements.
• Linearithmic Complexity:
– It imposes a run time of O(N·log(N)), as in efficient comparison-based sorting.
• Quadratic Complexity: It imposes a complexity of O(n²). For N input data size, it undergoes the order of N² count of operations on N elements.
• Cubic Complexity: It imposes a complexity of O(n³). For N input data size, it executes the order of N³ steps on N elements to solve a given problem.
• Exponential Complexity: It imposes a complexity such as O(2ⁿ), O(N!), O(nⁿ), etc. For N elements, it executes a count of operations that grows exponentially with the input data size.
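To make the logarithmic case concrete, a standard binary search sketch (not from the slides) halves the search range at every step, so it needs at most about log₂(N) + 1 comparisons:

```c
/* Binary search on a sorted array: the range [lo, hi] is halved
   on every iteration, giving O(log n) time.
   Returns the index of key in a[0..n-1], or -1 if absent. */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo + hi) / 2 */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;               /* key lies in the upper half */
        else
            hi = mid - 1;               /* key lies in the lower half */
    }
    return -1;                          /* key not present */
}
```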
Algorithm Design Techniques
The following is a list of several popular design approaches:
1. Divide and Conquer Approach: It is a top-down approach. The algorithms which
follow the divide & conquer techniques involve three steps:
• Divide the original problem into a set of subproblems.
• Solve every subproblem individually, recursively.
• Combine the solution of the subproblems (top level) into a solution of the whole
original problem.
2. Greedy Technique: The greedy method is used to solve optimization problems. An optimization problem is one in which we are given a set of input values and a quantity that must be either maximized or minimized (known as the objective), possibly subject to some constraints or conditions.
• A greedy algorithm always makes the choice (the greedy criterion) that looks best at the moment, to optimize a given objective.
• The greedy algorithm doesn't always guarantee the optimal solution however it
generally produces a solution that is very close in value to the optimal.
3. Dynamic Programming: Dynamic programming is a bottom-up approach: we solve all possible small problems and then combine them to obtain solutions for bigger problems. This is particularly helpful when the number of overlapping subproblems is exponentially large. Dynamic programming is frequently applied to optimization problems.
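A minimal sketch of the bottom-up idea, using Fibonacci numbers as the stock example (an assumption for illustration, not from the slides): the two smaller subproblems are solved first and combined, each only once.

```c
/* Bottom-up dynamic programming: fib(n) is built from the two
   smaller subproblems fib(n-1) and fib(n-2), each solved exactly
   once, giving O(n) time instead of the exponential naive recursion. */
long fib(int n)
{
    long prev = 0, curr = 1;           /* fib(0) and fib(1) */
    for (int i = 2; i <= n; i++) {
        long next = prev + curr;       /* combine the two subproblems */
        prev = curr;
        curr = next;
    }
    return n == 0 ? 0 : curr;
}
```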
Algorithm Design Techniques
4. Branch and Bound: In a branch and bound algorithm, a given subproblem that cannot be bounded has to be divided into at least two new restricted subproblems. Branch and bound algorithms are methods for global optimization in non-convex problems. They can be slow: in the worst case they require effort that grows exponentially with problem size, but in some cases we are lucky and the method converges with much less effort.
5. Randomized Algorithms: A randomized algorithm is defined as an algorithm that is allowed to access a source of independent, unbiased random bits, and it is then allowed to use these random bits to influence its computation; it uses a random number at least once during the computation to make a decision.
6. Backtracking Algorithm: A backtracking algorithm tries each possibility until it finds the right one. It is a depth-first search of the set of possible solutions. During the search, if an alternative doesn't work, the algorithm backtracks to the choice point, the place which presented different alternatives, and tries the next alternative.
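The backtracking idea can be sketched with the classic subset-sum question (an illustrative example assuming nonnegative values, not from the slides): decide for each element whether to include it, and backtrack to the previous decision point when a choice leads to a dead end.

```c
/* Backtracking sketch: for each element, try including it, then try
   excluding it; if neither branch reaches the target, return to the
   previous decision point (backtrack) and try its other alternative.
   Returns 1 if some subset of a[0..n-1] sums exactly to target. */
int subset_sum(const int a[], int n, int target)
{
    if (target == 0) return 1;                 /* found a valid subset */
    if (n == 0) return 0;                      /* dead end: backtrack */
    if (a[n - 1] <= target &&
        subset_sum(a, n - 1, target - a[n - 1]))
        return 1;                              /* branch: include a[n-1] */
    return subset_sum(a, n - 1, target);       /* branch: exclude a[n-1] */
}
```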
Example of Asymptotic Notation
• Find the upper bound, lower bound and tight bound range for the following functions
• Find the Big O notation and Omega notation for the following functions
1) 2n+5    2) 3n+2    3) 3n+3    4) 5n+8
5) 10n²+7    6) 2n²+5    7) 3n²+4n+6    8) 20n²+8n+2
9) 6n²+6n+2n    10) 4n²+2n+3    11) 5n²+n²+6n+2    12) 5·2ⁿ+n²
1) 2n+5
Consider f(n) = 2n+5, g(n) = n
For 2n+5:   LB = 2n   TB = 2n   UB = 3n   (upper-bound coefficient is always +1)
1) Big O Notation:
f(n) <= c·g(n)
where c = 3 (Big O notation is an upper bound)
2n+5 <= 3n
For n=1: 2·1+5 <= 3·1 → 7 <= 3    false
    n=2: 2·2+5 <= 3·2 → 9 <= 6    false
    n=3: 2·3+5 <= 3·3 → 11 <= 9   false
    n=4: 2·4+5 <= 3·4 → 13 <= 12  false
    n=5: 2·5+5 <= 3·5 → 15 <= 15  true
So f(n) = O(g(n)):
2n+5 = O(n) for all n >= 5, with c = 3
2) Omega Notation (Ω):
f(n) >= c·g(n)
where c = 2 (Omega notation is a lower bound)
2n+5 >= 2n
For n=1: 2·1+5 >= 2·1 → 7 >= 2  true
So f(n) = Ω(g(n)):
2n+5 = Ω(n) for all n >= 1, with c = 2

3) Theta Notation (Θ):
c1·g(n) <= f(n) <= c2·g(n)
where c1 = 2 and c2 = 3 (Theta notation uses the lower-bound and upper-bound constants)
2n <= 2n+5 <= 3n
For n=1: 2 <= 7 <= 3      false
    n=2: 4 <= 9 <= 6      false
    n=3: 6 <= 11 <= 9     false
    n=4: 8 <= 13 <= 12    false
    n=5: 10 <= 15 <= 15   true
So f(n) = Θ(g(n)):
2n+5 = Θ(n) for all n >= 5, with c1 = 2 and c2 = 3
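The table above can be verified mechanically; a small C sketch (helper names are my own) that checks both bounds for f(n) = 2n+5:

```c
/* Empirical check of the worked example f(n) = 2n + 5. */
int upper_holds(int n) { return 2 * n + 5 <= 3 * n; }  /* Big O:  c  = 3 */
int lower_holds(int n) { return 2 * n + 5 >= 2 * n; }  /* Omega:  c  = 2 */
```

The upper bound first holds at n = 5, matching the threshold n₀ = 5 found by the table, while the lower bound holds from n = 1.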
12) 5·2ⁿ+n²
Consider f(n) = 5·2ⁿ+n², g(n) = 2ⁿ
For 5·2ⁿ+n²:   LB = 5·2ⁿ   TB = 5·2ⁿ   UB = 6·2ⁿ   (upper-bound coefficient is always +1)
1) Big O Notation:
f(n) <= c·g(n)
where c = 6 (Big O notation is an upper bound)
5·2ⁿ+n² <= 6·2ⁿ, which holds exactly when n² <= 2ⁿ
For n=1: 5·2+1 <= 6·2     → 11 <= 12  true
    n=2: 5·4+4 <= 6·4     → 24 <= 24  true
    n=3: 5·8+9 <= 6·8     → 49 <= 48  false
    n=4: 5·16+16 <= 6·16  → 96 <= 96  true (and stays true from here on)
So f(n) = O(g(n)):
5·2ⁿ+n² = O(2ⁿ) for all n >= 4, with c = 6
2) Omega Notation (Ω):
f(n) >= c·g(n)
where c = 5 (Omega notation is a lower bound)
5·2ⁿ+n² >= 5·2ⁿ
For n=1: 5·2+1 >= 5·2 → 11 >= 10  true
So f(n) = Ω(g(n)):
5·2ⁿ+n² = Ω(2ⁿ) for all n >= 1, with c = 5

3) Theta Notation (Θ):
c1·g(n) <= f(n) <= c2·g(n)
where c1 = 5 and c2 = 6 (Theta notation uses the lower-bound and upper-bound constants)
5·2ⁿ <= 5·2ⁿ+n² <= 6·2ⁿ
For n=1: 10 <= 11 <= 12  true
    n=4: 80 <= 96 <= 96  true (the upper bound fails only at n=3)
So f(n) = Θ(g(n)):
5·2ⁿ+n² = Θ(2ⁿ) for all n >= 4, with c1 = 5 and c2 = 6
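The upper bound for this example can likewise be checked mechanically (helper name is my own); since 5·2ⁿ+n² <= 6·2ⁿ reduces to n² <= 2ⁿ, the check is:

```c
/* Check of 5*2^n + n^2 <= 6*2^n, which reduces to n^2 <= 2^n. */
int upper_holds(int n)
{
    long p = 1L << n;                  /* 2^n */
    return 5 * p + (long)n * n <= 6 * p;
}
```

The bound holds at n = 1 and n = 2, fails at n = 3, and holds for every n >= 4, matching the threshold n₀ = 4 above.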
