
DESIGN AND ANALYSIS OF ALGORITHM

Prepared by
Murari Kumar Singh
Assistant Professor
CSE SET, Greater Noida
Why study algorithms?
Their impact is broad and far-reaching.

• Internet: Web search, packet routing, distributed file sharing, ...

• Biology/Medical: Human genome project, protein folding, ...
Why study algorithms?

• Computers: Circuit layout, file system, compilers, ...

• Computer graphics: Movies, video games, virtual reality, ...

• Security: Cell phones, e-commerce, voting machines, ...

• Multimedia: MP3, JPG, DivX, HDTV, face recognition, ...

• Social networks: Recommendations, news feeds, advertisements, ...

• Physics: N-body simulation, particle collision simulation, ...


Why study algorithms?
• To solve problems that could not otherwise be addressed.
Why study algorithms?

• Algorithms help us to understand scalability.

• Performance draws the line between what is feasible and what is impossible.

• Algorithmic mathematics provides a language for talking about program behavior.

• Speed is fun!
Why study algorithms?
• For intellectual stimulation.

"For me, great algorithms are the poetry of computation. Just like verse, they can be terse, allusive, dense, and even mysterious. But once unlocked, they cast a brilliant new light on some aspect of computing." — Francis Sullivan

"An algorithm must be seen to be believed." — Donald Knuth


Why study algorithms?
• To become a proficient programmer.

"I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important. Bad programmers worry about the code. Good programmers worry about data structures and their relationships." — Linus Torvalds (creator of Linux)

"Algorithms + Data Structures = Programs." — Niklaus Wirth


Why study algorithms?
• They may unlock the secrets of life and of the universe.

• Computational models are replacing mathematical models in scientific inquiry: 20th-century science was formula based; 21st-century science is algorithm based.

"Algorithms: a common language for nature, human, and computer." — Avi Wigderson
Why study algorithms?

• For fun and profit.


What is an Algorithm?

An algorithm
• is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.

• is thus a sequence of computational steps that transform the input into the output.

Set of values (input) → Algorithm → Set of values (output)
What is an Algorithm?

• is a tool for solving a well-specified computational problem.

• is any special method of solving a certain kind of problem.

What is a program?

• A program is the expression of an algorithm in a programming language:

• a set of instructions which the computer follows to solve a problem.
Important Features of an Algorithm:

• Finiteness: the algorithm must terminate after a finite number of steps.
• Definiteness: each instruction must be clear and unambiguous.
• Input: valid input(s), clearly specified.
• Output: single or multiple valid outputs.
• Effectiveness: each step must be sufficiently simple and basic.
RAM model
• has one processor,
• executes one instruction at a time,
• each instruction takes "unit time",
• has fixed-size operands, and
• has fixed-size storage (RAM and disk).
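Counting under this model is mechanical: running time is simply the number of unit-cost instructions executed. A minimal illustrative sketch (not from the slides) that tallies the operations of a summation loop:

#include <stdio.h>

int main(void) {
    int a[] = {3, 1, 4, 1, 5, 9, 2, 6};
    int n = 8;
    long ops = 0;               /* unit-cost instruction counter */
    int sum = 0; ops++;         /* one assignment */
    for (int i = 0; i < n; i++) {
        ops += 2;               /* one comparison + one increment per iteration */
        sum += a[i]; ops++;     /* one addition per iteration */
    }
    ops++;                      /* final failing loop comparison */
    printf("sum = %d, ops = %ld (about 3n + 2)\n", sum, ops);
    return 0;
}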
Types of Algorithms

PROBABILISTIC ALGORITHM

• Chosen values are used in such a way that the probability of choosing each value is known and controlled.
e.g., Randomized Quicksort
Types of Algorithms

HEURISTIC ALGORITHM

This type of algorithm is based largely on optimism, often with minimal theoretical support. The error cannot be controlled, but its magnitude may be estimated.
Types of Algorithms

APPROXIMATE ALGORITHM

• It specifies the error we are willing to accept.

• For example, two-figure accuracy, eight-figure accuracy, or whatever is required.

• The answer obtained is as precise as required in decimal notation.
Which algorithm is better?

The algorithms are correct, but which is the best?

• Measure the running time (number of operations needed).

• Measure the amount of memory used.


Running Time

• Most algorithms transform input objects into output objects.

• The running time of an algorithm typically grows with the input size.

• Average case time is often difficult to determine.

• We focus on the worst-case running time.
  - Easier to analyze.
  - Crucial to applications such as games, finance, and robotics.
Experimental Approach: Studies of Running Time

• Write a program implementing the algorithm.

• Run the program with inputs of varying size and composition.

• Use a method like System.currentTimeMillis() to get an accurate measure of the actual running time.

• Plot the results.

[Figure: scatter plot of running time in ms (0-9000) against input size (0-100).]
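The slide refers to Java's System.currentTimeMillis(); in C, the same experiment can be sketched with clock() from <time.h>. The sorting routine and input sizes below are illustrative assumptions, not taken from the slides:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* illustrative algorithm to time: selection sort, Theta(n^2) */
static void selection_sort(int *a, int n) {
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[min]) min = j;
        int t = a[i]; a[i] = a[min]; a[min] = t;
    }
}

int main(void) {
    for (int n = 1000; n <= 16000; n *= 2) {
        int *a = malloc(n * sizeof *a);
        for (int i = 0; i < n; i++) a[i] = rand();
        clock_t start = clock();
        selection_sort(a, n);
        double ms = 1000.0 * (clock() - start) / CLOCKS_PER_SEC;
        printf("n=%6d  time=%8.2f ms\n", n, ms);  /* these are the points to plot */
        free(a);
    }
    return 0;
}

Doubling n should roughly quadruple the measured time for this quadratic algorithm.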
Limitations of the Experimental Approach

• It is necessary to implement the algorithm, which may be difficult.

• Results may not be indicative of the running time on other inputs not included in the experiment.

• In order to compare two algorithms, the same hardware and software environments must be used.
Theoretical Analysis

• Uses a high-level description of the algorithm instead of an implementation.

• Characterizes running time as a function of the input size, n.

• Takes into account all possible inputs.

• Allows us to evaluate the speed of an algorithm independent of the hardware/software environment.
A problem we all know how to solve:

Integer Multiplication

   13        132        1234
 x 43      x 432      x 4321

For example, 13 x 43: the partial products are 39 (= 13 x 3) and 52 (= 13 x 4, shifted one place), which sum to 559.
A problem we all know how to solve:

Integer Multiplication

  12345        12345...n
x 43215      x 4321...n

• How would you solve this problem?
• How long would it take you?

• About n² one-digit operations:

  12345...n
x 4321...n

• At most n² multiplications,
• then at most n² additions (for carries),
• and then I have to add n different 2n-digit numbers...

• And I take 1 second to multiply two one-digit numbers and 0.6 seconds to add, so...
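A minimal sketch of the grade-school method on digit arrays (the least-significant-digit-first layout is an assumption made here), showing where the roughly n² one-digit operations come from:

#include <stdio.h>

/* Grade-school multiplication; digits stored least-significant first
   (an assumed layout). The nested loops do ~n^2 one-digit operations. */
static void multiply(const int *x, const int *y, int *out, int n) {
    for (int i = 0; i < 2 * n; i++) out[i] = 0;
    for (int i = 0; i < n; i++) {          /* for each digit of y */
        int carry = 0;
        for (int j = 0; j < n; j++) {      /* multiply by each digit of x */
            int t = out[i + j] + x[j] * y[i] + carry;
            out[i + j] = t % 10;
            carry = t / 10;
        }
        out[i + n] += carry;
    }
}

int main(void) {
    int x[] = {3, 1}, y[] = {3, 4}, out[4];   /* 13 and 43, least digit first */
    multiply(x, y, out, 2);
    printf("13 x 43 = ");
    int started = 0;
    for (int i = 3; i >= 0; i--)              /* print, skipping leading zeros */
        if (out[i] || started || i == 0) { printf("%d", out[i]); started = 1; }
    printf("\n");                             /* prints 559 */
    return 0;
}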
DESIGN AND ANALYSIS OF ALGORITHM
Asymptotic Notation

Prepared by
Murari Kumar Singh
Asymptotic Notation
• A mathematical tool that allows us to analyse an algorithm's running time by identifying its behaviour as the input size increases.

• This behaviour is also known as the algorithm's growth rate.

Asymptotic notation gives us the ability to answer questions such as:

• Does the algorithm suddenly become incredibly slow when the input size grows?

• Does it mostly maintain its quick running time as the input size increases?
Types of Asymptotic Notation

These notations describe different rates of growth and the relations between the defining function and the defined set of functions:

O (Big-Oh), Ω (Big-Omega), Θ (Theta), o (little-oh), ω (little-omega)
O-notation (Big-Oh)

• Commonly written as O (Big-Oh).

• An asymptotic notation for the worst case, or ceiling of growth, for a given function.

• It provides an asymptotic upper bound for the growth rate of the runtime of an algorithm.

• Say f(n) is your algorithm's runtime and g(n) is an arbitrary time complexity you are trying to relate to your algorithm.

• f(n) is O(g(n)) if, for some real constants c (c > 0) and n0, f(n) ≤ c·g(n) for every input size n (n > n0).

[Figure: growth rate vs. input n, with f(n) lying below c·g(n) for all n ≥ n0.]

Definition:
f(n) = O(g(n)): there exist positive constants c and n0 such that for all n ≥ n0, we have 0 ≤ f(n) ≤ c·g(n).

• Note that the definition requires a constant c that works for all n; c cannot depend on n.
How to find Big-Oh for a given function

Find the Big-Oh of f(n) = 2n² + 3n + 6.

The definition of Big-Oh requires f(n) ≤ c·g(n); find some positive constants c and n0.

For all n ≥ 6 (so that 6 ≤ n):
  f(n) = 2n² + 3n + 6 ≤ 2n² + 3n + n = 2n² + 4n

For all n ≥ 4 we have 4n ≤ n² (see the table), so for all n ≥ 6:
  f(n) ≤ 2n² + n² = 3n²

Hence f(n) = O(g(n)) with g(n) = n², c = 3, and n0 = 6.

n  : 1  2  3   4   5   6
4n : 4  8  12  16  20  24
n² : 1  4  9   16  25  36
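As a quick sanity check (a minimal sketch, not part of the slides), the inequality can be verified numerically; the derivation guarantees it from n0 = 6, and the printout shows it in fact already holds at n = 5:

#include <stdio.h>

int main(void) {
    /* check 2n^2 + 3n + 6 <= 3n^2, guaranteed by the derivation for n >= 6 */
    for (long n = 1; n <= 10; n++) {
        long f = 2*n*n + 3*n + 6;
        long bound = 3*n*n;
        printf("n=%2ld  f(n)=%4ld  3n^2=%4ld  %s\n",
               n, f, bound, f <= bound ? "holds" : "fails");
    }
    return 0;
}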
Ω, lower bound

f(n) is big-omega of g(n), written f(n) = Ω(g(n)), if for sufficiently large n the function f(n) is bounded from below by a constant multiple of g(n):

f(n) ≥ c·g(n) for all n ≥ n0

[Figure: growth rate vs. input n, with f(n) lying above c·g(n) for all n ≥ n0.]

• Say f(n) is your algorithm's runtime and g(n) is an arbitrary time complexity you are trying to relate to your algorithm.

• f(n) is Ω(g(n)) if, for some real constants c (c > 0) and n0, f(n) ≥ c·g(n) for every input size n (n > n0).

Definition:
f(n) = Ω(g(n)): there exist positive constants c and n0 such that for all n ≥ n0, we have 0 ≤ c·g(n) ≤ f(n).

Relation between O and Ω:
f(n) = Ω(g(n)) if and only if g(n) = O(f(n)).
How to find Big-Omega for a given function

Find the Big-Omega (Ω) of f(n) = 2n² + 3n + 6.

The definition of Big-Omega requires f(n) ≥ c·g(n); find some positive constants c and n0.

For all n ≥ 6:
  f(n) = 2n² + 3n + 6 ≥ 2n² + 3n ≥ 2n²

Hence f(n) = Ω(g(n)) with g(n) = n², c = 2, and n0 = 6.
Θ, asymptotically tight bound

f(n) is theta of g(n), written f(n) = Θ(g(n)), if for sufficiently large n the function f(n) is bounded from both above and below by constant multiples of g(n):

0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 and positive constants c1, c2

[Figure: growth rate vs. input n, with f(n) sandwiched between c1·g(n) and c2·g(n) for all n ≥ n0.]

f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
Little-o(o) Notation
• Let f(n) and g(n) be functions that map positive integers to positive real numbers.

• We say that f(n) is o(g(n)) (or f(n) ∈ o(g(n))) if for any real constant c > 0, there
exists an integer constant n0 ≥ 1 such that f(n) < c ∗ g(n) for every integer n ≥ n0.
Little–Omega, ω()

Little–Omega, ω(): Let f(n) and g(n) be functions that map positive integers to positive real
numbers.

We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n))) if for any real constant c > 0, there exists
an integer constant n0 ≥ 1 such that f(n) > c · g(n) for every integer n ≥ n0.
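To make the universal quantifier ("for any real constant c > 0") concrete, here is a small illustrative check (not from the slides) that f(n) = n is o(n²): however small c gets, some n0 makes n < c·n² from then on:

#include <stdio.h>

int main(void) {
    double cs[] = {1.0, 0.1, 0.01, 0.001};   /* ever-smaller constants c */
    for (int i = 0; i < 4; i++) {
        double c = cs[i];
        long n0 = 1;
        /* find the first n0 with n0 < c * n0^2; since c*n^2 - n is
           increasing afterwards, the inequality then holds for all n >= n0 */
        while (!((double)n0 < c * (double)n0 * (double)n0)) n0++;
        printf("c = %.3f  ->  n < c*n^2 for all n >= %ld\n", c, n0);
    }
    return 0;
}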
Visualize the relationships between these notations:

[Figure: growth rate vs. input n, with f(n) in the middle; functions in o(f(n)) grow strictly slower, O(f(n)) no faster, Ω(f(n)) no slower, and ω(f(n)) strictly faster than f(n).]
Relationship between asymptotic notations

[Figure: Venn diagram of O (Big-Oh), Ω (Big-Omega), Θ, o (little-oh), and ω (little-omega).]

o (little-oh) ∪ Θ = O (Big-Oh)
ω (little-omega) ∪ Θ = Ω (Big-Omega)
O (Big-Oh) ∩ Ω (Big-Omega) = Θ
Asymptotic Growth

Linear loop:

for (i = 1; i < n; i++)        // runs n-1 times
{
    printf("Hello");
}
T(n) = Θ(n)

Quadratic loop:

for (i = 1; i < n; i++)        // outer loop: ~n times
{
    for (j = 1; j < n; j++)    // inner loop: ~n times
    {
        printf("Hello");
    }
}
T(n) = (outer iterations) x (inner iterations) = Θ(n²)

Dependent quadratic loop:

for (i = 1; i < n; i++)
{
    for (j = i; j < n; j++)    // (n-1) + (n-2) + ... + 1 ≈ n²/2 iterations in total
    {
        printf("Hello");
    }
}
T(n) = O(n²)

Logarithmic loops:

for (i = 1; i < n; i = i * 2)  // runs ~log₂(n) times
{
    printf("Hello");
}
T(n) = Θ(log n)

for (i = n; i > 1; i = i / 2)  // runs ~log₂(n) times
{
    printf("Hello");
}
T(n) = Θ(log n)

Linearithmic loop:

for (i = 1; i < n; i++)            // ~n times
{
    for (j = 1; j < n; j = j * 2)  // ~log₂(n) times
    {
        printf("Hello");
    }
}
T(n) = Θ(n log n)

Triple nesting:

for (i = n; i > 1; i--)                // ~n times
{
    for (j = 1; j < n; j++)            // ~n times
    {
        for (k = n; k > 1; k = k / 2)  // ~log₂(n) times
        {
            printf("Hello");
        }
    }
}
T(n) = Θ(n² log n)

for (j = 1; j < n; j++)            // ~n times
{
    for (k = n; k > 1; k = k / 2)  // ~log₂(n) times
    {
        printf("Hello");
    }
}
T(n) = Θ(n log n)
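To check these counts empirically (an illustrative sketch, not from the slides), replace each printf with a counter and compare the totals against the predicted growth rates:

#include <stdio.h>

int main(void) {
    for (long n = 1024; n <= 8192; n *= 2) {
        long lin = 0, quad = 0, lg = 0, nlg = 0;
        for (long i = 1; i < n; i++) lin++;              /* Theta(n) */
        for (long i = 1; i < n; i++)
            for (long j = 1; j < n; j++) quad++;         /* Theta(n^2) */
        for (long i = 1; i < n; i *= 2) lg++;            /* Theta(log n) */
        for (long i = 1; i < n; i++)
            for (long j = 1; j < n; j *= 2) nlg++;       /* Theta(n log n) */
        printf("n=%5ld  linear=%6ld  quadratic=%9ld  log=%3ld  nlog=%7ld\n",
               n, lin, quad, lg, nlg);
    }
    return 0;
}

Doubling n should roughly double the linear count, quadruple the quadratic count, add 1 to the logarithmic count, and slightly more than double the n log n count.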
Asymptotic Growth

log N < N^(1/2) < N < N² < N³ < N⁴ < ... < N^k < ... < N^N

For example, log N < N^(1/2) < N:

N      N^(1/2)   log₂ N
1      1         0
2      1.41      1
3      1.73      1.58
4      2         2
16     4         4
64     8         6
1024   32        10
2048   45.25     11
Arrange the following in (asymptotically) increasing order of growth rate:

n, n log(n), n^(1/2), n², n² log(n), log(n)
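As an empirical aid for this exercise (a sketch, not part of the slides), printing all six functions at a few growing values of n makes the ordering apparent (compile with -lm):

#include <stdio.h>
#include <math.h>

int main(void) {
    printf("%8s %10s %10s %10s %12s %14s %14s\n",
           "n", "log n", "sqrt n", "n", "n log n", "n^2", "n^2 log n");
    for (double n = 16; n <= 4096; n *= 4) {
        double lg = log2(n);
        printf("%8.0f %10.2f %10.2f %10.0f %12.0f %14.0f %14.0f\n",
               n, lg, sqrt(n), n, n * lg, n * n, n * n * lg);
    }
    return 0;
}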


DESIGN AND ANALYSIS OF ALGORITHM
Solving Recurrences: Substitution, Iteration, Master Method

Prepared by
Murari Kumar Singh

Solving Recurrences: Substitution, Iteration, Master Method
• Many algorithms, particularly divide and conquer algorithms, have time complexities that are naturally modelled by recurrence relations.

• A recurrence relation is an equation which is defined in terms of itself.
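For instance (an illustrative sketch; the slides do not include code here), a divide and conquer routine shaped like the one below has cost T(n) = 2T(n/2) + Θ(n): two recursive calls on halves plus a linear-time combine step:

#include <stdio.h>

static long combine_work = 0;

/* Schematic divide and conquer with cost T(n) = 2T(n/2) + Theta(n) */
static void solve(int lo, int hi) {           /* n = hi - lo */
    if (hi - lo <= 1) return;                 /* T(1) = Theta(1) */
    int mid = lo + (hi - lo) / 2;
    solve(lo, mid);                           /* T(n/2) */
    solve(mid, hi);                           /* T(n/2) */
    for (int i = lo; i < hi; i++)             /* Theta(n) combine (e.g., a merge) */
        combine_work++;
}

int main(void) {
    solve(0, 1024);
    printf("combine work for n=1024: %ld (= n log2 n = 10240)\n", combine_work);
    return 0;
}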

Why are recurrences good things?

• Many natural functions are easily expressed as recurrences.

• Solving recurrences is a skill, like solving integrals, differential equations, etc.

Substitution method
The most general method:

Guess a solution and prove by induction

1. Guess the form of the solution.

2. Verify by induction.

3. Solve for constants.


Substitution method
Example: T(n) = 4T(n/2) + n
• [Assume that T(1) = Θ(1).]
• Guess O(n³). (Prove O and Ω separately.)
• Assume that T(k) ≤ ck³ for k < n.
• Prove T(n) ≤ cn³ by induction.

Example of substitution
• Assume that T(k) ≤ ck³ for k < n.
  T(n) = 4T(n/2) + n
       ≤ 4c(n/2)³ + n
       = (c/2)n³ + n
       = cn³ − ((c/2)n³ − n)    [desired − residual]
       ≤ cn³
whenever the residual (c/2)n³ − n ≥ 0, for example, if c ≥ 2 and n ≥ 1.
Example (continued)
• We must also handle the initial conditions, that is, ground the induction with base cases.
• Base: T(n) = Θ(1) for all n < n0, where n0 is a suitable constant.
• For 1 ≤ n < n0, we have "Θ(1)" ≤ cn³, if we pick c big enough.

This bound is not tight!

A tighter upper bound?
We shall prove that T(n) = O(n²).
Assume that T(k) ≤ ck² for k < n:
  T(n) = 4T(n/2) + n
       ≤ 4c(n/2)² + n
       = cn² + n
       = O(n²)    ← Wrong! We must prove exactly T(n) ≤ cn².
       = cn² − (−n)    [desired − residual]
       ≤ cn²
for no choice of c > 0, since the residual −n is negative. Lose!
(The standard fix is to strengthen the inductive hypothesis, e.g., assume T(k) ≤ c1·k² − c2·k; the subtracted lower-order term absorbs the +n and the induction goes through.)
Recursion-tree method
• A recursion tree models the costs (time) of a recursive execution of an algorithm.

• The recursion tree method is good for generating guesses for the substitution method.

• The recursion-tree method can be unreliable, just like any method that uses
ellipses (…).
Example of recursion tree
Solve T(n) = T(n/4) + T(n/2) + n²:

Expanding level by level:
  level 0:  n²
  level 1:  (n/4)² + (n/2)²                     = (5/16) n²
  level 2:  (n/16)² + (n/8)² + (n/8)² + (n/4)²  = (25/256) n²
  ...
  leaves:   Θ(1)

Each level's total is 5/16 times the previous level's, so the level sums form a geometric series:

T(n) = n² (1 + 5/16 + (5/16)² + ...) ≤ n² · 1/(1 − 5/16) = (16/11) n²

Hence T(n) = Θ(n²); the root's n² term already supplies the matching lower bound.
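To corroborate the Θ(n²) bound numerically (a sketch; the base case T(n) = 1 for n < 4 is an arbitrary assumption, since the slides leave it unspecified), evaluate the recurrence directly and watch T(n)/n² settle near the geometric-series constant 16/11 ≈ 1.45:

#include <stdio.h>

/* Evaluate T(n) = T(n/4) + T(n/2) + n^2 with integer division,
   assuming a base case T(n) = 1 for n < 4 (an arbitrary choice). */
static double T(long n) {
    if (n < 4) return 1.0;
    return T(n / 4) + T(n / 2) + (double)n * n;
}

int main(void) {
    for (long n = 16; n <= 1L << 20; n *= 4)
        printf("n=%8ld  T(n)/n^2 = %.4f\n", n, T(n) / ((double)n * n));
    return 0;
}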
Conclusion

• Next time: applying the master method.


• For a proof of the master theorem, see the textbook.
DESIGN AND ANALYSIS OF ALGORITHM
Solving recurrences by the master method

Prepared by
Murari Kumar Singh

The master method

The master method applies to recurrences of the form

T(n) = a T(n/b) + f(n),

where a ≥ 1, b > 1, and f is asymptotically positive.
Three common cases
Compare f(n) with n^(log_b a):

1. f(n) = O(n^(log_b a − ε)) for some constant ε > 0.
   • f(n) grows polynomially slower than n^(log_b a) (by an n^ε factor).
   Solution: T(n) = Θ(n^(log_b a)).

2. f(n) = Θ(n^(log_b a) lg^k n) for some constant k ≥ 0.
   • f(n) and n^(log_b a) grow at similar rates.
   Solution: T(n) = Θ(n^(log_b a) lg^(k+1) n).

3. f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0.
   • f(n) grows polynomially faster than n^(log_b a) (by an n^ε factor), and f(n) satisfies the regularity condition that a·f(n/b) ≤ c·f(n) for some constant c < 1.
   Solution: T(n) = Θ(f(n)).
Examples

Ex. T(n) = 4T(n/2) + n
a = 4, b = 2 ⇒ n^(log_b a) = n²; f(n) = n.
CASE 1: f(n) = O(n^(2−ε)) for ε = 1.
∴ T(n) = Θ(n²).

Ex. T(n) = 4T(n/2) + n²
a = 4, b = 2 ⇒ n^(log_b a) = n²; f(n) = n².
CASE 2: f(n) = Θ(n² lg⁰ n), that is, k = 0.
∴ T(n) = Θ(n² lg n).

Ex. T(n) = 4T(n/2) + n³
a = 4, b = 2 ⇒ n^(log_b a) = n²; f(n) = n³.
CASE 3: f(n) = Ω(n^(2+ε)) for ε = 1, and the regularity condition holds: a·f(n/b) = 4(n/2)³ = n³/2 ≤ c·n³ for c = 1/2.
∴ T(n) = Θ(n³).

Ex. T(n) = 4T(n/2) + n²/lg n
a = 4, b = 2 ⇒ n^(log_b a) = n²; f(n) = n²/lg n.
The master method does not apply. In particular, for every constant ε > 0, we have n^ε = ω(lg n), so n²/lg n fits neither Case 1 nor Case 2 (it is not Θ(n² lg^k n) for any k ≥ 0).
General method (Akra-Bazzi)

T(n) = Σ_{i=1}^{k} a_i · T(n/b_i) + f(n)

Let p be the unique solution to

Σ_{i=1}^{k} a_i / (b_i)^p = 1.

Then the answers are the same as for the master method, but with n^p instead of n^(log_b a). (Akra and Bazzi also prove an even more general result.)
Idea of master theorem
Recursion tree:

  level 0:  one problem                     cost f(n)
  level 1:  a subproblems                   cost a·f(n/b)
  level 2:  a² subproblems                  cost a²·f(n/b²)
  ...
  height h = log_b n
  leaves:   a^h = a^(log_b n) = n^(log_b a) leaves, each costing Θ(1), for a total of Θ(n^(log_b a))

CASE 1: The weight increases geometrically from the root to the leaves. The leaves hold a constant fraction of the total weight. T(n) = Θ(n^(log_b a)).

CASE 2 (k = 0): The weight is approximately the same on each of the log_b n levels. T(n) = Θ(n^(log_b a) lg n).

CASE 3: The weight decreases geometrically from the root to the leaves. The root holds a constant fraction of the total weight. T(n) = Θ(f(n)).
Let's revise
The Master Theorem
• Given: a divide and conquer algorithm
• An algorithm that divides the problem of size n into a subproblems, each of size n/b
• Let the cost of each stage (i.e., the work to divide the problem + combine solved subproblems)
be described by the function f(n)
• Then, the Master Theorem gives us a cookbook for the algorithm’s running time:
The Master Theorem
• If T(n) = a·T(n/b) + f(n), then

  T(n) = Θ(n^(log_b a))          if f(n) = O(n^(log_b a − ε)) for some ε > 0

  T(n) = Θ(n^(log_b a) · log n)  if f(n) = Θ(n^(log_b a))

  T(n) = Θ(f(n))                 if f(n) = Ω(n^(log_b a + ε)) for some ε > 0,
                                 AND a·f(n/b) ≤ c·f(n) for some c < 1 and all large n
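As an illustration of this cookbook (a hedged sketch: the helper below is hypothetical and assumes driving functions of the special form f(n) = n^d · (lg n)^k, which covers the examples in these slides but not the general theorem):

#include <stdio.h>
#include <math.h>

/* Classify T(n) = a T(n/b) + n^d (lg n)^k by the master theorem,
   assuming a >= 1, b > 1, and f(n) = n^d (lg n)^k (a simplification). */
static void master(double a, double b, double d, double k) {
    double e = log(a) / log(b);            /* exponent log_b a */
    double eps = 1e-9;                     /* tolerance for comparing d with e */
    printf("T(n) = %.0f T(n/%.0f) + n^%g lg^%g n:  ", a, b, d, k);
    if (d < e - eps)                       /* Case 1: f polynomially smaller */
        printf("Theta(n^%.3f)\n", e);
    else if (fabs(d - e) < eps && k >= 0)  /* Case 2 (extended, lg^k factor) */
        printf("Theta(n^%.3f lg^%g n)\n", e, k + 1);
    else if (d > e + eps)                  /* Case 3: regularity holds for such f */
        printf("Theta(n^%g lg^%g n)\n", d, k);
    else
        printf("the master theorem does not apply\n");
}

int main(void) {
    master(9, 3, 1, 0);    /* Theta(n^2) */
    master(4, 2, 1, 0);    /* Theta(n^2) */
    master(4, 2, 2, 0);    /* Theta(n^2 lg n) */
    master(4, 2, 3, 0);    /* Theta(n^3), lg^0 n = 1 */
    master(2, 2, 1, 1);    /* extended Case 2: Theta(n lg^2 n) */
    master(4, 2, 2, -1);   /* n^2 / lg n: no case applies */
    return 0;
}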
Using the Master Method

• T(n) = 9T(n/3) + n
• a = 9, b = 3, f(n) = n
• n^(log_b a) = n^(log_3 9) = Θ(n²)
• Since f(n) = O(n^(log_3 9 − ε)) with ε = 1, Case 1 applies:
  T(n) = Θ(n^(log_b a)) when f(n) = O(n^(log_b a − ε))
• Thus the solution is T(n) = Θ(n²).
More Examples of Master’s Theorem
• T(n) = 3T(n/5) + n
• T(n) = 2T(n/2) + n
• T(n) = 2T(n/2) + 1
• T(n) = T(n/2) + n
• T(n) = T(n/2) + 1
When the Master Theorem cannot be applied

• T(n) = 2T(n/2) + n log n: here f(n) = n log n exceeds n^(log_2 2) = n only by a log factor, not polynomially, so Cases 1 and 3 fail. (The extended Case 2 stated earlier does cover it with k = 1, giving Θ(n log² n); the basic three-case theorem does not.)

• T(n) = 2T(n/2) + n/log n: here f(n) = n^(log_2 2) · log^(−1) n, i.e. k = −1 < 0, which no case covers.
THANK YOU
