DAA Unit 1

The document provides an overview of algorithm analysis, including asymptotic notation (Big-Oh, Omega, Theta), recursive algorithm analysis, and specific techniques like the Master Theorem and loop invariants. It discusses characteristics and expectations of algorithms, such as correctness and resource efficiency, and details the analysis of insertion sort and merge sort algorithms. Additionally, it covers the principles of algorithm design, including divide-and-conquer strategies and the significance of time complexity in evaluating algorithm performance.


Contents

1. Use of asymptotic notation:
   a) Big-Oh
   b) Omega
   c) Theta
   d) Little-Oh
   e) Little-Omega
2. Analyzing Recursive Algorithms:
   a) Recurrence relations
   b) Specifying runtime of recursive algorithms
   c) Solving recurrence equations
   d) Master Theorem
3. Case Study: Analysing Algorithms
Algorithms and Their Specification

• An algorithm is a step-by-step procedure for performing some task in a finite amount of time.
• A data structure is a systematic way of organizing and accessing data.
• These concepts are central to computing, but to be able to classify some algorithms and data structures as "good," we must have precise ways of analyzing them.
Characteristics of an algorithm:
• Must take an input.
• Must give some output (yes/no, a value, etc.).
• Definiteness: each instruction is clear and unambiguous.
• Finiteness: the algorithm terminates after a finite number of steps.
• Effectiveness: every instruction must be basic, i.e., a simple instruction.
Expectations from an algorithm
• Correctness: an algorithm must produce the correct result. An approximation algorithm does not find the exact solution, but finds a near-optimal one (applied to optimization problems).
• Low resource usage: an algorithm should use as few resources (time and space) as possible.
To analyse an algorithm
• One option: code and execute it, and measure the actual time.
• What does the total time depend upon?
  – The algorithm itself
  – The number of inputs
  – The count of primitive operations: assignment, function call, control transfer, arithmetic, etc.
• The solution to all of these issues is asymptotic analysis of algorithms.
Analyzing pseudo-code (by counting)
1. For each line of pseudo-code, count the number of
primitive operations in it.
Pay attention to the word "primitive" here; sorting an array is
not a primitive operation.
2. Multiply this count by the number of times the line
is executed.
3. Sum up over all lines.
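For instance, here is a minimal sketch (my own illustration, not from the slides) of this counting discipline applied to a running-sum loop in Python:

```python
def running_sum(A):
    """Sum an array while counting primitive operations line by line."""
    ops = 0
    total = 0          # 1 assignment
    ops += 1
    for x in A:        # ~1 loop-control step per element (ignoring the final test)
        ops += 1
        total += x     # 1 addition + 1 assignment per iteration
        ops += 2
    return total, ops  # total cost: 3n + 1 primitive operations

values = [4, 1, 7]
print(running_sum(values))  # (12, 10) -> 3*3 + 1 = 10 operations
```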

Proving Loop Invariants
• Proving loop invariants works like induction
• Initialization (base case):
– It is true prior to the first iteration of the loop
• Maintenance (inductive step):
– If it is true before an iteration of the loop, it remains true before
the next iteration
• Termination:
– When the loop terminates, the invariant gives us a useful
property that helps show that the algorithm is correct
– Stop the induction when the loop terminates

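As an illustration (the assertion-based checker is my addition, not from the slides), insertion sort's invariant — "the prefix A[0..j-1] is sorted before each iteration" — can be spot-checked at runtime:

```python
def insertion_sort_checked(A):
    """Insertion sort that asserts its loop invariant on every iteration."""
    for j in range(1, len(A)):
        # Invariant: the prefix A[0..j-1] is already sorted.
        assert all(A[i] <= A[i + 1] for i in range(j - 1)), "invariant violated"
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key
    # Termination: j == len(A), so the invariant says the whole array is sorted.
    return A

print(insertion_sort_checked([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```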
Analysis of Insertion Sort
INSERTION-SORT(A)                                          cost   times
for j ← 2 to n                                             c1     n
    do key ← A[j]                                          c2     n-1
       ▷ Insert A[j] into the sorted sequence A[1..j-1]    0      n-1
       i ← j-1                                             c4     n-1
       while i > 0 and A[i] > key                          c5     Σ_{j=2..n} t_j
           do A[i+1] ← A[i]                                c6     Σ_{j=2..n} (t_j - 1)
              i ← i-1                                      c7     Σ_{j=2..n} (t_j - 1)
       A[i+1] ← key                                        c8     n-1

t_j: the number of times the while-loop test is executed at iteration j

T(n) = c1·n + c2(n-1) + c4(n-1) + c5 Σ_{j=2..n} t_j + c6 Σ_{j=2..n} (t_j - 1) + c7 Σ_{j=2..n} (t_j - 1) + c8(n-1)
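To make t_j concrete, the following sketch (my own instrumentation, using 0-based Python indexing) records how many times the while-test runs for each j:

```python
def insertion_sort_tj(A):
    """Run insertion sort and record t_j, the while-test count per outer iteration."""
    t = {}
    for j in range(1, len(A)):           # the pseudocode's j = 2..n
        key = A[j]
        i = j - 1
        tests = 1                        # the test that finally fails also counts
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
            tests += 1
        A[i + 1] = key
        t[j + 1] = tests                 # report with the slides' 1-based j
    return A, t

# Sorted input: every t_j is 1 (best case). Reversed input: t_j = j (worst case).
print(insertion_sort_tj([1, 2, 3, 4])[1])  # {2: 1, 3: 1, 4: 1}
print(insertion_sort_tj([4, 3, 2, 1])[1])  # {2: 2, 3: 3, 4: 4}
```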
Best Case Analysis
• The array is already sorted: in "while i > 0 and A[i] > key",
  – A[i] ≤ key the first time the while-loop test is run (when i = j-1)
  – so t_j = 1 for every j
• T(n) = c1·n + c2(n-1) + c4(n-1) + c5(n-1) + c8(n-1)
       = (c1 + c2 + c4 + c5 + c8)n - (c2 + c4 + c5 + c8)
       = an + b = Θ(n), a linear function of n
Worst Case Analysis
• The array is in reverse sorted order: in "while i > 0 and A[i] > key",
  – A[i] > key is always true in the while-loop test
  – key must be compared with all elements to the left of the j-th position ⇒ compare with j-1 elements ⇒ t_j = j
• Using Σ_{j=1..n} j = n(n+1)/2, we get Σ_{j=2..n} j = n(n+1)/2 - 1 and Σ_{j=2..n} (j-1) = n(n-1)/2, so we have:

T(n) = c1·n + c2(n-1) + c4(n-1) + c5(n(n+1)/2 - 1) + c6·n(n-1)/2 + c7·n(n-1)/2 + c8(n-1)
     = an² + bn + c, a quadratic function of n

• T(n) = Θ(n²): the order of growth is n²
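A quick empirical check of the two cases (my own sketch, not from the slides): counting element shifts on sorted versus reversed inputs exposes the Θ(n) vs Θ(n²) gap.

```python
def count_shifts(A):
    """Return the number of element shifts insertion sort performs on A."""
    shifts = 0
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
            shifts += 1
        A[i + 1] = key
    return shifts

n = 1000
print(count_shifts(list(range(n))))         # 0 shifts: best case, Theta(n) overall
print(count_shifts(list(range(n, 0, -1))))  # n(n-1)/2 = 499500 shifts: worst case
```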
Comparisons and Exchanges in Insertion Sort
INSERTION-SORT(A)                                          cost   times
for j ← 2 to n                                             c1     n
    do key ← A[j]                                          c2     n-1
       ▷ Insert A[j] into the sorted sequence A[1..j-1]    0      n-1
       i ← j-1                                             c4     n-1
       while i > 0 and A[i] > key   ≈ n²/2 comparisons     c5     Σ_{j=2..n} t_j
           do A[i+1] ← A[i]                                c6     Σ_{j=2..n} (t_j - 1)
              i ← i-1               ≈ n²/2 exchanges       c7     Σ_{j=2..n} (t_j - 1)
       A[i+1] ← key                                        c8     n-1
Time complexity analysis: some general rules



Order of Growth

• Simplifying abstraction: we are interested in the rate of growth, or order of growth, of the running time of the algorithm.
• Allows us to compare algorithms without worrying about implementation performance.
• Usually only the highest-order term is taken, without its constant coefficient.
• Uses "theta" notation:
  – Best case of insertion sort is Θ(n)
  – Worst case of insertion sort is Θ(n²)
Designing Algorithms

• Several techniques/patterns for designing algorithms exist.
• Incremental approach: builds the solution one component at a time.
• Divide-and-conquer approach: breaks the original problem into several smaller instances of the same problem.
  – Results in recursive algorithms
  – Easy to analyze complexity using proven techniques
Divide-and-Conquer

• The technique (or paradigm) involves three stages:
  – "Divide" stage: express the problem in terms of several smaller subproblems.
  – "Conquer" stage: solve the smaller subproblems by applying the solution recursively; the smallest subproblems may be solved directly.
  – "Combine" stage: construct the solution to the original problem from the solutions of the smaller subproblems.
Merge Sort Strategy

• Divide stage: split the n-element (unsorted) sequence into two subsequences of n/2 elements each.
• Conquer stage: recursively sort the two subsequences with MERGE SORT.
• Combine stage: MERGE the two sorted subsequences of n/2 elements into one sorted sequence of n elements (the solution).

[Figure: the unsorted sequence of size n is split into two unsorted halves of size n/2, each half is sorted by MERGE SORT, and the sorted halves are combined by MERGE into a sorted sequence of size n.]


Merging Sorted Sequences

• MERGE(A, p, q, r) combines the sorted subarrays A[p..q] and A[q+1..r] into one sorted array A[p..r].
• Makes use of two working arrays L and R, which initially hold copies of the two subarrays — Θ(n) copying, plus Θ(1) setup.
• Makes use of a sentinel value (∞) as the last element of each working array to simplify the loop logic — Θ(1).
• The main loop then fills each position of A[p..r] exactly once — Θ(n) in total.
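A runnable sketch of this sentinel-based merge, translated to 0-based Python (float('inf') plays the role of the ∞ sentinel):

```python
def merge(A, p, q, r):
    """Merge sorted A[p..q] and A[q+1..r] (inclusive, 0-based) in place."""
    L = A[p:q + 1] + [float('inf')]      # copy left run, append sentinel  -- Theta(n)
    R = A[q + 1:r + 1] + [float('inf')]  # copy right run, append sentinel
    i = j = 0
    for k in range(p, r + 1):            # fill each slot of A[p..r] once  -- Theta(n)
        if L[i] <= R[j]:                 # sentinels keep both indexes in range
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

A = [2, 4, 5, 7, 1, 2, 3, 6]
merge(A, 0, 3, 7)
print(A)  # [1, 2, 2, 3, 4, 5, 6, 7]
```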
Merge Sort Algorithm

MERGE-SORT(A, p, r)                          cost
    if p < r                                 Θ(1)
        then q ← ⌊(p + r)/2⌋                 Θ(1)
             MERGE-SORT(A, p, q)             T(n/2)
             MERGE-SORT(A, q + 1, r)         T(n/2)
             MERGE(A, p, q, r)               Θ(n)

T(n) = 2T(n/2) + Θ(n)
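A matching top-level recursion in Python (a sketch that reuses the merge function above):

```python
def merge_sort(A, p=0, r=None):
    """Sort A[p..r] in place; runtime satisfies T(n) = 2T(n/2) + Theta(n)."""
    if r is None:
        r = len(A) - 1
    if p < r:                      # Theta(1) test
        q = (p + r) // 2           # Theta(1) split
        merge_sort(A, p, q)        # T(n/2)
        merge_sort(A, q + 1, r)    # T(n/2)
        merge(A, p, q, r)          # Theta(n) combine

A = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(A)
print(A)  # [1, 2, 2, 3, 4, 5, 6, 7]
```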

Analysis of Merge Sort

[Figure: the recursive calls expand into a recursion tree; each of the lg n + 1 levels contributes cost cn.]

T(n) = cn(lg n + 1)
     = cn lg n + cn

T(n) is Θ(n lg n).
Asymptotic Notation

a) Big-Oh
b) Omega
c) Theta
d) Little-Oh, and
e) Little-Omega Notation
Symbol — Meaning

• ∃ (there exists): ∃x : P(x) means there is at least one x such that P(x) is true.
• ∀ (for all; for any; for each): ∀x : P(x) means P(x) is true for all x.
• ⊃ : superset
• ⊂ : subset
Asymptotic Complexity
The running time of an algorithm as a function of input size n, for large n.
Expressed using only the highest-order term in the expression for the exact running time.
– Instead of the exact running time, we say Θ(n²).
Describes the behavior of a function in the limit.
Written using asymptotic notation.
Asymptotic Notation
Θ, O, Ω, o, ω
Defined for functions over the natural numbers.
– Ex: f(n) = Θ(n²).
– Describes how f(n) grows in comparison to n².
Each notation defines a set of functions; in practice it is used to compare the sizes of two functions.
The notations describe different rate-of-growth relations between the defining function and the defined set of functions.
Θ-notation
For a given function g(n), we define Θ(g(n)), big-Theta of g of n, as the set:
Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that for all n ≥ n0, 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) }
Intuitively: the set of all functions that have the same rate of growth as g(n).
g(n) is an asymptotically tight bound for f(n).
Example
Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that for all n ≥ n0, 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) }
n²/2 - 2n = Θ(n²)
What constants for n0, c1, and c2 will work?
Make c1 a little smaller than the leading coefficient, and c2 a little bigger.
To compare orders of growth, look at the leading term.
Exercise: prove that 2n² = Θ(n²).
• n²/2 - 2n = Θ(n²): c1 = 1/4, c2 = 1/2, and n0 = 8 work.
• 2n² = Θ(n²): c1 = 1, c2 = 3 (or c1 = c2 = 2) and n0 = 1 work.
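A quick numeric check of these constants (a sketch of my own, not from the slides):

```python
# Verify c1*g(n) <= f(n) <= c2*g(n) for f(n) = n^2/2 - 2n, g(n) = n^2,
# with c1 = 1/4, c2 = 1/2, n0 = 8.
for n in range(8, 10_000):
    f, g = n * n / 2 - 2 * n, n * n
    assert 0 <= g / 4 <= f <= g / 2, n
print("bounds hold for all tested n >= 8")
```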
O-notation
For a given function g(n), we define O(g(n)), big-O of g of n, as the set:
O(g(n)) = { f(n) : there exist positive constants c and n0 such that for all n ≥ n0, 0 ≤ f(n) ≤ c·g(n) }
Intuitively: the set of all functions whose rate of growth is the same as or lower than that of g(n).
g(n) is an asymptotic upper bound for f(n).
f(n) = Θ(g(n)) ⇒ f(n) = O(g(n)).
Θ(g(n)) ⊂ O(g(n)).
• Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n0 such that f(n) ≤ c·g(n) for n ≥ n0.
• Example: 2n + 10 is O(n)
  – 2n + 10 ≤ cn
  – (c - 2)·n ≥ 10
  – n ≥ 10/(c - 2)
  – Pick c = 3 and n0 = 10
More Big-Oh Examples
• 7n - 2 is O(n):
  need c > 0 and n0 ≥ 1 such that 7n - 2 ≤ c·n for n ≥ n0;
  this is true for c = 7 and n0 = 1.
• 3n³ + 20n² + 5 is O(n³):
  need c > 0 and n0 ≥ 1 such that 3n³ + 20n² + 5 ≤ c·n³ for n ≥ n0;
  this is true for c = 4 and n0 = 21.
• 3 log n + 5 is O(log n):
  need c > 0 and n0 ≥ 1 such that 3 log n + 5 ≤ c·log n for n ≥ n0;
  this is true for c = 8 and n0 = 2.
Big-Oh Rules (shortcuts)
If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.,
1. Drop lower-order terms.
2. Drop constant factors.
Use the smallest possible class of functions:
– Say "2n is O(n)" instead of "2n is O(n²)".
Use the simplest expression of the class:
– Say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)".
Examples
O(g(n)) = { f(n) : there exist positive constants c and n0 such that for all n ≥ n0, 0 ≤ f(n) ≤ c·g(n) }
• 2n² = O(n³): c = 1 and n0 = 2 work.
• 2n² = O(n²): c = 2 and n0 = 1 work.
Ω-notation
For a given function g(n), we define Ω(g(n)), big-Omega of g of n, as the set:
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that for all n ≥ n0, 0 ≤ c·g(n) ≤ f(n) }
Intuitively: the set of all functions whose rate of growth is the same as or higher than that of g(n).
g(n) is an asymptotic lower bound for f(n).
f(n) = Θ(g(n)) ⇒ f(n) = Ω(g(n)).
Θ(g(n)) ⊂ Ω(g(n)).
Example
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that for all n ≥ n0, 0 ≤ c·g(n) ≤ f(n) }
√n = Ω(lg n). Choose c and n0: c = 1 and n0 = 16 work.
o-notation
For a given function g(n), the set little-o:
o(g(n)) = { f(n) : for every constant c > 0, there exists a constant n0 > 0 such that for all n ≥ n0, 0 ≤ f(n) < c·g(n) }
f(n) becomes insignificant relative to g(n) as n approaches infinity:
lim (n→∞) [f(n)/g(n)] = 0
g(n) is an upper bound for f(n) that is not asymptotically tight.
Observe the difference between this definition and the previous ones: here the inequality must hold for every positive constant c, not just for some c.
Ex:
• 2n = o(n²), but
• 2n² ≠ o(n²)
ω-notation
For a given function g(n), the set little-omega:
ω(g(n)) = { f(n) : for every constant c > 0, there exists a constant n0 > 0 such that for all n ≥ n0, 0 ≤ c·g(n) < f(n) }
f(n) becomes arbitrarily large relative to g(n) as n approaches infinity:
lim (n→∞) [f(n)/g(n)] = ∞
g(n) is a lower bound for f(n) that is not asymptotically tight.
Ex:
• n²/2 = ω(n), but
• n²/2 ≠ ω(n²)
Recursive Algorithms & Recurrence Relations

1. Analyzing Recursive Algorithms:
   a) Recurrence relations
   b) Specifying runtime of recursive algorithms
   c) Solving recurrence equations
   d) Master Theorem
2. Case Study: Analyzing Algorithms
Analysing Recursive Algorithms

• Iteration is not the only interesting way of solving a problem. Another useful technique, employed by many algorithms, is to use recursion.
• In this technique, we define a procedure P that is allowed to make calls to itself as a subroutine, provided those calls to P are for solving subproblems of smaller size.
• The subroutine calls to P on smaller instances are called "recursive calls."
• A recursive procedure should always define a base case, which is small enough that the algorithm can solve it directly without using recursion.
• Analyzing the running time of a recursive algorithm takes a bit of additional work, however.
• In particular, to analyze such a running time, we use a recurrence equation, which defines mathematical statements that the running time of a recursive algorithm must satisfy.
• We introduce a function T(n) that denotes the running time of the algorithm on an input of size n, and we write equations that T(n) must satisfy.
• For example:
  T(n) = 1           if n = 1
  T(n) = T(n-1) + 1  if n > 1
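As an illustrative sketch (my example, not from the slides), a recursive maximum-finding routine has exactly this running time: one unit of work plus a recursive call on an input smaller by one.

```python
def recursive_max(A, n):
    """Maximum of A[0..n-1]; running time satisfies T(n) = T(n-1) + 1, T(1) = 1."""
    if n == 1:                       # base case: solved directly
        return A[0]
    best = recursive_max(A, n - 1)   # recursive call on a smaller subproblem
    return A[n - 1] if A[n - 1] > best else best  # constant extra work

A = [3, 9, 4, 7]
print(recursive_max(A, len(A)))  # 9
```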
Recurrences

• When an algorithm contains a recursive call to itself, its running time can often be described by a recurrence.
• A recurrence is a function defined in terms of:
  – one or more base cases, and
  – itself, with smaller arguments.
• Examples:

T(n) = 1           if n = 1
T(n) = T(n-1) + 1  if n > 1
Solution: T(n) = O(n)

T(n) = 1            if n = 1
T(n) = 2T(n/2) + n  if n > 1
Solution: T(n) = O(n lg n)

T(n) = 1                     if n = 1
T(n) = T(n/3) + T(2n/3) + n  if n > 1
Solution: T(n) = O(n lg n)
Solving recurrence equations.

• The Iterative Substitution Method
• The Recursion Tree
• The Guess-and-Test Method
• The Master Method
Iterative Substitution
In the iterative substitution, or "plug-and-chug," technique, we iteratively apply the recurrence equation to itself and see if we can find a pattern:

T(n) = 2T(n/2) + bn
     = 2(2T(n/2^2) + b(n/2)) + bn
     = 2^2·T(n/2^2) + 2bn
     = 2^3·T(n/2^3) + 3bn
     = 2^4·T(n/2^4) + 4bn
     = ...
     = 2^i·T(n/2^i) + i·bn

Note that the base case, T(n) = b, occurs when 2^i = n, that is, when i = log n. So:

T(n) = bn + bn log n

Thus, T(n) is O(n log n).
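A small numeric sanity check of the closed form (my own sketch, with b = 1 and n a power of two):

```python
import math

def T(n, b=1):
    """Evaluate T(n) = 2T(n/2) + b*n with T(1) = b, for n a power of two."""
    if n == 1:
        return b
    return 2 * T(n // 2, b) + b * n

for n in [2, 8, 64, 1024]:
    assert T(n) == n + n * math.log2(n)   # matches bn + bn log n with b = 1
print("closed form verified")
```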
Solving T(n) = 3T(n-2) with the iterative method

T(n) = 3T(n-2)
The first step is to iteratively substitute terms to arrive at a general form:
T(n-2) = 3T(n-2-2) = 3T(n-4)
T(n) = 3·3T(n-4) = 3²T(n-4)
Leading to the general form:
T(n) = 3^k · T(n-2k)
The recurrence stops when n - 2k = 1, i.e., when we reach T(1) (assume T(1) = 1); solving for k gives k = (n-1)/2.
Inserting that value into the general form:
T(n) = 3^((n-1)/2)
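A quick check of this closed form for odd n (a sketch; assumes the base case T(1) = 1):

```python
def T(n):
    """T(n) = 3*T(n-2) with T(1) = 1, for odd n >= 1."""
    return 1 if n == 1 else 3 * T(n - 2)

for n in range(1, 20, 2):
    assert T(n) == 3 ** ((n - 1) // 2)
print("T(n) = 3^((n-1)/2) verified for odd n")
```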
Recursion Tree Method to Solve Recurrence Relations
The recursion tree is another method for solving recurrence relations.
A recursion tree is a tree where each node represents the cost of a certain recursive sub-problem.
We sum up the values in all the nodes to get the cost of the entire algorithm.
Steps in the Recursion Tree Method to Solve Recurrence Relations

Step 1: Draw a recursion tree based on the given recurrence relation.
Step 2: Determine:
– the cost of each level,
– the total number of levels in the recursion tree,
– the number of nodes in the last level, and
– the cost of the last level.
Step 3: Add the costs of all the levels of the recursion tree and simplify the expression so obtained in terms of asymptotic notation.
The Recursion Tree
Draw the recursion tree for the recurrence relation and look for a pattern:

T(n) = b             if n < 2
T(n) = 2T(n/2) + bn  if n ≥ 2

depth   number of T's   size     time
0       1               n        bn
1       2               n/2      bn
i       2^i             n/2^i    bn
...     ...             ...      ...

Total time = bn + bn log n
(last level plus all previous levels)
Recursion-Tree Method

T(n) = T(n/3) + T(2n/3) + O(n)

[Figure: recursion tree for this recurrence; as noted in the examples above, its solution is T(n) = O(n lg n).]
Guess-and-Test Method
In the guess-and-test method, we guess a closed-form solution and then try to prove it is true by induction:

T(n) = b                   if n < 2
T(n) = 2T(n/2) + bn log n  if n ≥ 2

Guess: T(n) < cn log n.

T(n) = 2T(n/2) + bn log n
     ≤ 2(c(n/2) log(n/2)) + bn log n
     = cn(log n - log 2) + bn log n
     = cn log n - cn + bn log n

Wrong: we cannot make this last line be less than cn log n.
Guess-and-Test Method, Part 2
Recall the recurrence equation:

T(n) = b                   if n < 2
T(n) = 2T(n/2) + bn log n  if n ≥ 2

Guess #2: T(n) < cn log² n.

T(n) = 2T(n/2) + bn log n
     ≤ 2(c(n/2) log²(n/2)) + bn log n
     = cn(log n - log 2)² + bn log n
     = cn log² n - 2cn log n + cn + bn log n
     ≤ cn log² n, if c > b.

So, T(n) is O(n log² n).
In general, to use this method, you need to have a good guess and you need to be good at induction proofs.
Master Method
Many divide-and-conquer recurrence equations have the form:

T(n) = c               if n < d
T(n) = aT(n/b) + f(n)  if n ≥ d

The Master Theorem:
1. If f(n) is O(n^(log_b a - ε)), then T(n) is Θ(n^(log_b a)).
2. If f(n) is Θ(n^(log_b a) · log^k n), then T(n) is Θ(n^(log_b a) · log^(k+1) n).
3. If f(n) is Ω(n^(log_b a + ε)), then T(n) is Θ(f(n)), provided a·f(n/b) ≤ δ·f(n) for some δ < 1.
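A rough classifier for the three cases (my own sketch, not from the slides; it handles only driving functions of the form n^p · log^k n, which covers all of the examples below, and assumes the case-3 regularity condition holds for such f):

```python
import math

def master(a, b, p, k=0):
    """Classify T(n) = a*T(n/b) + n**p * log(n)**k via the master theorem.

    A sketch, not a general solver: only f(n) = n^p * log^k n is handled,
    and the exponents are compared as exact floats, so pick a, b whose
    log_b(a) is exact.
    """
    e = math.log(a, b)                    # the critical exponent log_b a
    if p < e:                             # case 1: f(n) is O(n^(e - eps))
        return f"Theta(n^{e:g})"
    if p == e:                            # case 2: f(n) is Theta(n^e log^k n)
        return f"Theta(n^{e:g} log^{k + 1} n)"
    # case 3: f(n) is Omega(n^(e + eps)); regularity assumed to hold
    return f"Theta(n^{p:g} log^{k} n)" if k else f"Theta(n^{p:g})"

print(master(4, 2, 1))     # T(n) = 4T(n/2) + n       -> Theta(n^2)
print(master(2, 2, 1, 1))  # T(n) = 2T(n/2) + n log n -> Theta(n^1 log^2 n)
print(master(9, 3, 3))     # T(n) = 9T(n/3) + n^3     -> Theta(n^3)
print(master(1, 2, 0))     # T(n) = T(n/2) + 1        -> Theta(n^0 log^1 n) = Theta(log n)
```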
Master Method, Example 1
Example: T(n) = 4T(n/2) + n
Solution: log_b a = 2, so case 1 says T(n) is O(n²).
Master Method, Example 2
Example: T(n) = 2T(n/2) + n log n
Solution: log_b a = 1, so case 2 says T(n) is O(n log² n).
Master Method, Example 3
Example: T(n) = T(n/3) + n log n
Solution: log_b a = 0, so case 3 says T(n) is O(n log n).
Master Method, Example 4
Example: T(n) = 8T(n/2) + n²
Solution: log_b a = 3, so case 1 says T(n) is O(n³).
Master Method, Example 5
Example: T(n) = 9T(n/3) + n³
Solution: log_b a = 2, so case 3 says T(n) is O(n³).
Master Method, Example 6
Example: T(n) = T(n/2) + 1 (binary search)
Solution: log_b a = 0, so case 2 says T(n) is O(log n).
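Since example 6 is binary search, here is a runnable sketch (my own, in 0-based Python) whose running time satisfies T(n) = T(n/2) + 1:

```python
def binary_search(A, target, lo=0, hi=None):
    """Return an index of target in sorted A, or -1; T(n) = T(n/2) + 1."""
    if hi is None:
        hi = len(A) - 1
    if lo > hi:                    # base case: empty range
        return -1
    mid = (lo + hi) // 2           # constant work per level
    if A[mid] == target:
        return mid
    if A[mid] < target:
        return binary_search(A, target, mid + 1, hi)   # recurse on one half
    return binary_search(A, target, lo, mid - 1)

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```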
Master Method, Example 7
Example: T(n) = 2T(n/2) + log n (heap construction)
Solution: log_b a = 1, so case 1 says T(n) is O(n).
Solve: T(n) = 9T(n/3) + n
Here a = 9, b = 3, f(n) = n, and n^(log_b a) = n^(log_3 9) = Θ(n²).
Since f(n) = O(n^(log_3 9 - ε)) with ε = 1, case 1 of the master theorem applies, and the solution is T(n) = Θ(n²).
Solve: T(n) = T(2n/3) + 1
Here a = 1, b = 3/2, f(n) = 1, and n^(log_b a) = n⁰ = 1.
Since f(n) = Θ(n^(log_b a)), case 2 of the master theorem applies, so the solution is T(n) = Θ(log n).
Solve: T(n) = 3T(n/4) + n log n
Here a = 3, b = 4, f(n) = n log n, and n^(log_b a) = n^(log_4 3) = O(n^0.793).
Since f(n) = Ω(n^(log_4 3 + ε)) with ε ≈ 0.2, case 3 applies if we can show that a·f(n/b) ≤ c·f(n) for some c < 1 and all sufficiently large n.
This would mean 3(n/4) log(n/4) ≤ c·n log n.
Setting c = 3/4 causes this condition to be satisfied.
Solution: T(n) = Θ(n log n).
Changing Variables
• Example: consider the recurrence
  T(n) = 2T(√n) + lg n,
  which looks difficult. We can simplify this recurrence, though, with a change of variables. For convenience, we shall not worry about rounding off values, such as √n, to be integers.
• Renaming m = lg n (so n = 2^m) yields
  T(2^m) = 2T(2^(m/2)) + m.
• We can now rename S(m) = T(2^m) to produce the new recurrence
  S(m) = 2S(m/2) + m,
• which has the solution S(m) = O(m lg m).
• Changing back from S(m) to T(n), we obtain
  T(n) = T(2^m) = S(m) = O(m lg m) = O(lg n lg lg n).
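A numeric sanity check of the O(lg n lg lg n) solution (my own sketch; restricted to n = 2^(2^k) so that √n stays an exact power of two, with an assumed base case T(2) = 1):

```python
import math

def T(n):
    """T(n) = 2*T(sqrt(n)) + lg n, with T(2) = 1, for n of the form 2^(2^k)."""
    if n <= 2:
        return 1
    return 2 * T(math.isqrt(n)) + math.log2(n)

for k in range(2, 6):
    n = 2 ** (2 ** k)
    ratio = T(n) / (math.log2(n) * math.log2(math.log2(n)))
    print(f"n = 2^{2 ** k}: T(n) / (lg n * lg lg n) = {ratio:.2f}")
# The ratio stays bounded (1.50, 1.33, 1.25, 1.20, ...), consistent with O(lg n lg lg n).
```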
Summary

• The running time of algorithm prefixAverages2 is given by the sum of three terms.
• The first and the third terms are O(n), and the second term is O(1).
• Thus the running time of prefixAverages2 is O(n), which is much better than that of the quadratic-time algorithm prefixAverages1.