
Data Structures and Algorithms 2

Chapter 2
Algorithm Analysis

Dr. Fouzia ANGUEL


2nd year / S3

September 2024 – January 2025


2
Course Outline

❖ Algorithm efficiency
- Machine-dependent vs machine-independent analysis

❖ Function ordering
- Order of growth
- Weak ordering
- Landau symbols: Big-Oh, Big-Omega, Big-Theta and Little-oh

❖ Algorithm complexity analysis


– Rules for complexity analysis
– Analysis of various types of algorithms
– Master Theorem
3
Algorithm Efficiency
Example : Shortest path problem

• A city has n view points


• Buses move from one view point to another
• A bus driver wishes to follow the shortest path (in terms of travel
time).
• Every view point is connected to another by a road.
• However, some roads are less congested than others.
• Also, roads are one-way, i.e., the road from view point 1 to 2 is
different from the road from view point 2 to 1.
4
Algorithm Efficiency
Example : Shortest path problem

How to find the shortest path between any pair of view points?


➔ Naïve approach
◆ List all the paths between a given pair of view points
◆ Compute the travel time for each.
◆ Choose the shortest one.
How many paths are there between any two view points
(without revisits)?

n! ≅ (n/e)^n
➔ It will be impossible to run the algorithm for n = 30
5
Algorithm efficiency

- Run time in the computer is Machine dependent

Example : Need to multiply two positive integers a and b

Subroutine 1: Multiply a and b

Subroutine 2: V ← a, W ← b
While W > 1
    V ← V + a; W ← W − 1
Output V
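
A minimal C++ sketch of the two subroutines (function names and types are illustrative, not from the slides):

// Subroutine 1: a single multiplication.
long long multiply1(long long a, long long b) {
    return a * b;
}

// Subroutine 2: repeated addition; roughly b additions and b subtractions.
long long multiply2(long long a, long long b) {
    long long v = a, w = b;        // assumes a and b are positive integers
    while (w > 1) {
        v = v + a;                 // one addition
        w = w - 1;                 // one subtraction
    }
    return v;
}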
6

Algorithm efficiency

First subroutine has 1 multiplication.


Second has b additions and subtractions.

For some architectures, 1 multiplication is more expensive


than b additions and subtractions.

Ideally, we would like to program all the choices, run them all on the
machine we are going to use, and see which one is more efficient!
7

Machine Independent Analysis

We assume that every basic operation takes constant time

Example Basic Operations:


Addition, Subtraction, Multiplication, Memory Access

Non-basic Operations:
Sorting, Searching

Efficiency of an algorithm is the number of basic operations it


performs
We do not distinguish between the basic operations.
8

Subroutine 1 uses 1 basic operation (*)


Subroutine 2 uses 2b basic operations (+, -)

Subroutine 1 is more efficient.

This measure is good for all large input sizes

In fact, we will not worry about the exact values, but will
look at “broad classes” of values.
Let there be n inputs.
If an algorithm needs n basic operations and another
needs 2n basic operations, we will consider them to be
in the same efficiency category.
However, we distinguish between exp(n), n, log(n)
Function Ordering 9

Order of Increase(order of growth)


We worry about the speed of our algorithms for large input sizes.
10

Quadratic Growth
Consider the two functions
f(n) = n^2 and g(n) = n^2 – 3n + 2
Around n = 0, they look very different
11

Quadratic Growth

Yet on the range n = [0, 1000], they are (relatively) indistinguishable:


12

Quadratic Growth
The absolute difference is large, for example,
f(1000) = 1 000 000
g(1000) = 997 002
but the relative difference is very small:
(f(1000) − g(1000)) / f(1000) = 2998 / 1 000 000 ≈ 0.3 %
and this relative difference goes to zero as n → ∞


13

Polynomial Growth
To demonstrate with another example,
f(n) = n^6 and g(n) = n^6 – 23n^5 + 193n^4 – 729n^3 + 1206n^2 – 648n

Around n = 0, they are very different


14

Polynomial Growth

Still, around n = 1000, the relative difference is less than 3%


15

Polynomial Growth

The justification for both pairs of polynomials being similar is that,
in both cases, they have the same leading term:
n^2 in the first case, n^6 in the second

Suppose, however, that the coefficients of the leading terms were
different
○ In this case, both functions would still exhibit the same rate of growth;
however, one would always be proportionally larger than the other
Weak ordering 16

Consider the following definitions:

○ We will consider two functions to be equivalent, f ~ g, if
   lim n→∞ f(n)/g(n) = c, where 0 < c < ∞

○ We will state that f < g if
   lim n→∞ f(n)/g(n) = 0

For the functions we are interested in, these define a weak ordering


Weak ordering 17

f and g are functions from the set of natural numbers to itself.

Let f(n) and g(n) describe the run-time of two algorithms

○ If f(n) ~ g(n), then it is always possible to improve the performance


of one function over the other by purchasing a faster computer

○ If f(n) < g(n), then you can never purchase a computer fast enough
so that the second function always runs in less time than the first

Note that for small values of n, it may be reasonable to use an algorithm


that is asymptotically more expensive, but we will consider these on a
one-by-one basis
18
Function orders “Landau Symbols”

we will make some assumptions:

○ Our functions will describe the time or memory required to solve a


problem of size n

○ We are restricting to certain functions :


■ They are defined for n ≥ 0
■ They are strictly positive for all n
● In fact, f(n) > c for some value c > 0
● That is, any problem requires at least one instruction and
one byte of memory
■ They are monotonically increasing
19
Function orders “Landau Symbols”
Big Oh Notation

A function f(n) is O(g(n)) if the rate of growth of f(n) is not


greater (not faster) than that of g(n).

Definition 1
f(n) = O(g(n)) if there are a number n0 and
a positive constant c such that

for all n ≥ n0 , 0 ≤ f(n) ≤ c·g(n).

If lim n→∞ f(n)/g(n) exists and is finite, then f(n) is O(g(n)).

Intuitively (not exactly), f(n) is O(g(n)) means f(n) ≤ g(n) for all n
beyond some value n0; i.e. g(n) is an upper bound for f(n).
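
For instance (a worked check of Definition 1, not from the original slide):
f(n) = 3n + 2 is O(n), because 0 ≤ 3n + 2 ≤ 4n for all n ≥ 2;
take c = 4 and n0 = 2.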
20

Function orders “Landau Symbols”

Example Functions

sqrt(n), n, 2n, ln n, exp(n), n + sqrt(n), n + n^2

lim n→∞ sqrt(n)/n = 0,        sqrt(n) is O(n)

lim n→∞ n/sqrt(n) = ∞,        n is not O(sqrt(n))

lim n→∞ n/2n = 1/2,           n is O(2n)

lim n→∞ 2n/n = 2,             2n is O(n)
21
lim n→∞ ln(n)/n = 0,          ln(n) is O(n)

lim n→∞ n/ln(n) = ∞,          n is not O(ln(n))

lim n→∞ exp(n)/n = ∞,         exp(n) is not O(n)

lim n→∞ n/exp(n) = 0,         n is O(exp(n))

lim n→∞ (n + sqrt(n))/n = 1,  n + sqrt(n) is O(n)

lim n→∞ n/(sqrt(n) + n) = 1,  n is O(n + sqrt(n))

lim n→∞ (n + n^2)/n = ∞,      n + n^2 is not O(n)

lim n→∞ n/(n + n^2) = 0,      n is O(n + n^2)


22

Implication of big-Oh notation

Suppose we know that our algorithm uses at most O(f(n)) basic
steps for any n inputs, and n is sufficiently large,

- then we know that our algorithm will terminate after
executing at most c·f(n) basic steps, for some constant c.

- We know that a basic step takes a constant amount of time on a
given machine.

Hence, our algorithm will terminate within a constant times f(n)
units of time, for all large n.
Function orders “Landau Symbols” 23

Ω “Omega” Notation
Now a lower bound notation, Ω
Definition 2
f(n) = Ω(g(n)) if there are a number n0 and a positive constant
c such that
for all n ≥ n0 , f(n) ≥ c·g(n).

If lim n→∞ f(n)/g(n) > 0 (or = ∞), when the limit exists, then f(n) is Ω(g(n)).

We say g(n) is a lower bound on f(n),

i.e. no matter what specific inputs we have, the algorithm
will not run faster than this lower bound.

Suppose an algorithm has complexity Ω(f(n)). This means that
there exists a positive constant c such that, for all sufficiently large n,
there is at least one input for which the algorithm consumes at
least c·f(n) steps.
24

Function orders “Landau Symbols”


θ “Theta” Notation

Definition 3
f(n) = θ(g(n)) if and only if f(n) is O(g(n)) and Ω(g(n))
Equivalently, f(n) = θ(g(n)) if there exist positive n0, c1, and c2 such that
c1·g(n) ≤ f(n) ≤ c2·g(n) whenever n ≥ n0

● θ(g(n)) expresses “asymptotic equality”

● lim n→∞ f(n)/g(n) is a finite, positive constant, if the limit exists

A function f(n) is θ(g(n)) if f(n) has a rate of
growth equal to that of g(n). Θ represents a tight bound in
asymptotic analysis: it captures both the upper
and lower bounds of a function's growth.
25

Function orders “Landau Symbols”


Little-oh Notation

Definition 4
f(n) = o(g(n)) if for every positive constant c, there exists
an n0 such that:
f(n) < c·g(n) when n > n0

Less formally, f(n) = o(g(n)) if f(n) = O(g(n)) and
f(n) ≠ θ(g(n)).
“asymptotic strict inequality”

lim n→∞ f(n)/g(n) = 0, if the limit exists
26
Function orders “Landau Symbols”

Suppose that f(n) and g(n) satisfy  lim n→∞ f(n)/g(n) = c

If 0 < c < ∞, it follows that f(n) = Θ(g(n))

If 0 ≤ c < ∞, it follows that f(n) = O(g(n))

If c = 0, it follows that f(n) = o(g(n))


27

Function orders “Landau Symbols”


28

Function orders “Landau Symbols”


Terminology

Asymptotically less than or equal to: O
Asymptotically greater than or equal to: Ω
Asymptotically equal to: θ
Asymptotically strictly less than: o
Little-o as a Weak Ordering 29

We can show that, for example,

ln(n) = o(n^p) for any p > 0
Proof: Using l'Hôpital's rule:
lim n→∞ ln(n)/n^p = lim n→∞ (1/n)/(p·n^(p−1)) = lim n→∞ 1/(p·n^p) = 0

If you are attempting to determine lim n→∞ f(n)/g(n)
but both f(n) → ∞ and g(n) → ∞, it follows that
lim n→∞ f(n)/g(n) = lim n→∞ f′(n)/g′(n)

Repeat as necessary…
Note: the k-th derivative of f is written f^(k)(n)
30
Big-Θ as an Equivalence Relation

If we look at the first relationship, we notice that


f(n) = Θ(g(n)) seems to describe an equivalence relation:

1. f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))   (symmetry)

2. f(n) = Θ(f(n))   (reflexivity)

3. If f(n) = Θ(g(n)) and g(n) = Θ(h(n)), it follows that f(n) = Θ(h(n))   (transitivity)

Consequently, we can group all functions into equivalence classes,


where all functions within one class are big-theta Θ of each other
31
Big-Θ as an Equivalence Relation

For example, all of


n^2
100000 n^2 – 4n + 19
n^2 + 1000000
323 n^2 – 4n ln(n) + 43n + 10
42n^2 + 32
n^2 + 61n ln^2(n) + 7n + 14 ln^3(n) + ln(n)
are big-Θ of each other
E.g., 42n^2 + 32 = Θ( 323 n^2 – 4n ln(n) + 43n + 10 )

We will select just one element to represent the entire class of


these functions: n^2
○ We could choose any function, but this is the simplest
32
Function orders “Landau Symbols”
Terminology
The most common classes are given names:
Θ(1) constant
Θ(ln(n)) or Θ(log(n)) logarithmic
Θ(n) linear
Θ(n ln(n)) “n log n”
Θ(n^2) quadratic
Θ(n^3) cubic
Θ(2^n), Θ(e^n), Θ(4^n), ... exponential

Recall that all logarithms are scalar multiples of each other,
since log_b(n) = ln(n)/ln(b)

Therefore log_b(n) = Θ(ln(n)) for any base b
33

Function orders “Landau Symbols”


Example Functions

sqrt(n), n, 2n, ln n, exp(n), n + sqrt(n), n + n^2

lim n→∞ sqrt(n)/n = 0,        sqrt(n) is o(n) and O(n)

lim n→∞ n/sqrt(n) = ∞,        n is Ω(sqrt(n))

lim n→∞ n/2n = 1/2,           n is θ(2n)

lim n→∞ 2n/n = 2,             2n is θ(n)
34
lim n→∞ ln(n)/n = 0,          ln(n) is o(n)

lim n→∞ n/ln(n) = ∞,          n is Ω(ln(n))

lim n→∞ exp(n)/n = ∞,         exp(n) is Ω(n)

lim n→∞ n/exp(n) = 0,         n is o(exp(n))

lim n→∞ (n + sqrt(n))/n = 1,  n + sqrt(n) is θ(n)

lim n→∞ n/(sqrt(n) + n) = 1,  n is θ(n + sqrt(n))

lim n→∞ (n + n^2)/n = ∞,      n + n^2 is Ω(n)

lim n→∞ n/(n + n^2) = 0,      n is o(n + n^2)


35

Algorithms Analysis

An algorithm is said to have polynomial time complexity if its


run-time may be described by O(n^d) for some fixed d ≥ 0
○ We will consider such algorithms to be efficient

Problems that have no known polynomial-time algorithms are said


to be intractable

○ Traveling salesman problem: find the shortest path that


visits n cities
○ Best run time: Θ(n^2 · 2^n)
36

Complexity of a Problem Vs Algorithm

A problem is O(f(n)) means there is some O(f(n))


algorithm to solve the problem.

A problem is Ω(f(n)) means every algorithm that


can solve the problem is Ω(f(n))
37
Rules for arithmetic with big-O symbols

Rule 1

If T1(n) = O(f (n)) and T2(n) = O(g(n)), then


(a) T1(n) + T2(n) = O(f (n) + g(n)) (intuitively and less formally it
is O(max(f (n), g(n)))),
(b) T1(n) ∗ T2(n) = O(f (n) ∗ g(n)).

Rule 2
If T(n) is a polynomial of degree k, then T(n) = θ(nk).

Rule 3
• log^k n = O(n) for any constant k.
This tells us that logarithms grow very slowly.
Rules for arithmetic with big-O symbols 38

Rule 4
If f(n) = O(g(n)), then
c * f(n) = O(g(n)) for any constant c.

Rule 5
If f1(n) = O(g(n)) but f2(n) = o(g(n)), then
f1(n) + f2(n) = O(g(n)).
Rule 6
If f(n) = O(g(n)) and g(n) = o(h(n)), then
f(n) = o(h(n)) (transitivity).

These are not all of the rules, but they’re enough for most purposes.
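
As an illustration (not from the original slides), consider T(n) = 3n^2 + 5n·log n + 7:
- 5n·log n = o(n^2) and 7 = o(n^2), so by Rule 5, T(n) = O(3n^2) = O(n^2) (using Rule 4);
- clearly T(n) ≥ n^2, so T(n) is also Ω(n^2);
- hence T(n) = θ(n^2).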
Algorithm Complexity Analysis 39

• Three cases for which the efficiency of algorithms has to be


determined :
- worst case: when the algorithm requires the maximum
number of steps,
- best case: when the number of steps is the
smallest, and
- average case: falls between these two extremes.
• We define Tavg(N) and Tworst(N), as the average and worst-case
running time, resp., used by an algorithm on input of size N.
Clearly, Tavg(N) ≤ Tworst(N).
• Average-case performance often reflects typical behavior
• Worst-case performance represents a guarantee for performance
on any possible input.
• The best-case performance of an algorithm is of little interest:
it does not represent typical behavior. It is only occasionally
analyzed.
40
Algorithm Complexity Analysis

Example

1. diff = sum = 0;              ● Line 1 takes 2 basic steps
2. for (k = 0; k < N; k++)      ● in every iteration of the first loop:
3.     sum = sum + 1;             Line 3 takes 2 basic steps
4.     diff = diff - 1;           Line 4 takes 2 basic steps
                                ● the first loop runs N times

5. for (k = 0; k < 3*N; k++)    ● in every iteration of the second loop:
6.     sum = sum - 1;             Line 6 takes 2 basic steps
                                ● the second loop runs 3N times

Overall, 2 + 4N + 6N steps (without counting the test and increment
operations for each iteration in the two loops)

This is O(N)
Algorithm Complexity Analysis 41

General Rules
Rule 1 - Consecutive Statements:
Their running times just add, which means that the maximum is the
one that counts.

Rule 2 - Complexity of a loop:
The running time of a loop is at most the running time of the statements
inside the loop (including tests) times the number of iterations.
O(number of iterations of the loop * maximum complexity of
each iteration)

Rule 3 - Nested Loops:
The running time of a statement inside a group of nested loops is the
running time of the statement multiplied by the product of the sizes of
all the loops.
Complexity of an outer loop = number of iterations of this
loop * complexity of the inner loop, etc.
Algorithm Complexity Analysis 42

Example

1. sum = 0;
2. for (i = 0; i < N; i++)          Outer loop: N iterations
3.     for (j = 0; j < N; j++)      Inner loop: O(N)
4.         sum = sum + 1;           Overall: O(N^2)

1. for (i = 0; i < N; i++)          First loop: O(N)
2.     a[i] = 0;
3. for (i = 0; i < N; i++)          Outer loop: N iterations
4.     for (j = 0; j < N; j++)      Inner loop: O(N)
5.         a[i] = a[j] + i + j;     Overall: O(N) + O(N^2), so O(N^2)
43
Algorithm Complexity Analysis
General Rules
Rule 4 - If/else
For the fragment
if (Condition)
    S1                        Maximum of the two complexities
else
    S2
The running time of an if/else statement is never more than the running
time of the test plus the larger of the running times of S1 and S2

if (yes)
    print(1, 2, …, 1000N)
else
    print(1, 2, …, N^2)       overall O(N^2)

The basic strategy is to analyze from the inside (or deepest part) out. If
there are function calls, these must be analyzed first.
Algorithm Complexity Analysis 44
Analysis of recursion
• If the recursion is really just a disguised for loop, the analysis is
usually trivial.

long factorial(int n) {
    if (n <= 1)                      // n calls in total, O(1) work each: O(N)
        return 1;
    else
        return n * factorial(n - 1);
}

• However, if the recursion is properly used . The analysis will involve


a recurrence relation.
Algorithm Complexity Analysis 45
Analysis of recursion
• Suppose we have the following code:
long fib(int n) {
1.  if (n <= 1)
2.      return 1;
    else
3.      return fib(n - 1) + fib(n - 2);
}
Let T(n) be the running time of the function call fib(n)
If n = 0 or n = 1:  T(0) = T(1) = O(1)

If n >= 2:
T(n) = cost of the constant test at line 1 + cost of the work at line 3

T(n) = 1 op + (1 addition + 2 recursive calls)

Algorithm Complexity Analysis 46
Analysis of recursion
T(n) = 1 op + (addition + cost of fib(n-1) + cost of fib(n-2))
Thus,
T(n) = T(n-1) + T(n-2) + 2
Since fib(n) = fib(n-1) + fib(n-2),
it is easy to show by induction that:
T(n) >= fib(n)
• We showed (in Chapter 1) that fib(n) < (5/3)^n
• A similar proof shows that, for n > 4, fib(n) >= (3/2)^n
Thus T(n) >= (3/2)^n, and so
the running time of the program grows exponentially.
This program is slow because a huge amount of
redundant work is being performed.

By using an array and a for loop (see the sketch below),
the program's running time can be reduced substantially.
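
A minimal C++ sketch of this idea (illustrative, not the textbook's listing): store the already computed values in an array so that each value is computed only once.

#include <vector>

long fibLinear(int n) {
    if (n <= 1)
        return 1;                          // same convention as above: fib(0) = fib(1) = 1
    std::vector<long> f(n + 1);
    f[0] = f[1] = 1;
    for (int i = 2; i <= n; ++i)
        f[i] = f[i - 1] + f[i - 2];        // each value computed exactly once
    return f[n];
}

Each fib(i) is now computed once, so the running time drops from exponential to O(N).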
Algorithm Complexity Analysis 47

Maximum Subsequence Problem


Given an array of N elements A1, A2, A3, …, AN (possibly negative),

find the maximum value of  Σ (k = i to j) Ak

Need to find i, j such that the sum of all elements
between the i-th and j-th positions is maximum over all such sums

(for convenience, the maximum subsequence sum is 0 if all integers are negative)

Example
For the input -2, 11, -4, 13, -5, -2 the answer is 20 (11 - 4 + 13)

We will discuss four algorithms to solve it; their performance varies:
O(N), O(N log N), O(N^2), O(N^3)
Running time of the 4 algorithms for maximum subsequence sum 48
(in seconds)

Figure: [textbook Weiss, Figure 2.2]


Maximum Subsequence Problem 49

Algorithm 1
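
The slide's listing is not reproduced here; the following C++ sketch of the cubic algorithm is modeled on Weiss's maxSubSum1 (details may differ from the original figure):

/**
 * Cubic maximum contiguous subsequence sum algorithm.
 */
int maxSubSum1( const vector<int> & a )
{
    int maxSum = 0;

    for( int i = 0; i < a.size( ); ++i )        // candidate start index i
        for( int j = i; j < a.size( ); ++j )    // candidate end index j
        {
            int thisSum = 0;
            for( int k = i; k <= j; ++k )       // recompute A_i + ... + A_j from scratch
                thisSum += a[ k ];

            if( thisSum > maxSum )
                maxSum = thisSum;
        }

    return maxSum;
}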
50

Complexity of Algorithm 1

Because constants do not matter, the runtime is obtained from the
triple sum:
Σ (i = 0 to N-1) Σ (j = i to N-1) Σ (k = i to j) 1

We have:
innermost loop:  Σ (k = i to j) 1 = j - i + 1
inner loop:      Σ (j = i to N-1) (j - i + 1) = (N - i)(N - i + 1)/2
outer loop:      Σ (i = 0 to N-1) (N - i)(N - i + 1)/2 = (N^3 + 3N^2 + 2N)/6

Overall: O(N^3)
51

Analysis of Algorithm 1
The computation of the partial sums in Algorithm 1 can be made more
efficient, leading to O(N^2).
The cubic running time can be avoided by removing the innermost
for loop, because:
Σ (k = i to j) Ak = Aj + Σ (k = i to j-1) Ak
i.e. the sum for the range i..j is obtained from the sum for i..j-1 with a
single extra addition.
52
Maximum Subsequence Problem
Algorithm 2
/**
* Quadratic maximum contiguous subsequence sum algorithm.
*/
int maxSubSum2( const vector<int> & a )
{
    int maxSum = 0;

    for( int i = 0; i < a.size( ); ++i )
    {
        int thisSum = 0;
        for( int j = i; j < a.size( ); ++j )
        {
            thisSum += a[ j ];

            if( thisSum > maxSum )
                maxSum = thisSum;
        }
    }
    return maxSum;
}
53

Complexity of Algorithm 2

The runtime of Algorithm 2 is obtained from the two nested for loops:
Σ (i = 0 to N-1) Σ (j = i to N-1) 1 = N(N + 1)/2

O(N^2)
54

Maximum Subsequence Problem


Algorithm 3
Divide and Conquer

Divide-and- conquer strategy :

❖ Split the big problem into “two” small sub-problems,

❖ Solve each of them efficiently,

❖ Combine the “two” solutions.


Maximum Subsequence Problem 55

Algorithm 3
Divide and Conquer

❖ Divide the array into two parts: left part, right part each to
be solved recursively

❖ The maximum subsequence can be in one of three places :


- completely in the left half ,
- or completely in right half
- or it crosses the middle and is both halves.

➔ The first two cases can be solved recursively

➔ The last case can be obtained by finding the max
subsequence in the left half ending at its last element and
the max subsequence in the right half starting at the
center (i.e. the first element of the second half), and adding the two
56
Maximum Subsequence Problem
Algorithm 3

Example
First half Second half
4 –3 5 –2 -1 2 6 -2
Max subsequence sum for first half = 6 (elements A1–A3)
second half = 8 (elements A6–A7)
Max subsequence sum for first half ending at the last
element (4th elements included) is 4 (elements A1–A4)
Max subsequence sum for second half starting at the first
element (5th element included) is 7 (elements A5–A7)
Max subsequence sum spanning the middle is 4 + 7 = 11
Max subsequence spans the middle
Maximum Subsequence Problem 57
Algorithm 3 : divide and conquer
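
The slide's listing is not reproduced here; the following C++ sketch is modeled on Weiss's maxSumRec (the line numbers quoted in the analysis below refer to the textbook's listing, not to this sketch). It assumes "using namespace std" and the headers <vector> and <algorithm>.

int maxSumRec( const vector<int> & a, int left, int right )
{
    if( left == right )                                      // base case: one element
        return a[ left ] > 0 ? a[ left ] : 0;

    int center = ( left + right ) / 2;
    int maxLeftSum  = maxSumRec( a, left, center );          // case 1: entirely in the left half
    int maxRightSum = maxSumRec( a, center + 1, right );     // case 2: entirely in the right half

    int maxLeftBorderSum = 0, leftBorderSum = 0;             // case 3: best sum ending at the center
    for( int i = center; i >= left; --i )
    {
        leftBorderSum += a[ i ];
        if( leftBorderSum > maxLeftBorderSum )
            maxLeftBorderSum = leftBorderSum;
    }

    int maxRightBorderSum = 0, rightBorderSum = 0;           // ... plus best sum starting after the center
    for( int j = center + 1; j <= right; ++j )
    {
        rightBorderSum += a[ j ];
        if( rightBorderSum > maxRightBorderSum )
            maxRightBorderSum = rightBorderSum;
    }

    return max( max( maxLeftSum, maxRightSum ),
                maxLeftBorderSum + maxRightBorderSum );      // best of the three cases
}

int maxSubSum3( const vector<int> & a )
{
    return a.empty( ) ? 0 : maxSumRec( a, 0, a.size( ) - 1 );
}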
58
Complexity analysis
Algorithm 3 59

Let T(N) be the time it takes to solve a maximum subsequence
sum problem of size N.
• If N = 1: lines 8 to 12 are executed; taken to be one unit: T(1) = 1

• N > 1: 2 recursive calls, 2 for loops, some bookkeeping operations
(e.g. lines 14, 34)

– The 2 for loops (lines 19 to 32): clearly O(N) in total

– Lines 8, 14, 18, 26, 34: constant time; negligible compared to O(N)

– The recursive calls are each made on half the array size:
2 * T(N/2)

So the program's running time satisfies:

T(N) = 2 * T(N/2) + O(N), with T(1) = 1


Complexity analysis for 60

Algorithm 3

T(1) = 1
T(n) = 2T(n/2) + cn
     = 2(2T(n/4) + cn/2) + cn
     = 4T(n/4) + 2cn
     = 8T(n/8) + 3cn
     = …
     = 2^i T(n/2^i) + i·cn
     = …            (we reach T(1) when n = 2^i, i.e. i = log n)
     = n·T(1) + cn·log n
     = n + cn·log n = O(n log n)
Complexity analysis 61
Algorithm 4

T(N) = O(N): obvious, since the algorithm makes a single pass over the
array; but why the algorithm is correct is much less obvious
(see the sketch below).
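
A C++ sketch of the linear algorithm, modeled on Weiss's maxSubSum4 (illustrative):

/**
 * Linear-time maximum contiguous subsequence sum algorithm.
 */
int maxSubSum4( const vector<int> & a )
{
    int maxSum = 0, thisSum = 0;

    for( int j = 0; j < a.size( ); ++j )
    {
        thisSum += a[ j ];

        if( thisSum > maxSum )
            maxSum = thisSum;       // best sum seen so far
        else if( thisSum < 0 )
            thisSum = 0;            // a negative running sum can never start an optimal subsequence
    }

    return maxSum;
}

The key observation is that if the running sum ever becomes negative, it can be discarded: any subsequence that starts inside such a negative prefix could be improved by starting just after it.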
Complexity analysis 62

Binary Search
• Given an integer X and integers A0,A1,A2, …..,An-1 which
are presorted.
• find i such that Ai = X, or
• return i= -1 if X is not in the input.

Solution 1
➔ Scan through the list from left to right. Runs in linear
time.
➔ This algorithm does not take advantage of the fact that the
list is sorted.
Solution 2 (better)
➔ Check whether X is the middle element. If so, the answer is found.
➔ If X < the middle element, we apply the same strategy to
the sorted subarray to the left;
➔ likewise, if X > the middle element, we look in the right half.
Complexity analysis 63
Binary Search
Algorithm 1
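
The slide's listing of Algorithm 1 (the iterative version) is not reproduced here; a C++ sketch (illustrative):

/**
 * Iterative binary search: returns the index i with a[i] == x, or -1 if x is absent.
 * Assumes a is sorted in non-decreasing order.
 */
int binarySearch( const vector<int> & a, int x )
{
    int low = 0, high = (int) a.size( ) - 1;

    while( low <= high )
    {
        int mid = ( low + high ) / 2;

        if( a[ mid ] < x )
            low = mid + 1;          // x can only be in the right half
        else if( a[ mid ] > x )
            high = mid - 1;         // x can only be in the left half
        else
            return mid;             // found
    }
    return -1;                      // NOT PRESENT
}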
64

Algorithm 2
Search(num, A[], left, right)
{
    if (left = right)
    {
        if (A[left] = num) return left;
        else conclude NOT PRESENT and exit;
    }
    center = ⌊(left + right)/2⌋;
    if (A[center] < num)
        return Search(num, A[], center + 1, right);
    if (A[center] > num)
        return Search(num, A[], left, center);
    if (A[center] = num) return center;
}
Complexity analysis 65

Binary Search
Algorithm 1
The work done inside the loop takes O(1) per iteration.
Number of iterations?
The iterations continue until the search space is
reduced to 1 (or the target is found). The successive sizes of the
search space are:
n, n/2, n/4, …, 1
The number of halvings needed to reduce n to 1 is log_2 n.

Thus, the running time of Algorithm 1 is O(log n)

Algorithm 2
T(n) = T(n/2) +C

the running time of Algorithm 2 is O(log n)


Complexity analysis 66

divide and conquer


Master Theorem
Used to calculate time complexity of divide-and-conquer algorithms.
It applies to recurrence relations of the form:
T(n) = aT(n/b) + f(n)
where
– n is the size of the input;
– a is the number of subproblems in the recursion;
– n/b is the size of each subproblem (all assumed to have the same size);
– f(n): cost of work done outside recursive calls.
n/b might not be an integer, but replacing T(n/b) with T(⌈n/b⌉)
or T(⌊n/b⌋) does not affect the asymptotic behavior of the recurrence.
Complexity analysis 67
divide and conquer
Master Theorem “Basic Form”

The master theorem compares the function n^(log_b a) to the
function f(n).

➔ Intuitively, if n^(log_b a) is larger (by a polynomial factor),
then the solution is T(n) = Θ(n^(log_b a))

➔ If f(n) is larger (by a polynomial factor), then the solution
is T(n) = Θ(f(n))

➔ If they are the same size, then we multiply by a
logarithmic factor: T(n) = Θ(n^(log_b a) · log n)
Complexity analysis 68

divide and conquer


Master Theorem “Basic Form”
Master Theorem
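
For reference, the standard statement of the basic form, for T(n) = aT(n/b) + f(n) with a ≥ 1 and b > 1:

Case 1: if f(n) = O(n^(log_b a − ε)) for some constant ε > 0,
        then T(n) = Θ(n^(log_b a))
Case 2: if f(n) = Θ(n^(log_b a)),
        then T(n) = Θ(n^(log_b a) · log n)
Case 3: if f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and
        a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n,
        then T(n) = Θ(f(n))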
69

These cases are not exhaustive:

➔ it is possible for f(n) to be asymptotically larger than n^(log_b a),
but not larger by a polynomial factor (no matter how small the
exponent in the polynomial is).

For example, this is the case for T(n) = 2T(n/2) + n log n:
here f(n) = n log n grows faster than n^(log_2 2) = n, but not faster by
any polynomial factor n^ε.

➔ In this situation, the basic master theorem does not apply.

If you need to solve such a recurrence, you either have to use
the advanced version of the Master Theorem, or apply
another method such as the recursion tree or the substitution
method.
Complexity analysis 70

divide and conquer


Basic Form of Master Theorem

Examples
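
A few typical applications of the basic form (illustrative examples):

• T(n) = 8T(n/2) + n^2:  a = 8, b = 2, n^(log_2 8) = n^3;
  f(n) = n^2 = O(n^(3−ε)), so Case 1 gives T(n) = Θ(n^3).
• T(n) = 2T(n/2) + cn (e.g. the divide-and-conquer maximum subsequence sum):
  n^(log_2 2) = n = Θ(f(n)), so Case 2 gives T(n) = Θ(n log n).
• T(n) = T(n/2) + c (binary search):  n^(log_2 1) = 1 = Θ(f(n)),
  so Case 2 gives T(n) = Θ(log n).
• T(n) = 2T(n/2) + n^2:  f(n) = n^2 = Ω(n^(1+ε)) and 2·(n/2)^2 = n^2/2 ≤ (1/2)·n^2,
  so Case 3 gives T(n) = Θ(n^2).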
Complexity analysis 71

divide and conquer


Master Theorem
Example 1
Complexity analysis 72

divide and conquer


Basic Form of the Master Theorem
Example 2
Complexity analysis 73

divide and conquer


Master Theorem
Example 3
Complexity analysis 74

divide and conquer


Basic Form of Master Theorem
Example 4
Complexity analysis 75

Recursion

There are three methods for solving recurrences—that is, for obtaining
asymptotic “Θ” or “O” bounds on the solution:

- In the substitution method, we guess a bound and then use


mathematical induction to prove our guess correct.

- The recursion-tree method converts the recurrence into a tree whose


nodes represent the costs incurred at various levels of the recursion.
We use techniques for bounding summations to solve the
recurrence.
- The basic master theorem is used to solve three cases of
recurrences. In addition, the advanced master theorem
extends the basic version to handle more complex recurrences
that may involve multiple terms or non-polynomial functions.
This version allows for more flexibility in analyzing algorithms
that do not fit neatly into the basic cases.
