AC Unit1

The document provides an overview of algorithms, defining them as well-defined computational procedures that transform inputs into outputs. It discusses key characteristics and properties of algorithms, including finiteness, definiteness, input/output requirements, and effectiveness. Additionally, it covers algorithm analysis, including time and space complexity, and introduces various notations for measuring algorithm performance.

Unit I

Introduction
Algorithm
An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.
Algorithmic Solution
• From this definition, we can identify five important characteristics of algorithms:
• Algorithms are well-ordered.
• Algorithms have unambiguous operations.
• Algorithms have effectively computable operations.
• Algorithms produce a result.
• Algorithms halt in a finite amount of time.
Properties of an Algorithm

– Finiteness: an algorithm terminates after a finite number of steps.
– Definiteness: each step in the algorithm is unambiguous. The action specified by a step cannot be interpreted in multiple ways and can be performed without any confusion.
– Input: an algorithm accepts zero or more inputs.
– Output: it produces at least one output.
– Effectiveness: it consists of basic instructions that are realizable. The instructions can be performed using the given inputs in a finite amount of time.
Top Down Design

Main Task

subtask1 subtask2 subtask3


Write an algorithm to find the largest of a set of
numbers. You do not know the number of
numbers.
FindLargest
Input: A list of positive integers
1. Set Largest to 0
2. while (more integers)
2.1 if (the integer is greater than Largest)
then
2.1.1 Set Largest to the value of the integer
End if
End while
3. Return Largest
End
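The pseudocode above can be sketched in C. The helper name `find_largest` is hypothetical; like the pseudocode, it assumes a list of positive integers, which is why initializing to 0 is safe:

```c
#include <stddef.h>

/* Returns the largest value in a list of positive integers,
 * mirroring the FindLargest pseudocode. */
int find_largest(const int *a, size_t n) {
    int largest = 0;                 /* Step 1: Set Largest to 0 */
    for (size_t i = 0; i < n; i++) { /* Step 2: while (more integers) */
        if (a[i] > largest)          /* Step 2.1 */
            largest = a[i];          /* Step 2.1.1 */
    }
    return largest;                  /* Step 3: Return Largest */
}
```

If the list could contain non-positive values, `largest` would instead be initialized from the first element.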
Distinct Areas of Study of Algorithms
Devise an Algorithm
Find good algorithms
Validate an Algorithm
Correct answers for all legal inputs
Analyze an Algorithm
Computational time and memory requirements
Test a Program
Debugging and profiling
Algorithm Specification
Comments
//
Blocks
{ }
Identifiers
Start with a letter
Assignment
id := 4
Boolean Values
true and false
Multidimensional arrays
A[i, j]; indexing starts at zero
Input and Output
Use instructions such as read and write
Algorithm Specification
Selection (test)
If <condition> then task1
If <condition> then task1 else task2
Multiple cases
Algorithm Specification

Repetition

• While/Repeat
While (cond) do
...
end while

• Do while/Repeat
Do
...
while (cond)
Algorithm Specification

One procedure

example
Example: Selection Sort Algorithm
First Attempt:
Find the smallest element in the unsorted list and place it next to the sorted list.

If the smallest element is a[j], then interchange a[i] and a[j].

Selection Sort
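A minimal C sketch of the idea described above (find the smallest element a[j] in the unsorted part, then interchange a[i] and a[j]); the function name is illustrative:

```c
#include <stddef.h>

void selection_sort(int a[], size_t n) {
    for (size_t i = 0; i + 1 < n; i++) {
        size_t j = i;                      /* index of smallest so far */
        for (size_t k = i + 1; k < n; k++)
            if (a[k] < a[j])
                j = k;                     /* smallest element is a[j] */
        int tmp = a[i];                    /* interchange a[i], a[j] */
        a[i] = a[j];
        a[j] = tmp;
    }
}
```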
Recursive Algorithms
Direct Recursion
A function calling itself
Indirect Recursion
Function A calling B, and B calling A again

Example: Binomial Theorem

Recursive Algorithms
Examples
Towers of Hanoi
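Towers of Hanoi is a classic direct-recursion example. A sketch that counts the single-disk moves (which total 2^n − 1 for n disks); the function name and signature are illustrative:

```c
/* Move n disks from peg 'from' to peg 'to' via 'aux',
 * returning the number of single-disk moves performed. */
unsigned long hanoi(unsigned n, char from, char to, char aux) {
    if (n == 0)
        return 0;                             /* nothing to move */
    unsigned long moves = hanoi(n - 1, from, aux, to); /* top n-1 disks out of the way */
    moves += 1;                               /* move the largest disk from -> to */
    moves += hanoi(n - 1, aux, to, from);     /* stack the n-1 disks back on top */
    return moves;
}
```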
Time and Space Complexity
Time complexity
Time complexity of an algorithm quantifies the amount
of time taken by an algorithm to run as a function of the
length of the input.
Space complexity
Space complexity of an algorithm quantifies the amount of
space or memory taken by an algorithm to run as a
function of the length of the input.
Count Method to Calculate Time Complexity
Introduce a new variable, count, into a program by identifying program steps and incrementing count by an appropriate amount each time a statement in the original program executes.
Program Step: a syntactically meaningful segment of a program that requires execution time which is independent of the instance characteristics.
Count Method to Calculate Time Complexity…
The number of steps assigned to a statement depends on its kind. For example:
– Comments count as zero steps.
– An assignment statement that does not involve any calls to other algorithms is counted as one step.
– For a looping statement, the step count equals the number of step counts assignable to the goal-value expression; count is incremented by one within the block and once after completion of the block.
– For a conditional statement, the step count is incremented by one before the condition is tested.
– A return statement is counted as one step; the increment is written before the return statement.
For example –
Algorithm Sum(a,n)
{ s:=0;
for i:=1 to n do
{
s:=s+a[i];
}
return s;
}
Algorithm Sum(a,n) // After Adding Count
{ s:=0;
count:=count+1; // for assignment statement execution
for i:=1 to n do
{
count:=count+1; //for For loop Assignment
s:=s+a[i];
count:=count+1;// for addition statement execution
}
count:=count+1; // for last time of for
count:=count+1; // for the return
return s;
}
Algorithm Sum(a,n) //Simplified version for algorithm
Sum
{
for i:=1 to n do
{
count:=count+2;
}
count:=count+3;
}
Form above example,
Total number of program steps= 2n + 3, where n is
the loop counter.
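The instrumentation above can be reproduced directly in C; the global count reaches 2n + 3 for the Sum algorithm (function and variable names follow the pseudocode, translated to 0-based indexing):

```c
static long count = 0;

/* Sum with the count increments from the text:
 * 1 for s:=0, 2 per iteration (loop test + addition),
 * 1 for the final failing loop test, 1 for the return. */
int sum_counted(const int a[], int n) {
    int s = 0;
    count++;                    /* assignment s := 0 */
    for (int i = 0; i < n; i++) {
        count++;                /* for-loop test */
        s += a[i];
        count++;                /* addition statement */
    }
    count++;                    /* last evaluation of the for condition */
    count++;                    /* return */
    return s;
}
```

Running it on an array of 5 elements leaves count at 2(5) + 3 = 13.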
Algorithm RSum(a,n)
{
if n ≤ 0 then
return a[n];
else
return RSum(a, n-1) + a[n];
}
Algorithm RSum(a,n)
{
count:=count + 1; // for the if condition
if n ≤ 0 then
{
count:=count + 1;// for the return statement
return a[n];
}
else
{
count:=count + 1; // for the addition, function invoked
& return
return RSum(a, n-1) + a[n];
}
}
Therefore we can write,
• tRSum(n) = 2 if n=0 and
• tRSum(n) = 2+ tRSum(n-1) if n>0
= 2+ 2+ tRSum(n – 2)
= 2(2)+ tRSum(n – 2)
=3 (2) + tRSum(n – 3)
:
:
= n(2) + tRSum(0)
=2n+2
So, the step count for RSum algorithm is 2n+2.
Algorithm RSum(a,n) //Simplified version of algorithm RSum with counts only
{
count:=count + 1;
if n ≤ 0 then
count:=count + 1;
else
count:=count + 1;
}
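The same instrumentation for the recursive RSum in C (count reaches 2n + 2; assumes n ≥ 0, matching the pseudocode):

```c
static long count = 0;

/* RSum with count increments: 1 for the if test on every call,
 * 1 for the base-case return, 1 for the addition/call/return. */
int rsum_counted(const int a[], int n) {
    count++;                    /* if condition */
    if (n <= 0) {
        count++;                /* base-case return */
        return a[0];
    }
    count++;                    /* addition, recursive call, and return */
    return rsum_counted(a, n - 1) + a[n];
}
```

For n = 5 there are six calls; each contributes 2 to count, giving 2(5) + 2 = 12.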
Table Method to Calculate Time Complexity
Build a table listing the total number of steps contributed by each statement. This table contains three columns:
Steps per execution (s/e): the amount by which count increases after one execution of the statement.
Frequency: the total number of times the statement executes.
Total steps: obtained by multiplying s/e and frequency.
The total step count is calculated by adding the total-steps values.
Statement               s/e   Frequency   Total steps
Algorithm Sum(a,n)       0        1           0
{                        0        1           0
s:=0;                    1        1           1
for i:=1 to n do         1       n+1         n+1
s:=s+a[i];               1        n           n
return s;                1        1           1
}                        0        1           0

Total step count = 2n + 3


Analyzing Algorithms
Phases of Analysis
Priori Analysis
• The bounds on time are obtained by formulating a function based on theory.
• Independent of programming language and machine structure, e.g. O-notation.
Posteriori Analysis
• Depends on the programming language and machine structure.
• Time and space are recorded during execution.
• The larger the input, the more time is taken, e.g. insertion sort.
Types of Analysis
Worst case
Provides an upper bound on running time
An absolute guarantee that the algorithm will not run longer, no matter what the input is
Best case
Provides a lower bound on running time
The input is the one for which the algorithm runs fastest
Lower Bound ≤ Running Time ≤ Upper Bound
Average case
Provides a prediction about the running time
Assumes that the input is random
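Linear search illustrates the three cases: in the best case the key is the first element (1 comparison, the lower bound); in the worst case it is last or absent (n comparisons, the upper bound). A sketch with a hypothetical comparison counter:

```c
#include <stddef.h>

/* Returns the index of key in a[0..n-1], or -1 if absent;
 * *cmps receives the number of element comparisons performed. */
long linear_search(const int a[], size_t n, int key, size_t *cmps) {
    *cmps = 0;
    for (size_t i = 0; i < n; i++) {
        (*cmps)++;                /* one comparison per element visited */
        if (a[i] == key)
            return (long)i;
    }
    return -1;
}
```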
How do we compare algorithms?
We need to define a number of objective measures.
Compare execution times?
• Not good: times are specific to a particular computer.
Count the number of statements executed?
• Not good: the number of statements varies with the programming language as well as the style of the individual programmer.
Ideal Solution

Express running time as a function of the input size n (i.e., f(n)).
Compare the different functions corresponding to running times.
Such an analysis is independent of machine time, programming style, etc.
Example
Associate a "cost" with each statement.
Find the "total cost" by finding the total number of times each statement is executed.

Algorithm 1              Cost
arr[0] = 0;               c1
arr[1] = 0;               c1
arr[2] = 0;               c1
...
arr[N-1] = 0;             c1
-----------
c1+c1+...+c1 = c1 x N

Algorithm 2              Cost
for(i=0; i<N; i++)        c2
    arr[i] = 0;           c1
-------------
(N+1) x c2 + N x c1 = (c2 + c1) x N + c2
Another Example
Algorithm 3              Cost
sum = 0;                  c1
for(i=0; i<N; i++)        c2
    for(j=0; j<N; j++)    c2
        sum += arr[i][j]; c3
------------
c1 + c2 x (N+1) + c2 x N x (N+1) + c3 x N^2
Asymptotic Notation

O notation: asymptotic "less than":
f(n) = O(g(n)) implies: f(n) "≤" g(n)

Ω notation: asymptotic "greater than":
f(n) = Ω(g(n)) implies: f(n) "≥" g(n)

Θ notation: asymptotic "equality":
f(n) = Θ(g(n)) implies: f(n) "=" g(n)
Big-O Notation

We say fA(n) = 30n+8 is order n, or O(n).
It is, at most, roughly proportional to n.
fB(n) = n^2+1 is order n^2, or O(n^2). It is, at most, roughly proportional to n^2.
In general, any O(n^2) function is faster-growing than any O(n) function.
More Examples …

n^4 + 100n^2 + 10n + 50 is O(n^4)
10n^3 + 2n^2 is O(n^3)
n^3 - n^2 is O(n^3)
Constants:
10 is O(1)
1273 is O(1)
Visualizing Orders of Growth
On a graph, as you go to the right, a faster-growing function eventually becomes larger.
[Plot: value of the function vs. increasing n, for fA(n) = 30n+8 and fB(n) = n^2+1]
Back to Our Example

Algorithm 1              Cost
arr[0] = 0;               c1
arr[1] = 0;               c1
arr[2] = 0;               c1
...
arr[N-1] = 0;             c1
-----------
c1+c1+...+c1 = c1 x N

Algorithm 2              Cost
for(i=0; i<N; i++)        c2
    arr[i] = 0;           c1
-------------
(N+1) x c2 + N x c1 = (c2 + c1) x N + c2

Both algorithms are of the same order: O(N)
Example (cont'd)

Algorithm 3              Cost
sum = 0;                  c1
for(i=0; i<N; i++)        c2
    for(j=0; j<N; j++)    c2
        sum += arr[i][j]; c3
------------
c1 + c2 x (N+1) + c2 x N x (N+1) + c3 x N^2 = O(N^2)
Review: Asymptotic Performance
Asymptotic performance: How does algorithm
behave as the problem size gets very large?
• Running time
• Memory/storage requirements
Remember that we use the RAM model:
• All memory equally expensive to access
• No concurrent operations
• All reasonable instructions take unit time
– Except, of course, function calls
• Constant word size
– Unless we are explicitly manipulating bits
Review: Running Time
Number of primitive steps that are executed
Except for the time of executing a function call, most statements roughly require the same amount of time. We can be more exact if need be.
[Figure: the sets O(f), Θ(f), and Ω(f), with f itself lying in Θ(f)]
Asymptotic Notations
Allow us
to analyze an algorithm’s running time by
identifying its behavior as the input size for the
algorithm increases.
This is also known as an algorithm’s growth
rate.

Order of Growth classification


Asymptotic Notations
Constant
No matter the size of the data it receives, the algorithm takes the same amount of time to run. We denote this as a time complexity of O(1).
Linear
The running duration of a linear algorithm grows in direct proportion to the input size. It processes the input in n operations: O(n).
Quadratic
With two nested loops, or nested linear operations, the algorithm processes the input n^2 times: O(n^2).
Logarithmic
A logarithmic algorithm is one that reduces the size of the input at every step. We denote this time complexity as O(log n). Example: the binary search algorithm.
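Binary search, the logarithmic example just mentioned, halves the remaining range on every step, so at most about log2(n) + 1 iterations run. A sketch over a sorted array (function name illustrative):

```c
#include <stddef.h>

/* Binary search on a sorted array: returns the index of key,
 * or -1 if it is absent. Each iteration halves the range. */
long binary_search(const int a[], size_t n, int key) {
    size_t lo = 0, hi = n;            /* search a[lo..hi-1] */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;  /* avoids overflow of lo+hi */
        if (a[mid] == key) return (long)mid;
        if (a[mid] < key)  lo = mid + 1;  /* discard left half */
        else               hi = mid;      /* discard right half */
    }
    return -1;
}
```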
Asymptotic Notations

• Quasilinear
• The time complexity O(n log n) describes an algorithm that performs roughly n operations of O(log n) time each. Examples: quicksort and merge sort, divide-and-conquer algorithms.
• Non-polynomial time complexity
• An algorithm with time complexity O(n!) often iterates through all permutations of the input elements, e.g. the brute-force search seen in the travelling salesman problem.
• Exponential
• An exponential algorithm often iterates through all subsets of the input elements. It is denoted O(2^n). The larger the data set, the steeper the curve becomes. Example: a brute-force attack.
Order of Growth classification
Asymptotic Notations - Big O Notation
f(n) = O(g(n)) if

f(n) ≤ c·g(n) for all n ≥ n0

where there exist positive constants c > 0 and n0 ≥ 1.

Examples:

f(n) = 3n+5:
3n+5 ≤ 3n+n = 4n for n ≥ 5
so f(n) = O(n)

f(n) = 27n^2+16n:
27n^2+16n ≤ 27n^2+n^2 = 28n^2   {16n ≤ n^2 for n ≥ 16}
so f(n) = O(n^2)

f(n) = 2n^3+6n^2+2n: similarly, f(n) = O(n^3)
Asymptotic Notations - Big Omega (Ω) Notation
f(n) = Ω(g(n)) if

c·g(n) ≤ f(n) for all n ≥ n0

where there exist positive constants c > 0 and n0 ≥ 1.

Examples:

f(n) = 3n+5:
3n ≤ 3n+5 for all n ≥ 1, so f(n) = Ω(n)

f(n) = 27n^2+16n:
27n^2 ≤ 27n^2+16n for all n ≥ 1, so f(n) = Ω(n^2)

f(n) = 2n^3+6n^2+2n: similarly, f(n) = Ω(n^3)
Asymptotic Notations - Big Theta (Θ) Notation
f(n) = Θ(g(n)) if

c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0

where there exist positive constants c1, c2 > 0 and n0 ≥ 1.

Examples:

f(n) = 3n+5:
3n ≤ 3n+5 ≤ 3n+n = 4n for n ≥ 5
c1 = 3 and c2 = 4, so f(n) = Θ(n)

f(n) = 27n^2+16n:
27n^2 ≤ 27n^2+16n ≤ 28n^2 for n ≥ 16   {16n ≤ n^2}
so f(n) = Θ(n^2)
How to Determine Complexities
In general, how can you determine the running time of a piece
of code? The answer is that it depends on what kinds of
statements are used.
Sequences of Statements
Statement 1;
Statement 2;
...
Statement k;
Complexity: Total Time = time(statement 1) + time(statement 2) + ... + time(statement k)

If-then-else statement
If (Condition) {
    sequence of statements 1
}
Else {
    sequence of statements 2
}
Complexity: either sequence 1 will execute, or sequence 2 will execute. Therefore, the worst-case time is the slower of the two possibilities: max(time(sequence 1), time(sequence 2))
How to Determine Complexities
For loops
for (i = 0; i < N; i++) {
    sequence of statements
}
Complexity: the loop executes N times, so the sequence of statements also executes N times, which is O(N) overall.

Nested loops
for (i = 0; i < N; i++) {
    for (j = 0; j < M; j++) {
        sequence of statements
    }
}
Complexity: the outer loop executes N times; every time it executes, the inner loop executes M times. Thus, the complexity is O(N * M).

While loops
x = 0;
A[n] = some array of length n;
while (x != A[i]) {
    i++;
}
Complexity: in the worst case the loop executes N times, so the body also executes N times, which is O(N) overall.
Asymptotic notations
O-notation

Examples – (the '=' symbol is read as 'is' rather than 'equals')
The function 3n+2 = O(n) as 3n+2 ≤ 4n for all n≥2
The function 3n+3 = O(n) as 3n+3 ≤ 4n for all n≥3
The function 100n+6 = O(n) as 100n+6 ≤ 101n for all n≥6
The function 10n^2+4n+2 = O(n^2) as 10n^2+4n+2 ≤ 11n^2 for all n≥5
The function 1000n^2+100n-6 = O(n^2) as 1000n^2+100n-6 ≤ 1001n^2 for all n≥100
The function 6*2^n+n^2 = O(2^n) as 6*2^n+n^2 ≤ 7*2^n for all n≥4
The function 3n+3 = O(n^2) as 3n+3 ≤ 3n^2 for all n≥2
Tabular Method
n    f(n) = 10n^2+4n+2    compare    c·g(n) = 11n^2
1    10+4+2   = 16           >            11
2    40+8+2   = 50           >            44
3    90+12+2  = 104          >            99
4    160+16+2 = 178          >           176
5    250+20+2 = 272          <           275
6    360+24+2 = 386          <           396
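The crossover in the table (10n^2+4n+2 ≤ 11n^2 once n ≥ 5) can be checked numerically; the helper name below is illustrative:

```c
/* f(n) = 10n^2 + 4n + 2 and c*g(n) = 11n^2, as in the table.
 * Returns the smallest n >= 1 with f(n) <= 11n^2, i.e. the n0
 * witnessing 10n^2+4n+2 = O(n^2) with c = 11. */
long big_o_n0(void) {
    for (long n = 1; ; n++)
        if (10 * n * n + 4 * n + 2 <= 11 * n * n)
            return n;
}
```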
Consider job offers from two companies. The first company offers a contract that doubles your salary every year. The second company offers a contract that gives a raise of Rs. 1000 per year. This scenario can be represented with Big-O notation as follows:
For the first company, New salary = Salary × 2^n (where n is total service years), which is denoted in Big-O notation as O(2^n).
For the second company, New salary = Salary + 1000n (where n is total service years), which is denoted in Big-O notation as O(n).
O(1)
Describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set, i.e. its computing time is a constant.
int IsFirstElementNull(char String[])
{
    if(String[0] == '\0')
    {
        return 1;
    }
    return 0;
}
O(N): is called linear time.
Describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set.
int ContainsValue(char String[], int no, char ch)
{
    for(int i = 0; i < no; i++)
    {
        if(String[i] == ch)
        {
            return 1;
        }
    }
    return 0;
}
O(N^k): (k fixed) refers to polynomial time (if k=2 it is called quadratic time; if k=3, cubic time),
which represents an algorithm whose performance is directly proportional to the k-th power of the size of the input data set.
This is common with algorithms that involve nested iterations over the data set. Deeper nested iterations will result in O(N^3), O(N^4), etc.
bool ContainsDuplicates(String[] strings)
{
    for(int i = 0; i < strings.Length; i++)
    {
        for(int j = 0; j < strings.Length; j++)
        {
            if(i == j) // Don't compare with self
                continue;
            if(strings[i] == strings[j])
                return true;
        }
    }
    return false;
}
O(2^N): is called exponential time.
Denotes an algorithm whose growth will double with each additional element in the input data set.
The execution time of an O(2^N) function will quickly become very large.
Big-O Visualization
O(g(n)) is the set of functions with smaller or same order of growth as g(n)
An Example: Insertion Sort
InsertionSort(A, n) {
for i = 2 to n {
key = A[i]
j = i - 1;
while (j > 0) and (A[j] > key) {
A[j+1] = A[j]
j = j - 1
}
A[j+1] = key
}
}
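The pseudocode compiles almost directly to C, using 0-based indexing instead of the 1-based pseudocode:

```c
#include <stddef.h>

void insertion_sort(int A[], size_t n) {
    for (size_t i = 1; i < n; i++) {   /* for i = 2 to n, 0-based */
        int key = A[i];
        size_t j = i;
        while (j > 0 && A[j - 1] > key) {  /* shift larger elements right */
            A[j] = A[j - 1];
            j--;
        }
        A[j] = key;                    /* insert key into the sorted prefix */
    }
}
```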
David Luebke

Insertion Sort
What is the precondition for the inner while loop?
InsertionSort(A, n) {
for i = 2 to n {
key = A[i]
j = i - 1;
while (j > 0) and (A[j] > key) {
A[j+1] = A[j]
j = j - 1
}
A[j+1] = key
}
}
Insertion Sort
How many times will the inner while loop execute?
InsertionSort(A, n) {
for i = 2 to n {
key = A[i]
j = i - 1;
while (j > 0) and (A[j] > key) {
A[j+1] = A[j]
j = j - 1
}
A[j+1] = key
}
}
Insertion Sort
Statement                                  Effort
InsertionSort(A, n) {
  for i = 2 to n {                         c1·n
    key = A[i]                             c2·(n-1)
    j = i - 1;                             c3·(n-1)
    while (j > 0) and (A[j] > key) {       c4·T
      A[j+1] = A[j]                        c5·(T-(n-1))
      j = j - 1                            c6·(T-(n-1))
    }                                      0
    A[j+1] = key                           c7·(n-1)
  }                                        0
}
T = t2 + t3 + … + tn, where ti is the number of while-expression evaluations for the i-th for-loop iteration
Analyzing Insertion Sort
T(n) = c1·n + c2·(n-1) + c3·(n-1) + c4·T + c5·(T - (n-1)) + c6·(T - (n-1)) + c7·(n-1)
     = c8·T + c9·n + c10
What can T be?
Best case -- inner loop body never executed
• ti = 1 → T(n) is a linear function
Worst case -- inner loop body executed for all previous elements
• ti = i → T(n) is a quadratic function
Average case
• ???
Upper Bound Notation
We say InsertionSort's run time is O(n^2)
Properly we should say the run time is in O(n^2)
Read O as "Big-O" (you'll also hear it as "order")
In general a function f(n) is O(g(n)) if there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0
Formally
O(g(n)) = { f(n) : there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0 }
Insertion Sort Is O(n^2)
Proof
Suppose the runtime is a·n^2 + b·n + c
• If any of a, b, and c are less than 0, replace the constant with its absolute value
a·n^2 + b·n + c ≤ (a + b + c)·n^2 + (a + b + c)·n + (a + b + c)
                ≤ 3(a + b + c)·n^2 for n ≥ 1
Let c' = 3(a + b + c) and let n0 = 1
Question
Is InsertionSort O(n^3)?
Is InsertionSort O(n)?
Omega notation (Ω)

Specifically describes the best-case scenario, and can be used to describe the minimum execution time (an asymptotic lower bound) required or the space used (e.g. in memory or on disk) by an algorithm.
The function f(n) = Ω(g(n)) (read as 'f of n is said to be Omega of g of n') if and only if there exist a real, positive constant C and a positive integer n0 such that f(n) ≥ C·g(n) for all n ≥ n0.
Here, n0 must be greater than 0.
Asymptotic notations (cont.)
Ω-notation

Ω(g(n)) is the set of functions with larger or same order of growth as g(n)
Examples –
• The function 3n+2 = Ω(n) as 3n+2 ≥ 3n for all n≥1
• The function 3n+3 = Ω(n) as 3n+3 ≥ 3n for all n≥1
• The function 100n+6 = Ω(n) as 100n+6 ≥ 100n for all n≥1
• The function 10n^2+4n+2 = Ω(n^2) as 10n^2+4n+2 ≥ n^2 for all n≥1
• The function 6*2^n+n^2 = Ω(2^n) as 6*2^n+n^2 ≥ 2^n for all n≥1
• The function 3n+3 = Ω(1)
• The function 10n^2+4n+2 = Ω(n)
• The function 10n^2+4n+2 = Ω(1)
Theta notation (Θ)

Theta specifically describes the average-case scenario, and can be used to describe the average execution time required or the space used by an algorithm.
A description of a function in terms of Θ notation provides a tight bound on its growth.
The function f(n) = Θ(g(n)) (read as 'f of n is said to be Theta of g of n') if and only if there exist positive constants C1, C2, and a positive integer n0 such that C1·g(n) ≤ f(n) ≤ C2·g(n) for all n ≥ n0.
Asymptotic notations (cont.)
Θ-notation

Θ(g(n)) is the set of functions with the same order of growth as g(n)
Examples
• The function 3n+2 = Θ(n) as
3n+2 ≥ 3n for all n≥2 and
3n+2 ≤ 4n for all n≥2
So, c1=3, c2=4 and n0=2.
• The function 3n+3 = Θ(n)
• The function 10n^2+4n+2 = Θ(n^2)
• The function 6*2^n+n^2 = Θ(2^n)
Next Lecture
Recurrence Relations
Recurrence Relations
Definition: Given a recursive algorithm, a recurrence relation for the algorithm is an equation that gives the run time on an input size in terms of the run times of smaller input sizes.
When iterative formulas for T(n) are difficult or impossible to obtain, one can use either
a recursion tree method,
an iteration method, or
a substitution method with induction to get T(n), or a bound U(n) of T(n), where T(n) = Θ(U(n)).
Recursion Tree
A recursion tree is a tree generated by tracing
the execution of a recursive algorithm.
Recurrence Relations
A recurrence relation for the sequence { an } is
an equation that expresses an in terms of one
or more of the previous terms of the
sequence, namely, a0 , a1 ,...an for all integers
n with n ≥ n0 , where n0 is a nonnegative
integer. A sequence is called a solution of a
recurrence relation if its terms satisfy the
recurrence relation.
Let {an} be a sequence that satisfies the recurrence relation an = an−1 + 3 for n = 1, 2, 3, . . . , and suppose that a0 = 2. What are a1, a2, and a3?

Solution: We see from the recurrence relation that
a1 = a0 + 3 = 2 + 3 = 5,
a2 = 5 + 3 = 8,
and
a3 = 8 + 3 = 11.
Let {an} be a sequence that satisfies the recurrence relation an = an−1 − an−2 for n = 2, 3, 4, . . . , and suppose that a0 = 3 and a1 = 5. What are a2 and a3?
Solution: We see from the recurrence relation that
a2 = a1 − a0 = 5 − 3 = 2
and a3 = a2 − a1 = 2 − 5 = −3.
We can find a4, a5, and each successive term in a similar way.
Creating a Recurrence Relation
Fib(a)                                T(n)
{
if(a==1 || a==0)                       1
return 1;
return Fib(a-1) + Fib(a-2);      T(n-1)+T(n-2)
}
Each call does constant work (comparison, comparison, addition) and also calls itself recursively.
The Fibonacci sequence, f0, f1, f2, . . . , is defined by the
initial conditions f0 = 0, f1 = 1, and the recurrence
relation fn = fn−1 + fn−2 for n = 2, 3, 4, . . . .
Find the Fibonacci numbers f2, f3, f4, f5, and f6.
Solution: Because the initial conditions tell us that
f0 = 0 and f1 = 1, using the recurrence relation in
the definition we find that
f2 = f1 + f0 = 1 + 0 = 1,
f3 = f2 + f1 = 1 + 1 = 2,
f4 = f3 + f2 = 2 + 1 = 3,
f5 = f4 + f3 = 3 + 2 = 5,
f6 = f5 + f4 = 5 + 3 = 8.
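The same values can be computed iteratively in C, which also shows why the recurrence needs only the two previous terms:

```c
/* Fibonacci with f0 = 0, f1 = 1, fn = f(n-1) + f(n-2). */
unsigned long fib(unsigned n) {
    if (n == 0) return 0;
    unsigned long prev = 0, cur = 1;   /* f0 and f1 */
    for (unsigned i = 2; i <= n; i++) {
        unsigned long next = prev + cur;  /* apply the recurrence */
        prev = cur;
        cur = next;
    }
    return cur;
}
```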
Solving Recurrence Relations

A recurrence relation is an equation that recursively defines a sequence where the next term is a function of the previous terms (expressing Fn as some combination of Fi with i < n).
Recurrence Relations
Substitution Method
The substitution method for solving recurrences
comprises two steps:
• Guess the form of the solution.
• Use mathematical induction to find the constants and
show that the solution works.
We substitute the guessed solution for the
function when applying the inductive hypothesis
to smaller values; hence the name “substitution
method.”
This method is powerful, but we must be able to
guess the form of the answer in order to apply it.
Example
We guess that the solution to T(n) = 2T(⌊n/2⌋) + n is T(n) = O(n lg n). The substitution method requires us to prove that T(n) ≤ c·n·lg n for an appropriate choice of the constant c > 0. We start by assuming that this bound holds for all positive m < n, in particular for m = ⌊n/2⌋, yielding

T(⌊n/2⌋) ≤ c·⌊n/2⌋·lg(⌊n/2⌋)

Substituting into the recurrence yields

T(n) ≤ 2(c·⌊n/2⌋·lg(⌊n/2⌋)) + n
     ≤ c·n·lg(n/2) + n
     = c·n·lg n − c·n·lg 2 + n
     = c·n·lg n − c·n + n
     ≤ c·n·lg n

where the last step holds as long as c ≥ 1.

T(n) = { 3T(n-1), if n > 0
       { 1,       otherwise
Let us solve using substitution.
T(n) = 3T(n-1)
     = 3(3T(n-2)) = 3^2·T(n-2)
     = 3^3·T(n-3)
     ...
     = 3^n·T(n-n)
     = 3^n·T(0)
     = 3^n
This clearly shows that the complexity of this function is O(3^n).
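The unrolling can be checked by evaluating the recurrence directly and comparing it against the closed form 3^n (helper names illustrative):

```c
/* T(n) = 3T(n-1) for n > 0, T(0) = 1, evaluated recursively. */
unsigned long t_rec(unsigned n) {
    return n == 0 ? 1UL : 3UL * t_rec(n - 1);
}

/* Closed form 3^n obtained by the substitution above. */
unsigned long pow3(unsigned n) {
    unsigned long p = 1;
    while (n--) p *= 3;
    return p;
}
```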
How to solve a linear recurrence relation
Homogeneous Recurrence Relations
Suppose a second-order linear recurrence relation is
Fn = A·Fn−1 + B·Fn−2
where A and B are real numbers.
The characteristic equation for the above recurrence relation is
x^2 − A·x − B = 0
Three cases may occur while finding the roots:
Case 1 − If this equation factors as (x − x1)(x − x2) = 0 and it produces two distinct real roots x1 and x2, then Fn = a·x1^n + b·x2^n is the solution. [Here, a and b are constants]
Case 2 − If this equation factors as (x − x1)^2 = 0 and it produces a single real root x1, then Fn = a·x1^n + b·n·x1^n is the solution.
Case 3 − If the equation produces two distinct complex roots x1 and x2 in polar form, x1 = r∠θ and x2 = r∠(−θ), then Fn = r^n·(a·cos(nθ) + b·sin(nθ)) is the solution.
Problem 1
Solve the recurrence relation Fn = 5Fn−1 − 6Fn−2 where F0 = 1 and F1 = 4.
Solution
The characteristic equation of the recurrence relation is
x^2 − 5x + 6 = 0
So, (x − 3)(x − 2) = 0
Hence, the roots are x1 = 3 and x2 = 2.
The roots are real and distinct, so this is in the form of Case 1.
Hence, the solution is
Fn = a·x1^n + b·x2^n
Here, Fn = a·3^n + b·2^n (as x1 = 3 and x2 = 2)
Therefore,
1 = F0 = a·3^0 + b·2^0 = a + b
4 = F1 = a·3^1 + b·2^1 = 3a + 2b
Solving these two equations, we get a = 2 and b = −1.
Hence, the final solution is
Fn = 2·3^n + (−1)·2^n = 2·3^n − 2^n
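The closed form for Problem 1 can be verified against the recurrence Fn = 5Fn−1 − 6Fn−2 with F0 = 1 and F1 = 4 (helper names illustrative):

```c
/* Direct evaluation of the recurrence from Problem 1. */
long f_rec(int n) {
    if (n == 0) return 1;
    if (n == 1) return 4;
    return 5 * f_rec(n - 1) - 6 * f_rec(n - 2);
}

/* Closed form Fn = 2*3^n - 2^n. */
long f_closed(int n) {
    long p3 = 1, p2 = 1;
    for (int i = 0; i < n; i++) { p3 *= 3; p2 *= 2; }
    return 2 * p3 - p2;
}
```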
Problem 2
Solve the recurrence relation Fn = 10Fn−1 − 25Fn−2 where F0 = 3 and F1 = 17.
Solution
The characteristic equation of the recurrence relation is
x^2 − 10x + 25 = 0
So (x − 5)^2 = 0
Hence, there is a single real root, x1 = 5.
As there is a single real-valued root, this is in the form of Case 2.
Hence, the solution is
Fn = a·x1^n + b·n·x1^n
3 = F0 = a·5^0 + b·0·5^0  and  17 = F1 = a·5^1 + b·1·5^1
Solving these two equations, we get a = 3 and b = 2/5.
Hence, the final solution is Fn = 3·5^n + (2/5)·n·5^n
Problem 3
Solve the recurrence relation Fn = 2Fn−1 − 2Fn−2 where F0 = 1 and F1 = 3.
Solution
The characteristic equation is x^2 − 2x + 2 = 0.
Hence, the roots are x1 = 1 + i and x2 = 1 − i.
In polar form, x1 = r∠θ and x2 = r∠(−θ), where r = √2 and θ = π/4.
The roots are imaginary, so this is in the form of Case 3.
Hence, Fn = (√2)^n·(a·cos(nπ/4) + b·sin(nπ/4))
1 = F0 = (√2)^0·(a·cos(0·π/4) + b·sin(0·π/4)) = a
3 = F1 = (√2)^1·(a·cos(π/4) + b·sin(π/4)) = √2·(a/√2 + b/√2) = a + b
Solving these two equations, we get a = 1 and b = 2.
Hence, the final solution is
Fn = (√2)^n·(cos(nπ/4) + 2·sin(nπ/4))
Example: Fibonacci sequence

fn = fn−1 + fn−2,  f0 = 0, f1 = 1

Characteristic roots: r1 = (1 + √5)/2 and r2 = (1 − √5)/2

Has solution: fn = α1·r1^n + α2·r2^n

α1 = (f1 − f0·r2)/(r1 − r2) = 1/√5

α2 = (f0·r1 − f1)/(r1 − r2) = −1/√5

fn = α1·r1^n + α2·r2^n
   = (1/√5)·((1 + √5)/2)^n − (1/√5)·((1 − √5)/2)^n

Konstantin Busch - LSU


Recursion Tree (Back Substitution)
Analysis: First we find the height of the recursion tree. Observe that a node at depth i reflects a subproblem of size n/4^i. The subproblem size hits n = 1 when n/4^i = 1, i.e. i = log4 n. So the tree has log4 n + 1 levels.
Now we determine the cost of each level of the tree. The number of nodes at depth i is 3^i. Each node at depth i = 0, 1, ..., log4 n − 1 has a cost of c·(n/4^i)^2, so the total cost of level i is 3^i·c·(n/4^i)^2 = (3/16)^i·c·n^2; that is, the sum of effort in each level except the leaves is (3/16)^i·c·n^2.
However, the bottom level is special. Each of the bottom nodes contributes cost T(1), and there are 3^(log4 n) = n^(log4 3) of them.
So the total cost of the entire tree is
T(n) = Σ_{i=0}^{log4 n − 1} (3/16)^i·c·n^2 + Θ(n^(log4 3))
The left term is just the sum of a geometric series. This looks complicated, but we can bound it (from above) by the sum of the infinite series
Σ_{i=0}^{∞} (3/16)^i·c·n^2 = (1/(1 − 3/16))·c·n^2 = (16/13)·c·n^2
Since functions in Θ(n^(log4 3)) are also in O(n^2), this whole expression is O(n^2). Therefore, we can guess that T(n) = O(n^2).
Master Theorem
This method is useful to solve recurrences of the form
T(n) = a·T(n/b) + f(n)
This recurrence describes an algorithm that divides a problem of size n into a subproblems, each of size n/b, and solves them recursively.
Simplified form:
T(n) = a·T(n/b) + Θ(n^k · log^p n)
where a ≥ 1, b > 1, k ≥ 0 and p is a real number.
Case 1: if a > b^k, then T(n) = Θ(n^(log_b a))
Case 2: if a = b^k, then
a. If p > −1, then T(n) = Θ(n^(log_b a) · log^(p+1) n)
b. If p = −1, then T(n) = Θ(n^(log_b a) · log log n)
c. If p < −1, then T(n) = Θ(n^(log_b a))
Case 3: if a < b^k,
a. If p ≥ 0, then T(n) = Θ(n^k · log^p n)
b. If p < 0, then T(n) = Θ(n^k)
Examples
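For instance, merge sort's recurrence fits Case 2 of the simplified form, with a = 2, b = 2, k = 1, p = 0 (so a = b^k and p > −1):

```latex
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + \Theta(n)
\quad\Rightarrow\quad a = 2,\; b = 2,\; k = 1,\; p = 0,\; a = b^{k}
\quad\Rightarrow\quad T(n) = \Theta\!\left(n^{\log_b a}\,\log^{\,p+1} n\right) = \Theta(n \log n)
```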
