Chapter 1
Introduction
The objective of this chapter is to explain the importance of the analysis of algorithms, their notations and relationships, and to solve as many problems as possible. Let us first focus on understanding the basic elements of algorithms and the importance of algorithm analysis, and then slowly move toward the other topics mentioned above. After completing this chapter, you should be able to find the complexity of any given algorithm (especially recursive functions).
1.1 Variables
Before going to the definition of variables, let us relate them to old mathematical equations. All of us have solved many mathematical equations since childhood. As an example, consider the below equation:
x² + 2y − 2 = 1
We don't have to worry about the use of this equation. The important thing that we need to understand is that the equation has names (x and y), which hold values (data). That means the names (x and y) are placeholders for representing data. Similarly, in computer science programming we need something for holding data, and variables is the way to do that.
Depending on the problem, the input size may be measured in different ways. The following are common measures of input size:
· Size of an array
· Polynomial degree
· Number of elements in a matrix
· Number of bits in the binary representation of the input
· Vertices and edges in a graph.
[Figure: commonly used functions arranged in decreasing rate of growth, e.g. 2ⁿ, log(n!), n, log n, log log n]
Below is the list of growth rates you will come across in the following chapters.
Time Complexity    Name                  Example
1                  Constant              Adding an element to the front of a linked list
log n              Logarithmic           Finding an element in a sorted array
n                  Linear                Finding an element in an unsorted array
n log n            Linear Logarithmic    Sorting n items by 'divide-and-conquer' (Mergesort)
n²                 Quadratic             Shortest path between two nodes in a graph
n³                 Cubic                 Matrix Multiplication
2ⁿ                 Exponential           The Towers of Hanoi problem
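To get a quick feel for how differently these functions scale, here is a small illustrative sketch (not from the book) that tabulates each growth rate from the table at a few input sizes:
import math

# Tabulate each growth rate from the table above at a few input sizes.
rates = [
    ("1",       lambda n: 1),
    ("log n",   lambda n: math.log2(n)),
    ("n",       lambda n: n),
    ("n log n", lambda n: n * math.log2(n)),
    ("n^2",     lambda n: n ** 2),
    ("n^3",     lambda n: n ** 3),
    ("2^n",     lambda n: 2 ** n),
]
for n in (10, 20, 40):
    row = ", ".join("%s=%d" % (name, f(n)) for name, f in rates)
    print("n=%d: %s" % (n, row))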
[Figure: rate of growth of f(n) and c·g(n) versus input size n; beyond n₀, c·g(n) stays above f(n)]
Let us see the O-notation in a little more detail. O-notation is defined as O(g(n)) = { f(n): there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }. g(n) is an asymptotic tight upper bound for f(n). Our objective is to give the smallest rate of growth g(n) which is greater than or equal to the given algorithm's rate of growth f(n).
Generally, we discard lower values of n. That means the rate of growth at lower values of n is not important. In the figure, n₀ is the point from which we need to consider the rate of growth for a given algorithm. Below n₀, the rate of growth could be different. n₀ is called the threshold for the given function.
Big-O Visualization
O(g(n)) is the set of functions with smaller or the same order of growth as g(n). For example, O(n²) includes O(1), O(n), O(n log n), etc.
Note: Analyze the algorithms at larger values of n only. What this means is, below n₀ we do not care about the rate of growth.
Big-O Examples
Find upper bound for f(n) = 3n + 8
Solution: 3n + 8 ≤ 4n, for all n ≥ 8
∴ 3n + 8 = O(n) with c = 4 and n₀ = 8
Find upper bound for f(n) = n² + 1
Solution: n² + 1 ≤ 2n², for all n ≥ 1
∴ n² + 1 = O(n²) with c = 2 and n₀ = 1
Find upper bound for f(n) = n⁴ + 100n² + 50
Solution: n⁴ + 100n² + 50 ≤ 2n⁴, for all n ≥ 11
∴ n⁴ + 100n² + 50 = O(n⁴) with c = 2 and n₀ = 11
Find upper bound for f(n) = 2n³ − 2n²
Solution: 2n³ − 2n² ≤ 2n³, for all n ≥ 1
∴ 2n³ − 2n² = O(n³) with c = 2 and n₀ = 1
Find upper bound for f(n) = n
Solution: n ≤ n, for all n ≥ 1
∴ n = O(n) with c = 1 and n₀ = 1
Find upper bound for f(n) = 410
Solution: 410 ≤ 410 · 1, for all n ≥ 1
∴ 410 = O(1) with c = 410 and n₀ = 1
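These bounds can also be sanity-checked numerically. The sketch below (illustrative, not a proof) verifies the constants chosen in the first example:
# Check that 3n + 8 <= c*n holds for all tested n >= n0, with c = 4, n0 = 8.
c, n0 = 4, 8
violations = [n for n in range(n0, 100000) if 3 * n + 8 > c * n]
print("violations:", violations)   # expected: [] (empty), supporting 3n + 8 = O(n)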
No Uniqueness?
There is no unique set of values for c and n₀ in proving the asymptotic bounds. Let us consider 100n + 5 = O(n). For this function there are multiple c and n₀ values possible.
Solution 1: 100n + 5 ≤ 100n + n = 101n ≤ 101n, for all n ≥ 5; n₀ = 5 and c = 101 is a solution.
Solution 2: 100n + 5 ≤ 100n + 5n = 105n ≤ 105n, for all n ≥ 1; n₀ = 1 and c = 105 is also a solution.
[Figure: Ω-notation: rate of growth of f(n) stays above c·g(n) beyond n₀]
Analogously to O-notation, Ω(g(n)) is defined as Ω(g(n)) = { f(n): there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }. g(n) is an asymptotic lower bound for f(n).
Ω Examples
Find lower bound for f(n) = 5n².
Solution: There exist positive constants c and n₀ such that 0 ≤ c·n² ≤ 5n²; this holds with c = 1 and n₀ = 1. ∴ 5n² = Ω(n²) with c = 1 and n₀ = 1.
Θ Notation
[Figure: Θ-notation: rate of growth of f(n) lies between c₁·g(n) and c₂·g(n) beyond n₀]
This notation decides whether the upper and lower bounds of a given function (algorithm) are the same. The average running time of an algorithm is always between the lower bound and the upper bound. If the upper bound (O) and lower bound (Ω) give the same result, then the Θ notation will also have the same rate of growth. As an example, let us assume that f(n) = 10n + n is the expression. Then, its tight upper bound g(n) is O(n). The rate of growth in the best case is g(n) = O(n).
In this case, the rates of growth in the best case and worst case are the same. As a result, the average case will also be the same. For a given function (algorithm), if the rates of growth (bounds) for O and Ω are not the same, then the rate of growth for the Θ case may not be the same. In this case, we need to consider all possible time complexities and take the average of those (for example, for the quick sort average case, refer to the Sorting chapter).
Now consider the definition of Θ notation. It is defined as Θ(g(n)) = { f(n): there exist positive constants c₁, c₂ and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ }. g(n) is an asymptotic tight bound for f(n). Θ(g(n)) is the set of functions with the same order of growth as g(n).
Θ Examples
Find Θ bound for f(n) = n²/2 − n/2
Solution: n²/5 ≤ n²/2 − n/2 ≤ n², for all n ≥ 2
∴ n²/2 − n/2 = Θ(n²) with c₁ = 1/5, c₂ = 1 and n₀ = 2
Prove n ≠ Θ(n²)
Solution: c₁·n² ≤ n ≤ c₂·n² holds only when n ≤ 1/c₁, so no n₀ works for all larger n. ∴ n ≠ Θ(n²).
Important Notes
For analysis (best case, worst case and average), we try to give the upper bound (O) and lower bound (W) and average running time (Q). From
the above examples, it should also be clear that, for a given function (algorithm), getting the upper bound (O) and lower bound (W) and
average running time (Q) may not always be possible. For example, if we are discussing the best case of an algorithm, we try to give the upper
bound (O) and lower bound (W) and average running time (Q).
In the remaining chapters, we generally focus on the upper bound (O) because knowing the lower bound (W) of an algorithm is of no practical
importance, and we use the Q notation if the upper bound (O) and lower bound (W) are the same.
Total time = c₀ + c₁ ∗ n = O(n).
5) An algorithm is O(log n) if it takes a constant time to cut the problem size by a fraction (usually by ½).
As an example, let us consider the following program:
def logarithms(n):
    i = 1
    while i <= n:
        i = i * 2
        print(i)
logarithms(100)
If we observe carefully, the value of i is doubling every time. Initially i = 1, in the next step i = 2, and in subsequent steps i = 4, 8 and so on. Let us assume that the loop is executing some k times. At the k-th step 2^k = n, and at the (k + 1)-th step we come out of the loop. Taking logarithm on both sides gives:
log(2^k) = log n
k log 2 = log n
k = log n  //if we assume log base-2
Total time = O(log n).
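We can confirm this count empirically; the helper below (added for illustration, not from the book) counts the doubling steps and compares them against log₂ n:
import math

def count_doublings(n):
    # Same loop as above, but counting iterations instead of printing.
    i, steps = 1, 0
    while i <= n:
        i = i * 2
        steps += 1
    return steps

for n in (100, 10000, 10**6):
    print(n, count_doublings(n), math.floor(math.log2(n)) + 1)  # columns match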
Similarly, for the case below, the worst-case rate of growth is O(log n). The same discussion holds good for the decreasing sequence as well.
def logarithms(n):
    i = n
    while i >= 1:
        i = i // 2
        print(i)
logarithms(100)
Another example: binary search (finding a word in a dictionary of n pages)
· Look at the center point in the dictionary
· Is the word towards the left or right of the center?
· Repeat the process with the left or right part of the dictionary until the word is found. A minimal sketch of the same idea on a sorted list is shown below.
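Here is a small illustrative sketch (the helper name binary_search is our own, not from the book) that halves the search range on each step, giving O(log n) comparisons:
def binary_search(items, target):
    # items must be sorted; each step halves the remaining range.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid          # found
        elif items[mid] < target:
            lo = mid + 1        # target is in the right half
        else:
            hi = mid - 1        # target is in the left half
    return -1                   # not found

print(binary_search([2, 3, 5, 7, 11, 13], 11))   # prints 4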
Σᵢ₌₁ⁿ i = 1 + 2 + ⋯ + n = n(n + 1)/2 ≈ n²/2
T(n) = 7T(n/3) + n²
T(n) = 7T(n/3) + n² => T(n) = Θ(n²) (Master Theorem Case 3.a)
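To see how such an answer is read off (a worked step in the standard Divide and Conquer master theorem form T(n) = aT(n/b) + f(n)): here a = 7, b = 3, so n^(log_b a) = n^(log₃7) ≈ n^1.77. Since f(n) = n² grows polynomially faster than n^1.77, and the regularity condition holds (7·(n/3)² = (7/9)n² ≤ k·n² with k = 7/9 < 1), Case 3 applies and gives T(n) = Θ(n²).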
T(n) = 4T(n/2) + log n
T(n) = 4T(n/2) + log n => T(n) = Θ(n²) (Master Theorem Case 1)
T(n) = 16T(n/4) + n!
T(n) = 16T(n/4) + n! => T(n) = Θ(n!) (Master Theorem Case 3.a)
T(n) = √2·T(n/2) + log n
T(n) = √2·T(n/2) + log n => T(n) = Θ(√n) (Master Theorem Case 1)
T(n) = 3T(n/2) + n
T(n) = 3T(n/2) + n => T(n) = Θ(n^(log₂3)) (Master Theorem Case 1)
T(n) = 3T(n/3) + √n
T(n) = 3T(n/3) + √n => T(n) = Θ(n) (Master Theorem Case 1)
T(n) = 4T(n/2) + cn
T(n) = 4T(n/2) + cn => T(n) = Θ(n²) (Master Theorem Case 1)
T(n) = 3T(n/4) + n log n
T(n) = 3T(n/4) + n log n => T(n) = Θ(n log n) (Master Theorem Case 3.a)
T(n) = 3T(n/3) + n/2
T(n) = 3T(n/3) + n/2 => T(n) = Θ(n log n) (Master Theorem Case 2.a)
Consider the recurrence T(n) = √n·T(√n) + n, and guess T(n) = Θ(n log n). Proving the lower bound T(n) ≥ c·n·log n:
T(n) = √n·T(√n) + n
 ≥ √n · c·√n·log √n + n
 = c·n·log √n + n
 = c·n·(½)·log n + n
 ≥ c·n·log n
The last inequality assumes only that 1 ≥ c·(½)·log n. This is incorrect if n is sufficiently large, for any constant c. From the above proof, we can see that our guess is incorrect for the lower bound.
From the above discussion, we understood that Θ(n log n) is too big. How about Θ(n)? The lower bound is easy to prove directly:
T(n) = √n·T(√n) + n ≥ n
Now, let us prove the upper bound for this Θ(n):
T(n) = √n·T(√n) + n
 ≤ √n · c·√n + n
 = c·n + n
 = (c + 1)·n
 ≰ c·n
From the above induction, we understood that Θ(n) is too small and Θ(n log n) is too big. So, we need something bigger than n and smaller than n log n. How about n·√(log n)?
Proving the upper bound for n·√(log n):
T(n) = √n·T(√n) + n
 ≤ √n · c·√n·√(log √n) + n
 = (c/√2)·n·√(log n) + n
 ≤ c·n·√(log n)
Proving the lower bound for n·√(log n):
T(n) = √n·T(√n) + n
 ≥ √n · c·√n·√(log √n) + n
 = (c/√2)·n·√(log n) + n
 ≱ c·n·√(log n)
The last step doesn't work. So, Θ(n·√(log n)) doesn't work. What else is between n and n log n? How about n·log log n?
Proving upper bound for n·log log n:
T(n) = √n·T(√n) + n
 ≤ √n · c·√n·log log √n + n
 = c·n·log log n − c·n + n
 ≤ c·n·log log n, if c ≥ 1
Proving lower bound for n·log log n:
T(n) = √n·T(√n) + n
 ≥ √n · c·√n·log log √n + n
 = c·n·log log n − c·n + n
 ≥ c·n·log log n, if c ≤ 1
From the above proofs, we can see that T(n) ≤ c·n·log log n if c ≥ 1, and T(n) ≥ c·n·log log n if c ≤ 1. Technically, we're still missing the base cases in both proofs, but we can be fairly confident at this point that T(n) = Θ(n·log log n).
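As an informal cross-check of this conclusion (not part of the proof), the recurrence can be evaluated numerically and compared against n·log log n; the ratio should stay roughly constant:
import math

def T(n):
    # Direct evaluation of T(n) = sqrt(n)*T(sqrt(n)) + n with a small base case.
    if n <= 2:
        return 1
    r = math.sqrt(n)
    return r * T(r) + n

for n in (2**8, 2**16, 2**32, 2**64):
    print(n, T(n) / (n * math.log2(math.log2(n))))   # roughly constant ratio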
The general approach is to assign an artificial cost to each operation in the sequence, such that the total of the artificial costs for the sequence of operations bounds the total of the real costs for the sequence. This artificial cost is called the amortized cost of an operation. The amortized cost is thus a correct way of understanding the overall running time; note, however, that particular operations can still take longer, so it is not a way of bounding the running time of any individual operation in the sequence.
Amortized analysis is useful when one event in a sequence affects the cost of later events:
· One particular task may be expensive.
· But it may leave the data structure in a state such that the next few operations become easier.
Example: Let us consider an array of n elements from which we want to find the k-th smallest element. We can solve this problem using sorting. After sorting the given array, we just need to return the k-th element from it. The cost of performing the sort (assuming a comparison-based sorting algorithm) is O(n log n). If we perform n such selections then the average cost of each selection is O(n log n / n) = O(log n). This clearly indicates that sorting once is reducing the complexity of subsequent operations.
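A small sketch of this idea (the helper name kth_smallest is our own, added for illustration):
data = [50, 20, 40, 10, 30]

# Pay O(n log n) once...
data.sort()

# ...then each of the n selections is O(1), i.e. O(log n) amortized overall.
def kth_smallest(k):
    return data[k - 1]

print(kth_smallest(1), kth_smallest(3))   # 10 30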
Find the complexity of the function given below.
def function(n):
    i = s = 1
    while s < n:
        i = i + 1
        s = s + i
        print("*")
function(20)
Solution: We can define the 's' terms according to the relation sᵢ = sᵢ₋₁ + i. The value of 'i' increases by 1 for each iteration. The value contained in 's' at the i-th iteration is the sum of the first 'i' positive integers. If k is the total number of iterations taken by the program, then the while loop terminates if:
1 + 2 + ⋯ + k = k(k + 1)/2 > n ⟹ k = O(√n).
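Empirically, the iteration count indeed tracks √(2n), in line with the k(k + 1)/2 > n bound (counter added for illustration):
import math

def iterations(n):
    # Same loop as above, with a counter instead of printing.
    i = s = 1
    count = 0
    while s < n:
        i += 1
        s += i
        count += 1
    return count

for n in (100, 10000, 10**6):
    print(n, iterations(n), round(math.sqrt(2 * n)))   # close agreement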
Find the complexity of the function given below.
def function(n):
    i = 1
    count = 0
    while i * i < n:
        count = count + 1
        i = i + 1
    print(count)
function(20)
Solution: In the above-mentioned function the loop will end when i² > n ⟹ T(n) = O(√n). This is similar to Problem-23.
What is the complexity of the program given below?
def function(n):
    count = 0
    for i in range(n//2, n):
        j = 1
        while j + n//2 <= n:
            k = 1
            while k <= n:
                count = count + 1
                k = k * 2
            j = j + 1
    print(count)
function(20)
Observe the comments in the following function.
def function(n):
    count = 0
    for i in range(n//2, n):    # Outer loop executes n/2 times
        j = 1
        while j + n//2 <= n:    # Middle loop executes n/2 times
            k = 1
            while k <= n:       # Inner loop executes log n times
                count = count + 1
                k = k * 2
            j = j + 1
    print(count)
function(20)
The complexity of the above function is O(n² log n).
What is the complexity of the program given below?
def function(n):
    count = 0
    for i in range(n//2, n):
        j = 1
        while j + n//2 <= n:
            k = 1
            while k <= n:
                count = count + 1
                k = k * 2
            j = j * 2
    print(count)
function(20)
Consider the comments in the following function.
def function(n):
    count = 0
    for i in range(n//2, n):    # Outer loop executes n/2 times
        j = 1
        while j + n//2 <= n:    # Middle loop executes log n times
            k = 1
            while k <= n:       # Inner loop executes log n times
                count = count + 1
                k = k * 2
            j = j * 2
    print(count)
function(20)
The complexity of the above function is O(n log² n).
Find the complexity of the program below.
def function(n):
    count = 0
    for i in range(n//2, n):
        j = 1
        while j + n//2 <= n:
            break
            j = j * 2
    print(count)
function(20)
Consider the comments in the function below.
def function(n):
    count = 0
    for i in range(n//2, n):    # Outer loop executes n/2 times
        j = 1
        while j + n//2 <= n:    # Middle loop has a break statement
            break
            j = j * 2
    print(count)
function(20)
The complexity of the above function is O(n). Even though the inner loop is bounded by n, due to the break statement it is executing only once.
Write a recursive function for the running time T(n) of the function given below. Prove using the iterative method that T(n) = Θ(n³).
def function(n):
    count = 0
    if n <= 0:
        return
    for i in range(0, n):
        for j in range(0, n):
            count = count + 1
    function(n - 3)
    print(count)
function(20)
Consider the comments in the function below:
def function(n):
    count = 0
    if n <= 0:
        return
    for i in range(0, n):       # Outer loop executes n times
        for j in range(0, n):   # Inner loop executes n times
            count = count + 1
    function(n - 3)             # Recursive call
    print(count)
function(20)
The recurrence for this code is clearly T(n) = T(n − 3) + cn² for some constant c > 0, since each call performs n² constant-time operations and calls itself recursively on n − 3. Using the iterative method, we get: T(n) = T(n − 3) + cn² = T(n − 6) + c(n − 3)² + cn² = ⋯ ≈ c(n² + (n − 3)² + (n − 6)² + ⋯), which has Θ(n/3) terms of size Θ(n²). Using the Subtraction and Conquer master theorem, we get T(n) = Θ(n³).
Determine Θ bounds for the recurrence relation: T(n) = 2T(n/2) + n log n.
Solution: Using the Divide and Conquer master theorem, we get T(n) = O(n log² n).
Determine Θ bounds for the recurrence: T(n) = T(n − 1) + n(n − 1), with T(1) = 1.
Solution: Unrolling the recurrence,
T(n) = T(1) + Σᵢ₌₂ⁿ i(i − 1)
T(n) = T(1) + Σᵢ₌₁ⁿ i² − Σᵢ₌₁ⁿ i
T(n) = 1 + n(n + 1)(2n + 1)/6 − n(n + 1)/2
T(n) = Θ(n³)
We can also use the Subtraction and Conquer master theorem for this problem.
Consider the following program:
def Fib(n):
    if n == 0: return 0
    elif n == 1: return 1
    else: return Fib(n - 1) + Fib(n - 2)
print(Fib(3))
The recurrence relation for the running time of this program is: T(n) = T(n − 1) + T(n − 2) + c. Note that T(n) has two recursive calls, indicating a binary tree. Each step recursively calls the program for n reduced by 1 and 2, so the depth of the recurrence tree is O(n). The number of leaves at depth n is 2ⁿ since this is a full binary tree, and each leaf takes at least O(1) computation for the constant factor. The running time is clearly exponential in n, and it is O(2ⁿ).
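To see the exponential blow-up concretely, we can instrument the function with a call counter (added here for illustration):
calls = 0

def fib(n):
    global calls
    calls += 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

for n in (10, 20, 30):
    calls = 0
    fib(n)
    print(n, calls)   # 177, 21891, 2692537: growing exponentially with n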
What is the running time of the following program?
def function(n):
    count = 0
    if n <= 0:
        return
    for i in range(1, n):
        j = 1
        while j < n:
            j = j + i
            count = count + 1
    print(count)
function(20)
Consider the comments in the function below:
def function(n):
    count = 0
    if n <= 0:
        return
    for i in range(1, n):   # Outer loop executes n times
        j = 1               # Inner loop: j increases at the rate of i
        while j < n:
            j = j + i
            count = count + 1
    print(count)
function(20)
In the above code, the inner loop executes n/i times for each value of i. Its running time is Σᵢ₌₁ⁿ (n/i) = n · Σᵢ₌₁ⁿ (1/i) = O(n log n).
What is the complexity of Σᵢ₌₁ⁿ log i?
Solution: Using the logarithmic property, log xy = log x + log y, we can see that this problem is equivalent to
Σᵢ₌₁ⁿ log i = log 1 + log 2 + ⋯ + log n = log(1 × 2 × ⋯ × n) = log(n!) ≤ log(nⁿ) ≤ n log n
∴ the complexity is O(n log n).
Find the complexity of the below function.
def function3(n):
    if n <= 0:
        return
    for i in range(0, 3):    # This loop executes 3 times, recursing with value 0.8n
        function3(0.8 * n)
function3(20)
Solution: The recurrence for this piece of code is T(n) = 3T(0.8n) + O(1) = 3T(4n/5) + O(1). Applying the master theorem, we get T(n) = O(n^(log_{5/4} 3)).
Find the complexity of the recurrence: T(n) = 2T(√n) + log n
Solution: The given recurrence is not in the master theorem format. Let us try to convert it to the master theorem format by assuming n = 2^m. Applying the logarithm on both sides gives log n = m log 2 ⟹ m = log n. Now, the given recurrence becomes:
T(n) = T(2^m) = 2T(√(2^m)) + m = 2T(2^(m/2)) + m.
To make it simple we assume S(m) = T(2^m) ⟹ S(m/2) = T(2^(m/2)) ⟹ S(m) = 2S(m/2) + m.
Applying the master theorem would result in S(m) = O(m log m).
If we substitute m = log n back, T(n) = S(log n) = O((log n)·log log n).
Find the complexity of the recurrence: T(n) = T(√n) + 1
Solution: Applying the logic of Problem-40 gives S(m) = S(m/2) + 1. Applying the master theorem would result in S(m) = O(log m).
Substituting m = log n gives T(n) = S(log n) = O(log log n).
Find the complexity of the recurrence: T(n) = 2T(√n) + 1
Solution: Applying the logic of Problem-40 gives S(m) = 2S(m/2) + 1. Using the master theorem results in S(m) = O(m^(log₂2)) = O(m).
Substituting m = log n gives T(n) = O(log n).
Find the complexity of the below function.
import math
count = 0
def function(n):
    global count
    if n <= 2:
        return 1
    else:
        function(round(math.sqrt(n)))
        count = count + 1
        return count
print(function(200))
Consider the comments in the function below:
import math
count = 0
def function(n):
    global count
    if n <= 2:
        return 1
    else:
        function(round(math.sqrt(n)))   # Recursive call with √n value
        count = count + 1
        return count
print(function(200))
For the above code, the recurrence function can be given as: T(n) = T(√n) + 1. This is the same as that of Problem-41.
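The log log n behavior is easy to observe directly: counting how many times we can take square roots before reaching the base case tracks log₂ log₂ n (illustrative check, helper name is our own):
import math

def sqrt_levels(n):
    # Number of n -> sqrt(n) steps until the base case n <= 2.
    count = 0
    while n > 2:
        n = math.sqrt(n)
        count += 1
    return count

for n in (2**8, 2**16, 2**32, 2**64):
    print(n, sqrt_levels(n), math.log2(math.log2(n)))   # differ by about 1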
Analyze the running time of the following recursive pseudo-code as a function of n.
def function(n):
    if n < 2:
        return
    else:
        counter = 0
        for i in range(0, 8):
            function(n / 2)
        for i in range(0, n**3):
            counter = counter + 1
Consider the comments in the below pseudo-code and call the running time of function(n) T(n).
def function(n):
    if n < 2:                       # Constant time
        return
    else:
        counter = 0                 # Constant time
        for i in range(0, 8):       # This loop executes 8 times, with n halved in every call
            function(n / 2)
        for i in range(0, n**3):    # This loop executes n³ times with a constant-time body
            counter = counter + 1
T(n) can be defined as follows:
T(n) = 1, if n < 2,
     = 8T(n/2) + n³ + 1, otherwise.
Using the master theorem gives: T(n) = Θ(n^(log₂8) · log n) = Θ(n³ log n).
Find the complexity of the below pseudocode.
count = 0
def function(n):
    global count
    count = 1
    if n <= 0:
        return
    for i in range(0, n):
        count = count + 1
    n = n // 2
    function(n)
    print(count)
function(200)
Consider the comments in the pseudocode below:
count = 0
def function(n):
    global count
    count = 1
    if n <= 0:
        return
    for i in range(0, n):   # This loop executes n times
        count = count + 1
    n = n // 2              # Integer division
    function(n)             # Recursive call with n/2 value
    print(count)
function(200)
The recurrence for this function is T(n) = T(n/2) + n. Using the master theorem, we get T(n) = O(n).
What is the running time of the following program?
def function(n):
    for i in range(1, n):
        j = 1
        while j <= n:
            j = j * 2
            print("*")
function(20)
Consider the comments in the below function:
def function(n):
    for i in range(1, n):   # This loop executes n times
        j = 1
        while j <= n:       # This loop executes log n times from our logarithm guideline
            j = j * 2
            print("*")
function(20)
The complexity of the above program is: O(n log n).
What is the running time of the following program?
def function(n):
    for i in range(0, n//3):
        j = 1
        while j <= n:
            j = j + 4
            print("*")
function(20)
Consider the comments in the below function:
def function(n):
    for i in range(0, n//3):    # This loop executes n/3 times
        j = 1
        while j <= n:           # This loop executes n/4 times
            j = j + 4
            print("*")
function(20)
The time complexity of this program is: O(n²).
Find the complexity of the below function:
def function(n):
    if n <= 0:
        return
    print("*")
    function(n // 2)
    function(n // 2)
    print("*")
function(20)
Consider the comments in the below function:
def function(n):
    if n <= 0:          # Constant time
        return
    print("*")          # Constant time
    function(n // 2)    # Recursion with n/2 value
    function(n // 2)    # Recursion with n/2 value
    print("*")
function(20)
The recurrence for this function is: T(n) = 2T(n/2) + 1. Using the master theorem, we get T(n) = O(n).
Find the complexity of the below function:
count = 0
def logarithms(n):
    global count
    i = 1
    while i <= n:
        j = n
        while j > 0:
            j = j // 2
            count = count + 1
        i = i * 2
    return count
print(logarithms(10))
Consider the comments in the below function:
count = 0
def logarithms(n):
    global count
    i = 1
    while i <= n:       # This loop executes log n times from our logarithm guideline
        j = n
        while j > 0:    # This loop executes log n times from our logarithm guideline
            j = j // 2
            count = count + 1
        i = i * 2
    return count
print(logarithms(10))
Time Complexity: O(log n ∗ log n) = O(log² n).
Σᵢ₌₁ⁿ O(n), where O(n) stands for order n, is:
(a) O(n)  (b) O(n²)  (c) O(n³)  (d) O(3n²)  (e) O(1.5n²)
Solution: (b). Σᵢ₌₁ⁿ O(n) = O(n) · Σᵢ₌₁ⁿ 1 = O(n · n) = O(n²).
Find the complexity of the below function:
def function(n):
    for i in range(1, n):               # Executes n times
        j = i
        while j < i * i:                # Executes up to n*n times
            j = j + 1
            if j % i == 0:
                for k in range(0, j):   # Executes j times ≈ n*n times
                    print(" * ")
function(10)
Time Complexity: O(n⁵).
To calculate 9ⁿ, give an algorithm and discuss its complexity.
Solution: Start with 1 and multiply by 9 until reaching 9ⁿ.
Time Complexity: There are n − 1 multiplications and each takes constant time, giving a Θ(n) algorithm.
For Problem-58, can we improve the time complexity?
Solution: Refer to the Divide and Conquer chapter. A sketch of the idea follows.
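A minimal sketch of the divide-and-conquer improvement the solution points to (repeated squaring; the helper name power is our own):
def power(a, n):
    # T(n) = T(n/2) + O(1), so only O(log n) multiplications.
    if n == 0:
        return 1
    half = power(a, n // 2)
    if n % 2 == 0:
        return half * half
    return half * half * a

print(power(9, 5))   # 59049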
Find the complexity of the below function (here j is assumed to be defined elsewhere, as in the book's pseudocode):
def function(n):
    sum = 0
    for i in range(0, n - 1):
        if i > j:
            sum = sum + 1
        else:
            for k in range(0, j):
                sum = sum - 1
    print(sum)
function(10)
Solution: Consider the if−else condition; we can ignore the exact value of j, since in the worst case each iteration does up to O(n) work.
def function(n):
    sum = 0
    for i in range(0, n - 1):       # Executes n times
        if i > j:
            sum = sum + 1           # Executes constant time
        else:
            for k in range(0, j):   # Executes up to n times
                sum = sum - 1
    print(sum)
function(10)
Time Complexity: O(n²).
Solve the following recurrence relation using the recursion tree method: T(n) = T(n/2) + T(2n/3) + n².
Solution: How much work do we do in each level of the recursion tree?
[Figure: recursion tree with root T(n) splitting into T(n/2) and T(2n/3), each of which splits again in the same proportions]
In level 0, we take n² time. At level 1, the two subproblems take time:
(n/2)² + (2n/3)² = (1/4 + 4/9)·n² = (25/36)·n²
At level 2 the four subproblems are of size n/4, n/3, n/3 and 4n/9 respectively. These four subproblems take time:
(n/4)² + (n/3)² + (n/3)² + (4n/9)² = (625/1296)·n² = (25/36)²·n²
Similarly, the total work at level k is (25/36)^k·n², a geometric series with ratio α = 25/36. Hence:
T(n) ≤ n² · Σₖ₌₀^∞ α^k
 = n² · 1/(1 − α)
 = n² · 1/(1 − 25/36)
 = n² · 36/11
 = O(n²)
That is, the first level provides a constant fraction of the total runtime.
Find the time complexity of the recurrence T(n) = T(n/2) + T(n/4) + T(n/8) + n.
Solution: Let us solve this problem by the method of guessing. The total size on each level of the recursion tree is less than n, so we guess that f(n) = n will dominate. Assume for all i < n that c₁·i ≤ T(i) ≤ c₂·i. Then, writing the driving term as kn,
c₁·n/2 + c₁·n/4 + c₁·n/8 + kn ≤ T(n) ≤ c₂·n/2 + c₂·n/4 + c₂·n/8 + kn
c₁·(n/2 + n/4 + n/8) + kn ≤ T(n) ≤ c₂·(n/2 + n/4 + n/8) + kn
c₁·(7n/8) + kn ≤ T(n) ≤ c₂·(7n/8) + kn
If c₁ ≤ 8k and c₂ ≥ 8k, then c₁·n ≤ T(n) ≤ c₂·n. So, T(n) = Θ(n). In general, if you have multiple recursive calls, the sum of the arguments to those calls is less than n (in this case n/2 + n/4 + n/8 < n), and f(n) is reasonably large, a good guess is T(n) = Θ(f(n)).
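As an illustrative numerical check of this guess (integer division used as a stand-in for exact division, memoization added for speed):
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(n/2) + T(n/4) + T(n/8) + n, with a small base case.
    if n < 8:
        return n
    return T(n // 2) + T(n // 4) + T(n // 8) + n

for n in (10**3, 10**6, 10**9):
    print(n, T(n) / n)   # ratio stays bounded, consistent with Theta(n)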
Chapter 2
Recursion and Backtracking
2.1 Introduction
In this chapter, we will look at one of the important topics, "recursion", which will be used in almost every chapter, and also its relative "backtracking".
This definition can easily be converted to a recursive implementation. Here the problem is determining the value of n!, and the subproblem is determining the value of (n − 1)!. In the recursive case, when n is greater than 1, the function calls itself to determine the value of (n − 1)! and multiplies that with n.
In the base case, when n is 0 or 1, the function simply returns 1. This looks like the following:
# calculates factorial of a positive integer
def factorial(n):
    if n == 0: return 1
    return n * factorial(n - 1)
print(factorial(6))
[Figure: recursive calls printFunc(3) → printFunc(2) → printFunc(1), with each call returning 0 to its caller]
Now, let us consider our factorial function. The visualization of the factorial function with n = 4 will look like:
[Figure: recursive expansion 4! = 4 ∗ 3! = 4 ∗ 3 ∗ 2! = 4 ∗ 3 ∗ 2 ∗ 1!, with the base case returning 1 and each level multiplying the returned value on the way back up]
2.6 Recursion versus Iteration
While discussing recursion, the basic question that comes to mind is: which way is better? – iteration or recursion? The answer to this question
depends on what we are trying to do. A recursive approach mirrors the problem that we are trying to solve. A recursive approach makes it
simpler to solve a problem that may not have the most obvious of answers. But recursion adds overhead for each recursive call (needs space
on the stack frame).
Recursion
· Terminates when a base case is reached.
· Each recursive call requires extra space on the stack frame (memory).
· If we get infinite recursion, the program may run out of memory and result in stack overflow.
· Solutions to some problems are easier to formulate recursively.
Iteration
· Terminates when a condition is proven to be false.
· Each iteration does not require extra space.
· An infinite loop could run forever, since no extra memory is being created on each pass.
· Iterative solutions to a problem may not always be as obvious as a recursive solution.
Backtracking can be thought of as a selective tree/graph traversal method. The tree is a way of representing some initial starting position (the root node) and a final goal state (one of the leaves). Backtracking allows us to deal with situations in which a raw brute-force approach would explode into an impossible number of options to consider. Backtracking is a sort of refined brute force. At each node, we eliminate choices that are obviously not possible and proceed to recursively check only those that have potential.
What’s interesting about backtracking is that we back up only as far as needed to reach a previous decision point with an as-yet-unexplored
alternative. In general, that will be at the most recent decision point. Eventually, more and more of these decision points will have been fully
explored, and we will have to backtrack further and further. If we backtrack all the way to our initial state and have explored all alternatives
from there, we can conclude the particular problem is unsolvable. In such a case, we will have done all the work of the exhaustive recursion
and known that there is no viable solution possible.
· Sometimes the best algorithm for a problem is to try all possibilities.
· This is always slow, but there are standard tools that can be used to help.
· Tools: algorithms for generating basic objects, such as binary strings [2ⁿ possibilities for an n-bit string], permutations [n!], combinations [n!/(k!(n − k)!)], general strings [k−ary strings of length n have kⁿ possibilities], etc...
· Backtracking speeds the exhaustive search by pruning.
def bitStrings(n):
    if n == 0: return []
    if n == 1: return ["0", "1"]
    return [digit + bitstring
            for digit in bitStrings(1)
            for bitstring in bitStrings(n - 1)]
print(bitStrings(4))
Let T(n) be the running time of bitStrings(n). Assume each basic operation takes time O(1).
T(n) = c, if n ≤ 1
T(n) = 2T(n − 1) + c, otherwise
Using the Subtraction and Conquer master theorem we get: T(n) = O(2ⁿ). Since there are 2ⁿ bit strings to produce, this means the algorithm for generating bit strings is optimal.
Generate all the strings of length n drawn from 0 … k − 1.
Solution: Let us assume we keep the current k-ary string in an array A[0 … n − 1]. Call the function baseKStrings(n, k):
def rangeToList(k):
    result = []
    for i in range(0, k):
        result.append(str(i))
    return result
def baseKStrings(n, k):
    if n == 0: return []
    if n == 1: return rangeToList(k)
    return [digit + bitstring
            for digit in baseKStrings(1, k)
            for bitstring in baseKStrings(n - 1, k)]
print(baseKStrings(4, 3))
Let T(n) be the running time of baseKStrings(n, k). Then,
T(n) = c, if n ≤ 1
T(n) = kT(n − 1) + c, otherwise
Using the Subtraction and Conquer master theorem we get: T(n) = O(kⁿ).
For more problems, refer to the String Algorithms chapter.
Solve the recurrence T(n) = 2T(n − 1) + 2ⁿ.
Solution: At each level of the recurrence tree, the number of problems doubles from the previous level, while the amount of work being done in each problem is half of the previous level. Formally, the i-th level has 2ⁱ problems, each requiring 2^(n−i) work. Thus the i-th level requires exactly 2ⁿ work. The depth of this tree is n, because at the i-th level, the originating call will be T(n − i). Thus, the total complexity for T(n) is T(n·2ⁿ).
Problem: Given a matrix, each cell of which may be 1 or 0. The filled cells that are connected form a region. Two cells are said to be connected if they are adjacent to each other horizontally, vertically or diagonally. There may be several regions in the matrix. How do you find the largest region (in terms of number of cells) in the matrix?
Sample Input:
11000
01100
00101
10001
01011
Sample Output: 5
The simplest idea is: for each location traverse in all 8 directions, and in each of those directions keep track of the maximum region found.
def getval(A, i, j, L, H):
    if i < 0 or i >= L or j < 0 or j >= H:
        return 0
    else:
        return A[i][j]
def findMaxBlock(A, r, c, L, H, size):
    global maxsize
    global cntarr
    if r >= L or c >= H:
        return
    cntarr[r][c] = 1
    size += 1
    if size > maxsize:
        maxsize = size
    # search in eight directions
    direction = [[-1,0],[-1,-1],[0,-1],[1,-1],[1,0],[1,1],[0,1],[-1,1]]
    for i in range(0, 8):   # all 8 directions (range(0, 7) would miss one)
        newi = r + direction[i][0]
        newj = c + direction[i][1]
        val = getval(A, newi, newj, L, H)
        if val > 0 and cntarr[newi][newj] == 0:
            findMaxBlock(A, newi, newj, L, H, size)
    cntarr[r][c] = 0
def getMaxOnes(A, rmax, colmax):
    global maxsize
    global size
    global cntarr
    for i in range(0, rmax):
        for j in range(0, colmax):
            if A[i][j] == 1:
                findMaxBlock(A, i, j, rmax, colmax, 0)
    return maxsize
zarr = [[1,1,0,0,0],[0,1,1,0,1],[0,0,0,1,1],[1,0,0,1,1],[0,1,0,1,1]]
rmax = 5
colmax = 5
# initialize the globals used above and run the search (completing the truncated listing)
maxsize = 0
size = 0
cntarr = [[0] * colmax for _ in range(rmax)]
print(getMaxOnes(zarr, rmax, colmax))