
Algorithm Design

Gnanavel R

BITS Pilani Asst. Prof


Pilani|Dubai|Goa|Hyderabad

1
BITS Pilani
Pilani|Dubai|Goa|Hyderabad

Dynamic Programming

2
Dynamic programming

Dynamic programming solves problems by combining the results of solved, overlapping subproblems.

Example: calculate Fibonacci numbers.

Fibonacci numbers are the numbers that follow the Fibonacci sequence, starting from the base cases
F(1) = 1,
F(2) = 1,
F(n) = F(n-1) + F(n-2) for n larger than 2.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example
Solving F(i) for a positive number i smaller than n, for example F(6), requires solving a tree of smaller subproblems: F(5) and F(4), which in turn need F(4), F(3), F(3), F(2), and so on, with many of them repeated.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Overlapping subproblems

In the previous example, some subproblems are calculated multiple times.
By caching the results, solving the same subproblem a second time becomes effortless.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Basic Idea

Basic Idea (version 1): What we want to do is take our


problem and somehow break it down into a reasonable
number of subproblems in such a way that we can use
optimal solutions to the smaller subproblems to give us
optimal solutions to the larger ones.

Unlike divide-and-conquer (as in mergesort or quicksort) it


is OK if our subproblems overlap, so long as there are
not too many of them.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Basic Idea

Basic Idea (version 2): Suppose you have a recursive


algorithm for some problem that gives you a really bad
recurrence like T(n) = 2T(n−1)+n.
However, suppose that many of the subproblems you reach
as you go down the recursion tree are the same.
Then you can hope to get a big savings if you store your
computations so that you only compute each different
subproblem once.
You can store these solutions in an array or hash table.
This view of Dynamic Programming is often called
memoizing.
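To make memoizing concrete, here is a minimal Python sketch (illustrative, not from the slides); the dictionary cache plays the role of the array or hash table mentioned above:

# Memoized (top-down) Fibonacci: each F(i) is computed once and cached.
def fib(n, memo=None):
    if memo is None:
        memo = {}
    if n <= 2:
        return 1                      # base cases F(1) = F(2) = 1, as on the slide
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(6))   # 8
print(fib(50))  # fast, because only O(n) distinct subproblems are solved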

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Solving Subproblems

There are two ways of solving subproblems while caching the results:
Top-down approach: start with the original problem (F(n) in this case), and recursively solve smaller and smaller cases (F(i)) until we have all the ingredients of the original problem.

Bottom-up approach: start with the base cases (F(1) and F(2) in this case), and solve larger and larger cases.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Top Down Fibonacci algorithm
Let's try to understand this by taking an example of Fibonacci numbers.

Fibonacci (n) = 1; if n = 0
Fibonacci (n) = 1; if n = 1
Fibonacci (n) = Fibonacci(n-1) + Fibonacci(n-2)

So, the first few numbers in this series will be: 1, 1, 2, 3, 5, 8, 13, 21...
and so on!
Algorithm Fib(n), plain recursive (top-down, no caching):

int fib (int n) {
    if (n < 2)
        return 1;
    return fib(n-1) + fib(n-2);
}

The same computation as an iterative, table-filling routine:

void fib (int n) {
    fibresult[0] = 0;
    fibresult[1] = 1;
    for (int i = 2; i <= n; i++)
        fibresult[i] = fibresult[i-1] + fibresult[i-2];
}

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Bottom UP

Algorithm Fib(n)
int fib (int n) {
    F[0] = 0; F[1] = 1;
    if (n <= 1)
        return F[n];
    for (i = 2; i <= n; i++)
        F[i] = F[i-2] + F[i-1];
    return F[n];
}

10

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


MATRIX-MULTIPLY(A, B)
    if columns[A] ≠ rows[B]
        then error "incompatible dimensions"
    else for i ← 1 to rows[A]
            do for j ← 1 to columns[B]
                do C[i, j] ← 0
                   for k ← 1 to columns[A]
                       do C[i, j] ← C[i, j] + A[i, k] * B[k, j]

[Figure: C[i, j] combines row i of A with column j of B; A is rows[A] x columns[A], B is rows[B] x columns[B], and C is rows[A] x columns[B].]
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Matrix-Chain Multiplication
In what order should we multiply the matrices?
A1 · A2 · … · An
Parenthesize the product to get the order in which matrices are multiplied
E.g.: A1 · A2 · A3 = ((A1 · A2) · A3)
                   = (A1 · (A2 · A3))
Which one of these orderings should we choose?
– The order in which we multiply the matrices has a significant impact on the cost of evaluating the product

17

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example
A1 · A2 · A3
A1: 10 x 100
A2: 100 x 5
A3: 5 x 50
1. ((A1 · A2) · A3):  A1 · A2 = 10 x 100 x 5 = 5,000 multiplications (result is 10 x 5)
                      ((A1 · A2) · A3) = 10 x 5 x 50 = 2,500
                      Total: 7,500 scalar multiplications
2. (A1 · (A2 · A3)):  A2 · A3 = 100 x 5 x 50 = 25,000 multiplications (result is 100 x 50)
                      (A1 · (A2 · A3)) = 10 x 100 x 50 = 50,000
                      Total: 75,000 scalar multiplications
One order of magnitude difference!
18

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Matrix-Chain Multiplication:
Problem Statement
Given a chain of matrices A1, A2, …, An, where Ai has dimensions pi-1 x pi, fully parenthesize the product A1 · A2 · … · An in a way that minimizes the number of scalar multiplications.

    A1          A2         …      Ai           Ai+1        …      An
  p0 x p1     p1 x p2            pi-1 x pi    pi x pi+1          pn-1 x pn

19

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Enumeration Approach
Matrix Chain-Product Alg.:
– Try all possible ways to parenthesize A = A0*A1*…*An-1
– Calculate the number of ops for each one
– Pick the one that is best
Running time:
– The number of parenthesizations is equal to the number of binary trees with n nodes
– This is exponential!
– It is given by the Catalan numbers, which grow roughly like 4^n.
– This is a terrible algorithm!

20
Dynamic Programming
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Use Dynamic Programming

1. Characterize the structure of an optimal solution


2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution bottom-up
4. Construct an optimal solution from the computed
information

21

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


1. The Structure of an Optimal
Parenthesization
Notation:
Ai…j = Ai · Ai+1 · … · Aj,  i <= j

Suppose that an optimal parenthesization of Ai…j splits the product between Ak and Ak+1, where i <= k < j

Ai…j = Ai · Ai+1 · … · Aj
     = (Ai · Ai+1 · … · Ak) · (Ak+1 · … · Aj)
     = Ai…k · Ak+1…j

22

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Optimal Substructure
Ai…j = Ai…k · Ak+1…j

The parenthesization of the “prefix” Ai…k must itself be an optimal parenthesization

An optimal solution to an instance of the matrix-chain


multiplication contains within it optimal solutions to
subproblems

23

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


2. A Recursive Solution
Subproblem:

determine the minimum cost of parenthesizing Ai…j = Ai · Ai+1 · … · Aj for 1 <= i <= j <= n

Let m[i, j] = the minimum number of multiplications needed to compute Ai…j
– full problem (A1..n): m[1, n]
– i = j: Ai…i = Ai, so m[i, i] = 0, for i = 1, 2, …, n

24

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


2. A Recursive Solution
Consider the subproblem of parenthesizing Ai…j = Ai · Ai+1 · … · Aj for 1 <= i <= j <= n.
Assume that the optimal parenthesization splits the product Ai · Ai+1 · … · Aj at k (i <= k < j):

    Ai…j = Ai…k · Ak+1…j,   for some i <= k < j

    m[i, j] = m[i, k] + m[k+1, j] + pi-1 pk pj

where m[i, k] is the minimum number of multiplications to compute Ai…k, m[k+1, j] is the minimum number to compute Ak+1…j, and pi-1 pk pj is the number of multiplications to compute Ai…k · Ak+1…j.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


2. A Recursive Solution (cont.)
m[i, j] = m[i, k] + m[k+1, j] + pi-1 pk pj

We do not know the value of k
– There are j – i possible values for k: k = i, i+1, …, j-1

Minimizing the cost of parenthesizing the product Ai · Ai+1 · … · Aj becomes:

    m[i, j] = 0                                                              if i = j
    m[i, j] = min over i <= k < j of { m[i, k] + m[k+1, j] + pi-1 pk pj }    if i < j

26

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Algorithm
Matrix-Chain-Order(p)
1 n ← length[p] − 1
2 for i ← 1 to n
3 do m[i, i] ← 0
4 for l ← 2 to n // l is the chain length.
5 do for i ← 1 to n − l + 1
6 do j ← i + l − 1
7 m[i, j] ← ∞
8 for k ← i to j − 1
9 do q ← m[i, k] + m[k + 1, j] + pi−1pkpj
10 if q < m[i, j]
11 then m[i, j] ← q
12 s[i, j] ← k
13 return m and s
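For experimentation, the pseudocode above can be transcribed almost line for line into Python; this sketch uses 0-based lists and a plain dimension list p, which are the only assumptions added:

import sys

# p[i-1] x p[i] are the dimensions of matrix A_i, for i = 1..n
def matrix_chain_order(p):
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: min scalar multiplications for A_i..A_j
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: best split point k
    for l in range(2, n + 1):                   # l is the chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = sys.maxsize
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

m, s = matrix_chain_order([10, 100, 5, 50])     # the A1 x A2 x A3 example from the slides
print(m[1][3])                                  # 7500 scalar multiplications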

27

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


3. Computing the Optimal Costs

    m[i, j] = 0                                                              if i = j
    m[i, j] = min over i <= k < j of { m[i, k] + m[k+1, j] + pi-1 pk pj }    if i < j

How do we fill in the tables m[1..n, 1..n]?


– Determine which entries of the table are used in computing m[i, j]

Ai…j = Ai…k Ak+1…j


– Subproblems’ size is one less than the original size

– Idea: fill in m such that it corresponds to solving problems of


increasing length

28

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


4. Construct the Optimal Solution
In a similar matrix s we keep the optimal values of k:
s[i, j] = a value of k such that an optimal parenthesization of Ai..j splits the product between Ak and Ak+1.

[Figure: the n x n table s, indexed by i and j, with the split point k recorded for every subchain Ai..j.]

29

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


4. Construct the Optimal Solution
s[1, n] is associated with the entire product A1..n
– The final matrix multiplication will be split at k = s[1, n]:
      A1..n = A1..s[1, n] · As[1, n]+1..n
– For each subproduct, recursively find the corresponding value of k that results in an optimal parenthesization (a small sketch follows this slide).

30

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
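A small sketch of how the s table can be used to print the chosen parenthesization; it assumes an s table indexed from 1, such as the one returned by the matrix_chain_order sketch earlier:

def print_optimal_parens(s, i, j):
    # Recursively print the optimal parenthesization of A_i..A_j using the split table s.
    if i == j:
        return "A" + str(i)
    k = s[i][j]
    return "(" + print_optimal_parens(s, i, k) + " x " + print_optimal_parens(s, k + 1, j) + ")"

print(print_optimal_parens(s, 1, 3))   # for the 10x100, 100x5, 5x50 example: ((A1 x A2) x A3)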


Elements of Dynamic Programming
Optimal Substructure
– An optimal solution to a problem contains within it an optimal solution to subproblems
– The optimal solution to the entire problem is built in a bottom-up manner from optimal solutions to subproblems

Overlapping Subproblems
– If a recursive algorithm revisits the same subproblems over and over, the problem has overlapping subproblems

31

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Parameters of Optimal Substructure
How many subproblems are used in an optimal solution for
the original problem
– Matrix multiplication: Two subproblems (subproducts Ai..k, Ak+1..j)

How many choices we have in determining which


subproblems to use in an optimal solution
– Matrix multiplication: j - i choices for k (splitting the product)

32

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Parameters of Optimal Substructure
Intuitively, the running time of a dynamic programming
algorithm depends on two factors:
– Number of subproblems overall
– How many choices we look at for each subproblem

Matrix multiplication:
– Θ(n^2) subproblems (1 <= i <= j <= n)
– At most n-1 choices per subproblem

Θ(n^3) overall
33

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Knapsack problem

Given some items, pack the knapsack to get


the maximum total value. Each item has some
weight and some value. Total weight that we can
carry is no more than some fixed number W.
So we must consider weights of items as well as
their values.

Item # Weight Value


1 1 8
2 3 6
3 5 5
Knapsack problem

There are two versions of the problem:


1. “0-1 knapsack problem”
• Items are indivisible; you either take an item or not. Some special instances
can be solved with dynamic programming

2. “Fractional knapsack problem”


• Items are divisible: you can take any fraction of an item
0-1 Knapsack problem

• Given a knapsack with maximum capacity W, and a set S


consisting of n items
• Each item i has some weight wi and benefit value bi (all wi and
W are integer values)
• Problem: How to pack the knapsack to achieve maximum total
value of packed items?
0-1 Knapsack problem

• Problem, in other words, is to find

    max  Σ_{i ∈ T} b_i    subject to    Σ_{i ∈ T} w_i <= W

The problem is called a “0-1” problem, because each item must be entirely accepted or rejected.
0-1 Knapsack problem: brute-force approach

Let’s first solve this problem with a straightforward algorithm


• Since there are n items, there are 2^n possible combinations of items.
• We go through all combinations and find the one with maximum value and with total weight less than or equal to W
• Running time will be O(2^n)
0-1 Knapsack problem:
dynamic programming approach
• We can do better with an algorithm based on dynamic
programming

• We need to carefully identify the subproblems


Defining a Subproblem

• Given a knapsack with maximum capacity W, and a set S


consisting of n items
• Each item i has some weight wi and benefit value bi (all wi and
W are integer values)
• Problem: How to pack the knapsack to achieve maximum total
value of packed items?
Defining a Subproblem
• We can do better with an algorithm based on dynamic
programming

• We need to carefully identify the subproblems


Let’s try this:
If items are labeled 1..n, then a subproblem
would be to find an optimal solution for
Sk = {items labeled 1, 2, .. k}
Defining a Subproblem

If items are labeled 1..n, then a subproblem would be to find


an optimal solution for Sk = {items labeled 1, 2, .. k}
Defining a Subproblem

• Let’s add another parameter: w, which will represent the


maximum weight for each subset of items

• The subproblem then will be to compute V[k,w], i.e., to find an


optimal solution for Sk = {items labeled 1, 2, .. k} in a
knapsack of size w
Recursive Formula for subproblems

• The subproblem will then be to compute V[k,w], i.e., to find an


optimal solution for Sk = {items labeled 1, 2, .. k} in a
knapsack of size w

• Assuming knowing V[i, j], where i=0,1, 2, … k-1,


j=0,1,2, …w, how to derive V[k,w]?
Recursive Formula for subproblems (continued)

Recursive formula for subproblems:

    V[k, w] = V[k-1, w]                                          if wk > w
    V[k, w] = max{ V[k-1, w], V[k-1, w - wk] + bk }              otherwise

It means that the best subset of Sk that has total weight <= w is either:
1) the best subset of Sk-1 that has total weight <= w, or
2) the best subset of Sk-1 that has total weight <= w - wk, plus the item k
Recursive Formula

    V[k, w] = V[k-1, w]                                          if wk > w
    V[k, w] = max{ V[k-1, w], V[k-1, w - wk] + bk }              otherwise

• The best subset of Sk that has total weight <= w either contains item k or not.
• First case: wk > w. Item k can’t be part of the solution, since if it was, the total weight would be > w, which is unacceptable.
• Second case: wk <= w. Then the item k can be in the solution, and we choose the case with greater value.
0-1 Knapsack Algorithm

for w = 0 to W
V[0,w] = 0
for i = 1 to n
V[i,0] = 0
for i = 1 to n
for w = 0 to W
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
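The same table-filling algorithm as a short Python sketch; the 0-based item lists w and b are the only departure from the pseudocode above:

def knapsack_01(w, b, W):
    # V[i][x]: best value using the first i items with capacity x.
    n = len(w)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for x in range(W + 1):
            if w[i - 1] <= x and b[i - 1] + V[i - 1][x - w[i - 1]] > V[i - 1][x]:
                V[i][x] = b[i - 1] + V[i - 1][x - w[i - 1]]   # item i is part of the solution
            else:
                V[i][x] = V[i - 1][x]
    return V

V = knapsack_01([2, 3, 4, 5], [3, 4, 5, 6], 5)   # the example used on the next slides
print(V[4][5])                                   # 7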
Running time

for w = 0 to W          // O(W)
    V[0,w] = 0
for i = 1 to n          // O(n)
    V[i,0] = 0
for i = 1 to n          // repeated n times
    for w = 0 to W      // O(W)
        < the rest of the code >

What is the running time of this algorithm?
O(n*W)
Remember that the brute-force algorithm takes O(2^n).
Example

Let’s run our algorithm on the


following data:

n = 4 (# of elements)
W = 5 (max weight)
Elements (weight, benefit):
(2,3), (3,4), (4,5), (5,6)
Example (2)

i\W 0 1 2 3 4 5
0 0 0 0 0 0 0
1
2
3
4

for w = 0 to W
V[0,w] = 0
Example (3)

i\W 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0
2 0
3 0
4 0

for i = 1 to n
V[i,0] = 0
Example (4) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=1 4: (5,6)
0 0 0 0 0 0 0 bi=3
1 0 0
wi=2
2 0
w=1
3 0
w-wi =-1
4 0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (5) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=1 4: (5,6)
0 0 0 0 0 0 0 bi=3
1 0 0 3
wi=2
2 0
w=2
3 0
w-wi =0
4 0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (6) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=1 4: (5,6)
0 0 0 0 0 0 0 bi=3
1 0 0 3 3
wi=2
2 0
w=3
3 0
w-wi =1
4 0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (7) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=1 4: (5,6)
0 0 0 0 0 0 0 bi=3
1 0 0 3 3 3
wi=2
2 0
w=4
3 0
w-wi =2
4 0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (8) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=1 4: (5,6)
0 0 0 0 0 0 0 bi=3
1 0 0 3 3 3 3
wi=2
2 0
w=5
3 0
w-wi =3
4 0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (9) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=2 4: (5,6)
0 0 0 0 0 0 0 bi=4
1 0 0 3 3 3 3
wi=3
2 0 0
w=1
3 0
w-wi =-2
4 0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (10) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=2 4: (5,6)
0 0 0 0 0 0 0 bi=4
1 0 0 3 3 3 3
wi=3
2 0 0 3
w=2
3 0
w-wi =-1
4 0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (11) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=2 4: (5,6)
0 0 0 0 0 0 0 bi=4
1 0 0 3 3 3 3
wi=3
2 0 0 3 4
w=3
3 0
w-wi =0
4 0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (12) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=2 4: (5,6)
0 0 0 0 0 0 0 bi=4
1 0 0 3 3 3 3
wi=3
2 0 0 3 4 4
w=4
3 0
w-wi =1
4 0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (13) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=2 4: (5,6)
0 0 0 0 0 0 0 bi=4
1 0 0 3 3 3 3
wi=3
2 0 0 3 4 4 7
w=5
3 0
w-wi =2
4 0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (14) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=3 4: (5,6)
0 0 0 0 0 0 0 bi=5
1 0 0 3 3 3 3
wi=4
2 0 0 3 4 4 7
w= 1..3
3 0 0 3 4
4 0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (15) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=3 4: (5,6)
0 0 0 0 0 0 0 bi=5
1 0 0 3 3 3 3
wi=4
2 0 0 3 4 4 7
w= 4
3 0 0 3 4 5
w- wi=0
4 0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (16) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=3 4: (5,6)
0 0 0 0 0 0 0 bi=5
1 0 0 3 3 3 3
wi=4
2 0 0 3 4 4 7
w= 5
3 0 0 3 4 5 7
w- wi=1
4 0
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (17) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=4 4: (5,6)
0 0 0 0 0 0 0 bi=6
1 0 0 3 3 3 3
wi=5
2 0 0 3 4 4 7
w= 1..4
3 0 0 3 4 5 7
4 0 0 3 4 5
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Example (18)     Items: 1: (2,3)   2: (3,4)   3: (4,5)   4: (5,6)
Current step: i = 4, bi = 6, wi = 5, w = 5, w - wi = 0

i\W   0   1   2   3   4   5
 0    0   0   0   0   0   0
 1    0   0   3   3   3   3
 2    0   0   3   4   4   7
 3    0   0   3   4   5   7
 4    0   0   3   4   5   7
if wi <= w // item i can be part of the solution
if bi + V[i-1,w-wi] > V[i-1,w]
V[i,w] = bi + V[i-1,w- wi]
else
V[i,w] = V[i-1,w]
else V[i,w] = V[i-1,w] // wi > w
Comments

• This algorithm only finds the max possible value that can be
carried in the knapsack
– i.e., the value in V[n,W]
• To know the items that make this maximum value, an addition
to this algorithm is necessary
How to find actual Knapsack Items

• All of the information we need is in the table.


• V[n,W] is the maximal value of items that can be placed in the
Knapsack.
• Let i = n and k = W
while i, k > 0
    if V[i,k] != V[i−1,k] then
        mark the i-th item as in the knapsack
        i = i−1, k = k − wi
    else
        i = i−1    // the i-th item is not in the knapsack
                   // (could it still be in some other optimally packed knapsack?)
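The same trace-back as a Python sketch, assuming the V table produced by the knapsack sketch earlier:

def knapsack_items(V, w, W):
    # Walk back from V[n][W]; whenever the value changes between rows i-1 and i, item i was taken.
    items, i, k = [], len(w), W
    while i > 0 and k > 0:
        if V[i][k] != V[i - 1][k]:
            items.append(i)           # item i (1-based) is in the knapsack
            k -= w[i - 1]
        i -= 1
    return sorted(items)

print(knapsack_items(V, [2, 3, 4, 5], 5))   # [1, 2]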
Finding the Items Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=4 4: (5,6)
0 0 0 0 0 0 0 k= 5
1 0 0 3 3 3 3 bi=6
2 0 0 3 4 4 7 wi=5
3 0 0 3 4 5 7 V[i,k] = 7
V[i−1,k] =7
4 0 0 3 4 5 7
i=n, k=W
while i,k > 0
if V[i,k]  V[i−1,k] then
mark the ith item as in the knapsack
i = i−1, k = k-wi
else
i = i−1
Finding the Items (2) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=4 4: (5,6)
0 0 0 0 0 0 0 k= 5
1 0 0 3 3 3 3 bi=6
2 0 0 3 4 4 7 wi=5
3 0 0 3 4 5 7 V[i,k] = 7
V[i−1,k] =7
4 0 0 3 4 5 7
i=n, k=W
while i,k > 0
if V[i,k]  V[i−1,k] then
mark the ith item as in the knapsack
i = i−1, k = k-wi
else
i = i−1
Finding the Items (3) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=3 4: (5,6)
0 0 0 0 0 0 0 k= 5
1 0 0 3 3 3 3 bi=5
2 0 0 3 4 4 7 wi=4
3 0 0 3 4 5 7 V[i,k] = 7
V[i−1,k] =7
4 0 0 3 4 5 7
i=n, k=W
while i,k > 0
if V[i,k]  V[i−1,k] then
mark the ith item as in the knapsack
i = i−1, k = k-wi
else
i = i−1
Finding the Items (4) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=2 4: (5,6)
0 0 0 0 0 0 0 k= 5
1 0 0 3 3 3 3 bi=4
2 0 0 3 4 4 7 wi=3
3 0 0 3 4 5 7 V[i,k] = 7
V[i−1,k] =3
4 0 0 3 4 5 7
k − wi=2
i=n, k=W
while i,k > 0
if V[i,k]  V[i−1,k] then
mark the ith item as in the knapsack
i = i−1, k = k-wi
else
i = i−1
Finding the Items (5) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=1 4: (5,6)
0 0 0 0 0 0 0 k= 2
1 0 0 3 3 3 3 bi=3
2 0 0 3 4 4 7 wi=2
3 0 0 3 4 5 7 V[i,k] = 3
V[i−1,k] =0
4 0 0 3 4 5 7
k − wi=0
i=n, k=W
while i,k > 0
if V[i,k]  V[i−1,k] then
mark the ith item as in the knapsack
i = i−1, k = k-wi
else
i = i−1
Finding the Items (6) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 i=0 4: (5,6)
0 0 0 0 0 0 0 k= 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7 The optimal
knapsack should
4 0 0 3 4 5 7
contain {1, 2}
i=n, k=W
while i,k > 0
if V[i,k]  V[i−1,k] then
mark the nth item as in the knapsack
i = i−1, k = k-wi
else
i = i−1
Finding the Items (7) Items:
1: (2,3)
2: (3,4)
3: (4,5)
i\W 0 1 2 3 4 5 4: (5,6)
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7 The optimal
knapsack should
4 0 0 3 4 5 7
contain {1, 2}
i=n, k=W
while i,k > 0
if V[i,k]  V[i−1,k] then
mark the nth item as in the knapsack
i = i−1, k = k-wi
else
i = i−1
Thank You!!

78

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Algorithm Design
Gnanavel R

BITS Pilani Asst. Prof


Pilani|Dubai|Goa|Hyderabad

1
BITS Pilani
Pilani|Dubai|Goa|Hyderabad

Huffman Coding

2
Text Compression

• Given a string X, efficiently encode X into a


smaller bit string Y.
• Easy Solution
Since there are only 26 letters in the English language and there are 32 bit strings of length 5, each letter can be represented by a bit string of length 5.
• This is called a fixed-length code.

BITS Pilani, Pilani Campus


Text Compression

Question
Is it possible to find a coding scheme in which fewer bits are used?
Answer – Variable-length codes
• Letters that occur more frequently should be encoded using short bit strings, and rarely occurring letters should be encoded using long bit strings

BITS Pilani, Pilani Campus


Example

                      a     b     c     d     e     f
Frequency (x1000)    45    13    12    16     9     5
Fixed length        000   001   010   011   100   101
Variable length       0   101   100   111  1101  1100

A file with 100,000 characters contains only the letters a – f.

• The fixed-length code requires 300,000 bits to code the file.

• The variable-length code requires 224,000 bits to code the file.

BITS Pilani, Pilani Campus


Prefix Code

• Important
In variable length code, some method must be
used to determine where the bits for each
character start and end.
For Example: If e : 0, a : 1, t : 01
Then 0101 could correspond to eat, tea or eaea.
A prefix code is a binary code such that no code
word is the prefix of another code-word

BITS Pilani, Pilani Campus


Encoding Tree

Key: A binary code can be represented by a binary tree.


• Each external node stores a character
• The code word of a character is given by the path from the root to the
external node storing the character (0 for a left child and 1 for a right child)

Code words:   a: 00    b: 010    c: 011    d: 10    e: 11

[Figure: the corresponding binary encoding tree; a, d and e are external nodes at depth 2, while b and c sit at depth 3 below the node reached by the path 01.]

BITS Pilani, Pilani Campus


The Basic Algorithm
Huffman coding is a form of statistical coding
Not all characters occur with the same frequency!
Yet all characters are allocated the same amount of space

– 1 char = 1 byte, be it 'e' or 'x'

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
The (Real) Basic Algorithm
1. Scan text to be compressed and tally occurrence of all
characters.
2. Sort or prioritize characters based on number of
occurrences in text.
3. Build Huffman code tree based on prioritized list.
4. Perform a traversal of tree to determine all code words.
5. Scan text again and create new file using the Huffman
codes.

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
The Huffman Coding algorithm-
History

• In 1951, David Huffman and his MIT information theory classmates were given the choice of a term paper or a final exam.
• Huffman hit upon the idea of using a frequency-sorted
binary tree and quickly proved this method the most
efficient.
• In doing so, the student outdid his professor, who had
worked with information theory inventor Claude
Shannon to develop a similar code.
• Huffman built the tree from the bottom up instead of
from the top down
BITS Pilani, Pilani Campus
Huffman Code

• Greedy algorithm that constructs an optimal prefix code.


- The algorithm builds the tree in a bottom-up manner.
- Let C be the set of n characters and f[c] be the frequency of c ∈ C.
- The algorithm begins with a set of |C| leaves and performs a sequence of |C| - 1 merging operations to create the final tree.
- A priority queue Q, keyed on the f values, is used to identify the two least frequent objects to merge together.
- The result of the merger of two objects is a new object whose frequency is the sum of the frequencies of the two objects that are merged. The new object is inserted back into Q.

BITS Pilani, Pilani Campus


Huffman’s Algorithm

Algorithm HuffmanEncoding(X)
    Input: string X of size n
    Output: optimal encoding trie for X
    C ← distinctCharacters(X)
    computeFrequencies(C, X)
    Q ← new empty heap
    for all c ∈ C
        T ← new single-node tree storing c
        Q.insert(getFrequency(c), T)
    while Q.size() > 1
        f1 ← Q.minKey()
        T1 ← Q.removeMin()
        f2 ← Q.minKey()
        T2 ← Q.removeMin()
        T ← join(T1, T2)
        Q.insert(f1 + f2, T)
    return Q.removeMin()

• It runs in time O(n + d log d), where n is the size of X and d is the number of distinct characters of X.
• A heap-based priority queue is used as an auxiliary structure.

BITS Pilani, Pilani Campus
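A compact Python sketch of the same greedy construction, using heapq as the priority queue; the tuple-based tree representation and the tie-breaking counter are implementation choices, not part of the pseudocode:

import heapq
from collections import Counter

def huffman_codes(text):
    freq = Counter(text)
    # Heap of (frequency, tie-breaker, tree); a tree is either a leaf character or a (left, right) pair.
    heap = [(f, i, c) for i, (c, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))   # join the two least frequent trees
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")                    # left child: 0
            walk(tree[1], prefix + "1")                    # right child: 1
        else:
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

print(huffman_codes("abracadabra"))   # e.g. 'a' gets the shortest code word, 'c' and 'd' the longest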


Example
X = abracadabra

Frequencies:   a: 5   b: 2   c: 1   d: 1   r: 2

[Figure: the trie is built bottom-up. First c and d (the two least frequent) are merged into a node of weight 2, then b and r into a node of weight 4, then those two nodes into a node of weight 6, and finally a (weight 5) and the weight-6 node into the root of weight 11.]
BITS Pilani, Pilani Campus
Huffman Tree Example

BITS Pilani, Pilani Campus


Encoding and Compression of Data
Fax Machines
ASCII
Variations on ASCII
– min number of bits needed
– cost of savings
– patterns
– modifications

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Building a Tree
Scan the original text
Consider the following short text:

Eerie eyes seen near lake.

Count up the occurrences of all characters in the text

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Building a Tree
Scan the original text

Eerie eyes seen near lake.


What characters are present?

E, e, r, i, space, y, s, n, a, l, k, .

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Building a Tree
Scan the original text

Eerie eyes seen near lake.


What is the frequency of each character in the text?

Char Freq. Char Freq. Char Freq.


E 1 y 1 k 1
e 8 s 2 . 1
r 2 n 2
i 1 a 2
space 4 l 1

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Building a Tree
Prioritize characters
Create binary tree nodes with character and frequency of
each character
Place nodes in a priority queue
– The lower the occurrence, the higher the priority in the queue

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree
Prioritize characters
Uses binary tree nodes
public class HuffNode
{
public char myChar;
public int myFrequency;
public HuffNode myLeft, myRight;
}
priorityQueue myQueue;

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree
The queue after inserting all nodes

E i y l k . r s n a sp e
1 1 1 1 1 1 2 2 2 2 4 8

Null Pointers are not shown

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree
While priority queue contains two or more nodes
– Create new node
– Dequeue node and make it left subtree
– Dequeue next node and make it right subtree
– Frequency of new node equals sum of frequency of left and right children
– Enqueue new node back into queue

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

E i y l k . r s n a sp e
1 1 1 1 1 1 2 2 2 2 4 8

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

y l k . r s n a sp e
1 1 1 1 2 2 2 2 4 8

E i
1 1

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

y l k . r s n a sp e
2
1 1 1 1 2 2 2 2 4 8
E i
1 1

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

k . r s n a sp e
2
1 1 2 2 2 2 4 8
E i
1 1

y l
1 1

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

2
k . r s n a 2 sp e
1 1 2 2 2 2 4 8
y l
1 1
E i
1 1

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

r s n a 2 2 sp e
2 2 2 2 4 8
y l
E i 1 1
1 1

k .
1 1

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

r s n a 2 2 sp e
2
2 2 2 2 4 8
E i y l k .
1 1 1 1 1 1

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

n a 2 sp e
2 2
2 2 4 8
E i y l k .
1 1 1 1 1 1

r s
2 2

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

n a 2 sp e
2 2 4
2 2 4 8

E i y l k . r s
1 1 1 1 1 1 2 2

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

2 4 e
2 2 sp
8
4
y l k . r s
E i 1 1 1 1 2 2
1 1

n a
2 2

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

2 4 4 e
2 2 sp
8
4
y l k . r s n a
E i 1 1 1 1 2 2 2 2
1 1

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

4 4 e
2 sp
8
4
k . r s n a
1 1 2 2 2 2

2 2

E i y l
1 1 1 1

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

4 4 4
2 sp e
4 2 2 8
k . r s n a
1 1 2 2 2 2
E i y l
1 1 1 1

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

4 4 4
e
2 2 8
r s n a
2 2 2 2
E i y l
1 1 1 1

2 sp
4
k .
1 1

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

4 4 4 6 e
2 sp 8
r s n a 2 2
4
2 2 2 2 k .
E i y l 1 1
1 1 1 1

What is happening to the characters with a low number of occurrences?

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

4 6 e
2 2 2 8
sp
4
E i y l k .
1 1 1 1 1 1
8

4 4

r s n a
2 2 2 2

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

4 6 e 8
2 2 2 8
sp
4 4 4
E i y l k .
1 1 1 1 1 1
r s n a
2 2 2 2

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

8
e
8
4 4
10
r s n a
2 2 2 2 4
6
2 2
2 sp
4
E i y l k .
1 1 1 1 1 1

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

8 10
e
8 4
4 4
6
2 2
r s n a 2 sp
2 2 2 2 4
E i y l k .
1 1 1 1 1 1

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

10
16
4
6
2 2 e 8
2 sp 8
4
E i y l k . 4 4
1 1 1 1 1 1

r s n a
2 2 2 2

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

10 16

4
6
e 8
2 2 8
2 sp
4 4 4
E i y l k .
1 1 1 1 1 1
r s n a
2 2 2 2

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree

26

16
10

4 e 8
6 8
2 2
2 sp 4 4
4
E i y l k .
1 1 1 1 1 1 r s n a
2 2 2 2

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree
After enqueueing this node
there is only one node left in
priority queue.
26

16
10

4 e 8
6 8
2 2 2 sp 4 4
4
E i y l k .
1 1 1 1 1 1 r s n a
2 2 2 2

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Building a Tree
Dequeue the single node left in the
queue.
26

This tree contains the new code 10


16
words for each character.
4 e 8
6 8
Frequency of root node should 2 2 2 sp 4 4
equal number of characters in text. 4
E i y l k .
1 1 1 1 1 1 r s n a
2 2 2 2

Eerie eyes seen near lake. 26 characters


CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Encoding the File
Traverse Tree for Codes
Perform a traversal of the tree to obtain new code words
Going left is a 0 going right is a 1
code word is only completed when a leaf node is reached
26

16
10

4 e 8
6 8
2 2 2 sp 4 4
4
E i y l k .
1 1 1 1 1 1 r s n a
2 2 2 2

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Encoding the File
Traverse Tree for Codes
Char     Code
E        0000
i        0001
y        0010
l        0011
k        0100
.        0101
space    011
e        10
r        1100
s        1101
n        1110
a        1111

[Figure: the final Huffman tree of total weight 26; e (8) and space (4) sit near the root, while the characters of frequency 1 (E, i, y, l, k, .) are at depth 4.]

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Encoding the File
Rescan the text and encode the file using the new code words:

Eerie eyes seen near lake.

Char     Code
E        0000
i        0001
y        0010
l        0011
k        0100
.        0101
space    011
e        10
r        1100
s        1101
n        1110
a        1111

Encoded bit stream:
0000101100000110011100010101101101001111101011111100011001111110100100101

• Why is there no need for a separator character?
CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Decoding the File
How does receiver know what the codes are?
Tree constructed for each text file.
– Considers frequency for each file
– Big hit on compression, especially for smaller files
Tree predetermined
– based on statistical analysis of text files or file types
Data transmission is bit based versus byte based

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Decoding the File
Once the receiver has the tree, it scans the incoming bit stream:
0  →  go left
1  →  go right

10100011011110111101111110000110101

[Figure: the same Huffman tree of weight 26 is used to decode the stream.]

CS 102
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
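Decoding can be sketched just as compactly; because the code is a prefix code, a dictionary of code words is enough (the tree walk on the slide is equivalent). The code table format assumed here is the dict returned by the huffman_codes sketch earlier:

def huffman_decode(bits, codes):
    # Invert the code table; the prefix property guarantees that the first match is the right one.
    inverse = {code: ch for ch, code in codes.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:        # a complete code word: we reached a leaf
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

codes = {'a': '0', 'b': '10', 'c': '11'}      # a small illustrative prefix code
print(huffman_decode("01011", codes))         # abc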
Algorithm Design
Gnanavel R

BITS Pilani Asst. Prof


Pilani|Dubai|Goa|Hyderabad

1
BITS Pilani
Pilani|Dubai|Goa|Hyderabad

CS6: Greedy Method

2
Optimization Problem

In an optimization problem we are given a set of constraints


and an optimization function.
Solutions that satisfy the constraints are called feasible
solutions.
A feasible solution for which the optimization function has
the best possible value is called an optimal solution

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


The Greedy Method Technique
The greedy method is a general algorithm design
paradigm, built on the following elements:
– Configurations(feasible): different choices,
collections, or values to find
– objective function(Optimal) a score assigned to
configurations, which we want to either maximize or
minimize
It works best when applied to problems with the
greedy-choice property:
– a globally-optimal solution can always be found by a
series of local improvements from a starting
configuration.

4
The Greedy Method
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Greedy Method

Greedy algorithms seek to optimize a function by making


choices (greedy criterion) which are the best locally but
do not look at the global problem.
The result is a good solution but not necessarily the best
one.
The greedy algorithm does not always guarantee the
optimal solution however it generally produces solutions
that are very close in value to the optimal.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Minimum Spanning Tree
Spanning subgraph
– Subgraph of a graph G containing all the vertices of G
Spanning tree
– Spanning subgraph that is itself a (free) tree
Minimum spanning tree (MST)
– Spanning tree of a weighted graph with minimum total edge weight
Applications
– Communications networks
– Transportation networks

[Figure: a weighted graph on the airports ORD, PIT, DEN, DCA, STL, DFW, ATL with edge weights 1–10 and its MST highlighted.]

11
Minimum Spanning Trees
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Kruskal’s Algorithm
A priority queue stores
the edges outside the
cloud
◼ Key: weight
◼ Element: edge
At the end of the
algorithm
◼ We are left with one
cloud that encompasses
the MST
◼ A tree T which is our
MST

12
Minimum Spanning Trees
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Partition-Based Implementation
A partition-based version of Kruskal’s Algorithm performs
cloud merges as unions and tests as finds.
Algorithm Kruskal(G):
Input: A weighted graph G.
Output: An MST T for G.
Let C be a cluster of the vertices of G, where each vertex forms a separate cluster.
Let Q be a priority queue storing the edges of G, sorted by their weights
Let T be an initially-empty tree
while Q is not empty do
    (u,v) ← Q.removeMinElement()
    if C.find(u) != C.find(v) then
        Add (u,v) to T
        C.union(u,v)    // merge the two clusters
return T

Running time: O((n + m) log n)
17
Minimum Spanning Trees
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
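A Python sketch of the partition-based version, with a minimal union-find standing in for the cluster structure C; the edge-list input format is an assumption:

def kruskal(vertices, edges):
    # edges: list of (weight, u, v); returns the list of MST edges.
    parent = {v: v for v in vertices}

    def find(v):                          # find the cluster representative (with path compression)
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst = []
    for weight, u, v in sorted(edges):    # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                      # u and v are in different clusters: no cycle is created
            mst.append((u, v, weight))
            parent[ru] = rv               # union of the two clusters
    return mst

print(kruskal("ABCD", [(1, "A", "B"), (4, "A", "C"), (3, "B", "C"), (2, "C", "D")]))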
Kruskal
Example

[Figure: Kruskal's algorithm run step by step on a flight network over the airports BOS, PVD, ORD, JFK, SFO, BWI, DFW, LAX, MIA with mileage weights (144, 184, 187, 337, 621, 740, 802, 849, 867, 946, 1090, 1121, 1235, 1258, 1391, 1464, 1846, 2342, 2704). Edges are considered in increasing weight order and added whenever they connect two different clusters, until the MST is complete.]
Minimum Spanning Trees
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Prim-Jarnik’s Algorithm
Similar to Dijkstra’s algorithm (for a connected graph)
We pick an arbitrary vertex s and we grow the MST as a
cloud of vertices, starting from s
We store with each vertex v a label d(v) = the smallest
weight of an edge connecting v to a vertex in the cloud
At each step:
◼ We add to the cloud the vertex
u outside the cloud with the
smallest distance label
◼ We update the labels of the
vertices adjacent to u

32
Minimum Spanning Trees
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Prim-Jarnik’s Algorithm (cont.)
A priority queue stores the vertices outside the cloud
– Key: distance
– Element: vertex
Locator-based methods
– insert(k,e) returns a locator
– replaceKey(l,k) changes the key of an item
We store three labels with each vertex:
– Distance
– Parent edge in MST
– Locator in priority queue

Algorithm PrimJarnikMST(G)
    Q ← new heap-based priority queue
    s ← a vertex of G
    for all v ∈ G.vertices()
        if v = s
            setDistance(v, 0)
        else
            setDistance(v, ∞)
        setParent(v, ∅)
        l ← Q.insert(getDistance(v), v)
        setLocator(v, l)
    while ¬Q.isEmpty()
        u ← Q.removeMin()
        for all e ∈ G.incidentEdges(u)
            z ← G.opposite(u, e)
            r ← weight(e)
            if r < getDistance(z)
                setDistance(z, r)
                setParent(z, e)
                Q.replaceKey(getLocator(z), r)
33
Minimum Spanning Trees
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
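A lazy-deletion Python sketch of the same idea: instead of locators and replaceKey, improved distances are simply pushed onto a heap and stale entries are skipped. The adjacency-dict graph format is an assumption:

import heapq

def prim_jarnik(graph, s):
    # graph: {vertex: [(neighbour, weight), ...]}; returns the MST edges grown from s.
    in_cloud = {s}
    mst = []
    heap = [(w, s, v) for v, w in graph[s]]
    heapq.heapify(heap)
    while heap and len(in_cloud) < len(graph):
        w, u, v = heapq.heappop(heap)
        if v in in_cloud:
            continue                      # stale entry: v was added with a smaller label already
        in_cloud.add(v)
        mst.append((u, v, w))             # (u, v) is the parent edge of v in the MST
        for z, r in graph[v]:
            if z not in in_cloud:
                heapq.heappush(heap, (r, v, z))
    return mst

g = {"A": [("B", 2), ("C", 8)], "B": [("A", 2), ("C", 3)], "C": [("A", 8), ("B", 3)]}
print(prim_jarnik(g, "A"))                # [('A', 'B', 2), ('B', 'C', 3)]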
Example

[Figure: Prim-Jarnik's algorithm run step by step on a small weighted graph with vertices A–F. Starting from A with d(A) = 0, the vertex outside the cloud with the smallest distance label is repeatedly added, and the distance labels of its neighbours are updated, until every vertex is in the MST.]

39
Minimum Spanning Trees
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Shortest Path Problem
Given a weighted graph and two vertices u and v, we want to find a
path of minimum total weight between u and v.
– Length of a path is the sum of the weights of its edges.
Applications
– Internet packet routing
– Flight reservations
– Driving directions

[Figure: a flight network over PVD, ORD, SFO, LGA, HNL, LAX, DFW, MIA with mileage weights; the shortest path between two airports minimizes total mileage.]
40
Shortest Paths
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Shortest Path Properties
Property 1:
A subpath of a shortest path is itself a shortest path
Property 2:
There is a tree of shortest paths from a start vertex to all the other vertices
Example:
Tree of shortest paths from a vertex

[Figure: the tree of shortest paths from one airport of the flight network to all the others.]
41
Shortest Paths
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Dijkstra’s Algorithm
The distance of a vertex v from a vertex s is the length of a shortest path between s and v.
Dijkstra's algorithm computes the distances of all the vertices from a given start vertex s.
Assumptions:
– the graph is connected
– the edges are undirected
– the edge weights are nonnegative
We grow a "cloud" of vertices, beginning with s and eventually covering all the vertices.
We store with each vertex v a label d(v) representing the distance of v from s in the subgraph consisting of the cloud and its adjacent vertices.
At each step
– We add to the cloud the vertex u outside the cloud with the smallest distance label, d(u)
– We update the labels of the vertices adjacent to u

42
Shortest Paths
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Edge Relaxation
Consider an edge e = (u,z) such that
– u is the vertex most recently added to the cloud
– z is not in the cloud

The relaxation of edge e updates distance d(z) as follows:
    d(z) ← min{ d(z), d(u) + weight(e) }

[Figure: before relaxation d(u) = 50 and d(z) = 75; after relaxing the edge e of weight 10, d(z) becomes 60.]

43
Shortest Paths
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Dijkstra’s Algorithm
Dijkstra(G)
    for each v ∈ V
        d[v] = ∞;
    d[s] = 0; S = ∅; Q = V;
    while (Q ≠ ∅)
        u = ExtractMin(Q);
        S = S ∪ {u};
        for each v ∈ u->Adj[]
            if (d[v] > d[u] + w(u,v))
                d[v] = d[u] + w(u,v);      // relaxation step
                                           // note: this is really a call to Q->DecreaseKey()
David Luebke
44
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
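The same algorithm as a Python sketch, again using lazy deletion in place of DecreaseKey; the adjacency-dict format is an assumption:

import heapq

def dijkstra(graph, s):
    # graph: {vertex: [(neighbour, weight), ...]}; returns d[v] for every vertex reachable from s.
    d = {s: 0}
    heap = [(0, s)]
    done = set()
    while heap:
        du, u = heapq.heappop(heap)
        if u in done:
            continue                          # stale heap entry
        done.add(u)
        for v, w in graph[u]:
            if d.get(v, float("inf")) > du + w:
                d[v] = du + w                 # relaxation step
                heapq.heappush(heap, (d[v], v))
    return d

g = {"A": [("B", 8), ("C", 2)], "B": [("A", 8), ("C", 5)], "C": [("A", 2), ("B", 5)]}
print(dijkstra(g, "A"))                       # {'A': 0, 'B': 7, 'C': 2}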


Example

[Figure: Dijkstra's algorithm run step by step on a small weighted graph with vertices A–F, starting from A with d(A) = 0. At each step the vertex with the smallest tentative distance is moved into the cloud and the labels of its neighbours are relaxed.]
51
Shortest Paths
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Why It Doesn't Work for Negative-Weight Edges
Dijkstra's algorithm is based on the greedy method. It adds vertices by increasing distance.
– If a node with a negative incident edge were to be added late to the cloud, it could mess up distances for vertices already in the cloud.

[Figure: a graph in which C's true distance from the start vertex is 1, but C is already in the cloud with d(C) = 5, because the path through the negative edge is only discovered later.]

52

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Dijkstra’s Algorithm
If no negative edge weights
Similar to breadth-first search
– Grow a tree gradually, advancing from vertices taken from a queue

Also similar to Prim’s algorithm for MST


– Use a priority queue keyed on d[v]

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


The Fractional Knapsack Problem
Given: A set S of n items, with each item i having
– bi - a positive benefit
– wi - a positive weight
Goal: Choose items with maximum total benefit but with
weight at most W.
If we are allowed to take fractional amounts, then this is the
fractional knapsack problem.
– In this case, we let xi denote the amount we take of item i

– Objective: maximize   Σ_{i ∈ S} b_i (x_i / w_i)

– Constraint:           Σ_{i ∈ S} x_i <= W
54
The Greedy Method
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Example
Given: A set S of n items, with each item i having
– bi - a positive benefit
– wi - a positive weight
Goal: Choose items with maximum total benefit but with
weight at most W.
Items:              1      2      3      4      5
Weight:          4 ml   8 ml   2 ml   6 ml   1 ml
Benefit:          $12    $32    $40    $30    $50
Value ($ per ml):   3      4     20      5     50

"Knapsack" capacity: 10 ml

Solution:
• 1 ml of item 5
• 2 ml of item 3
• 6 ml of item 4
• 1 ml of item 2
55
The Greedy Method
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
The Fractional Knapsack Algorithm

Algorithm fractionalKnapsack(S, W)
    Input: set S of items with benefit bi and weight wi; max. weight W
    Output: amount xi of each item i to maximize benefit with weight at most W
    for each item i in S
        xi ← 0
        vi ← bi / wi          {value per unit of weight}
    w ← 0                     {total weight}
    while w < W
        remove item i with highest vi
        xi ← min{wi, W − w}
        w ← w + min{wi, W − w}

56
The Greedy Method
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
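A Python sketch of the same greedy rule; it sorts once by value per unit of weight instead of repeatedly removing the item with the highest vi, and the (benefit, weight) input format is an assumption:

def fractional_knapsack(items, W):
    # items: list of (benefit, weight); returns total benefit and the amount taken of each item.
    order = sorted(range(len(items)), key=lambda i: items[i][0] / items[i][1], reverse=True)
    x = [0.0] * len(items)
    total, w = 0.0, 0.0
    for i in order:                       # take items by decreasing value (benefit per unit weight)
        if w >= W:
            break
        bi, wi = items[i]
        x[i] = min(wi, W - w)             # take all of item i, or only the fraction that still fits
        total += bi * (x[i] / wi)
        w += x[i]
    return total, x

# The 5-item, 10 ml example from the slides: benefits in $, weights in ml.
print(fractional_knapsack([(12, 4), (32, 8), (40, 2), (30, 6), (50, 1)], 10))
# (124.0, [0.0, 1.0, 2.0, 6.0, 1.0])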
Example
Objects : 1 2 3 4 5 6 7
Profit: 5 10 15 7 8 9 4
Weight: 1 3 5 4 1 3 2

Total Weight or Capacity W=15


Number of objects or item n=7

57
The Greedy Method
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Example (trace)

Objects   Profit   Weight   Remaining Weight
[Figure: the table is filled in greedily, always taking the remaining object with the highest profit-to-weight ratio, until the capacity W = 15 is used up.]
60
The Greedy Method
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Example

Item Weight Value


1 5 30
2 10 40
3 15 45
4 22 77
5 25 90

61

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Objects   Profit   Weight   Remaining Weight
[Figure: the greedy trace table for these five items, filled in step by step.]

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Thank You!!

65

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Course Name :
Algorithm Design
BITS Pilani Gnanavel R
Pilani Campus
What is a Graph?
• A graph G = (V,E) is composed of:
V: set of vertices
E: set of edges connecting the vertices in V
• An edge e = (u,v) is a pair of vertices
• Example:
    V = {a, b, c, d, e}
    E = {(a,b), (a,c), (a,d), (b,e), (c,d), (c,e), (d,e)}

[Figure: the graph drawn with vertices a, b, c, d, e and these seven edges.]
BITS Pilani, Pilani Campus
Graph-Example
• Example:
– A vertex represents an airport and stores the three-
letter airport code
– An edge represents a flight route between two
airports and stores the mileage of the route

[Figure: a flight network over the airports PVD, ORD, SFO, LGA, HNL, LAX, DFW, MIA; each edge stores the mileage of the route.]

BITS Pilani, Pilani Campus


A “Real-life” Example of a Graph
• V=set of 6 people: John, Mary, Joe, Helen,
Tom, and Paul, of ages 12, 15, 12, 15, 13,
and 13, respectively.
• E ={(x,y) | if x is younger than y}
Mary Helen

Joe
John

Tom Paul

BITS Pilani, Pilani Campus


Applications
• Electronic circuits
– Printed circuit board
– Integrated circuit
• Transportation networks
– Highway network
– Flight network
• Computer networks
– Local area network
– Internet
– Web
• Databases
– Entity-relationship diagram

[Figure: an example computer network with hosts cslab1a, cslab1b, math.brown.edu, cs.brown.edu, brown.edu, qwest.net, att.net, cox.net and users John, Paul, David.]

BITS Pilani, Pilani Campus


Edge Types

• Directed edge
– ordered pair of vertices (u,v)
– first vertex u is the origin
– second vertex v is the destination
– e.g., a flight (ORD → PVD, flight AA 1206)
• Undirected edge
– unordered pair of vertices (u,v)
– e.g., a flight route (ORD – PVD, 849 miles)
• Directed graph (Digraph)
– all the edges are directed
– e.g., flight network
• Undirected graph
– all the edges are undirected
– e.g., route network

BITS Pilani, Pilani Campus


Terminology

• If (v0, v1) is an edge in an undirected graph,


– v0 and v1 are adjacent
– The edge (v0, v1) is incident on vertices v0 and v1
• If <v0, v1> is an edge in a directed graph
– v0 is adjacent to v1, and v1 is adjacent from v0
– The edge <v0, v1> is incident on v0 and v1

7
BITS Pilani, Pilani Campus
Terminology:

• Degree of a Vertex
The degree of a vertex is the number of edges
incident to that vertex
• For directed graph,
• the in-degree of a vertex v is the number of edges
that have v as the head
• the out-degree of a vertex v is the number of edges
that have v as the tail
• if di is the degree of a vertex i in a graph G with n vertices and e edges, the number of edges is

    e = ( Σ_{i=0}^{n−1} di ) / 2

  Why? Since adjacent vertices each count the adjoining edge, it will be counted twice.

BITS Pilani, Pilani Campus


Examples

[Figure: three example graphs. In the undirected graphs G1 (complete graph on vertices 0–3, every vertex of degree 3) and G2 each vertex is labelled with its degree. In the directed graph G3 each vertex is labelled with its in-degree and out-degree: vertex 0: in 1, out 1; vertex 1: in 1, out 2; vertex 2: in 1, out 0.]
BITS Pilani, Pilani Campus
Terminology (cont.)

• Path
– sequence of alternating vertices
and edges
– begins with a vertex V
a b
– ends with a vertex P1
– each edge is preceded and d
U X Z
followed by its endpoints P2 h
• Simple path c e
– path such that all its vertices and W g
edges are distinct
• Examples f
– P1=(V,b,X,h,Z) is a simple path Y
– P2=(U,c,W,e,X,g,Y,f,W,d,V) is a path
that is not simple

BITS Pilani, Pilani Campus


Terminology (cont.)

• Cycle
– circular sequence of
alternating vertices and edges V
a b
– each edge is preceded and
followed by its endpoints d
U X Z
• Simple cycle C2 h
– cycle such that all its vertices c
e C1
and edges are distinct W g
• Examples
– C1=(V,b,X,g,Y,f,W,c,U,a,) is a f
simple cycle Y

– C2=(U,c,W,e,X,g,Y,f,W,d,V,a,) is
a cycle that is not simple

BITS Pilani, Pilani Campus


Properties

Notation: n = number of vertices, m = number of edges, deg(v) = degree of vertex v

Property 1:  Σ_v deg(v) = 2m
Proof: each edge is counted twice.

Property 2: In an undirected graph with no self-loops and no multiple edges,
    m <= n (n − 1) / 2
Proof: each vertex has degree at most (n − 1).

Example: n = 4, m = 6, deg(v) = 3 for every vertex v.

Graphs 12
BITS Pilani, Pilani Campus
Even More Terminology

•connected graph: any two vertices are connected by some path

connected not connected


• subgraph: subset of vertices and edges forming a graph
• connected component: maximal connected subgraph. E.g., the graph below
has 3 connected components.

BITS Pilani, Pilani Campus


Subgraph Examples
[Figure: the graph G1 on vertices 0–3 and four of its subgraphs (i)–(iv).]

[Figure: the directed graph G3 on vertices 0–2 and four of its subgraphs (i)–(iv).]

BITS Pilani, Pilani Campus


Trees
• Tree is a special case of a graph. Each node has
zero or more child nodes, which are below it in
the tree.
• A tree is a connected acyclic graph
• Already seen.
• Forest - collection of trees

BITS Pilani, Pilani Campus


Graph Representations

• Adjacency Matrix
• Adjacency Lists

BITS Pilani, Pilani Campus


Adjacency Matrix

• Let G=(V,E) be a graph with n vertices.


• The adjacency matrix of G is a two-dimensional
n by n array, say adj_mat
• If the edge (vi, vj) is in E(G), adj_mat[i][j]=1
• If there is no such edge in E(G), adj_mat[i][j]=0
• The adjacency matrix for an undirected graph is
symmetric; the adjacency matrix for a digraph
need not be symmetric

BITS Pilani, Pilani Campus


Examples for Adjacency Matrix
[Figure: adjacency matrices for the example graphs. G1 is the complete graph on vertices 0–3, so its matrix is the symmetric 4x4 matrix with 0s on the diagonal and 1s elsewhere; the 8-vertex graph G4 has a symmetric 8x8 matrix. As noted below, the adjacency matrix of an undirected graph is symmetric.]
BITS Pilani, Pilani Campus


Adjacency Matrix Example

Graph with 7 vertices, edges: 1–2, 1–5, 1–6, 2–3, 2–7, 3–4, 4–5, 4–7, 5–6, 5–7

        1  2  3  4  5  6  7
    1   0  1  0  0  1  1  0
    2   1  0  1  0  0  0  1
    3   0  1  0  1  0  0  0
    4   0  0  1  0  1  0  1
    5   1  0  0  1  0  1  1
    6   1  0  0  0  1  0  0
    7   0  1  0  1  1  0  0
Merits of Adjacency Matrix
• From the adjacency matrix, it is easy to determine whether two vertices are connected
• The degree of vertex i is deg(i) = Σ_{j=0}^{n−1} adj_mat[i][j]
• For a digraph, the row sum gives the out-degree and the column sum gives the in-degree:

    ind(vi) = Σ_{j=0}^{n−1} A[j][i]        outd(vi) = Σ_{j=0}^{n−1} A[i][j]

Cons: No matter how few edges the graph has, the matrix takes O(n^2) memory
BITS Pilani, Pilani Campus
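As a small illustration (not from the original slides), the following C sketch stores a
graph in an adjacency matrix and computes a vertex degree as a row sum, matching the
formula above. The size N, the helper names add_edge and degree, and the example graph
(the complete graph G1) are assumptions made for this sketch.

#include <stdio.h>

#define N 4   /* number of vertices (hypothetical small example) */

int adj_mat[N][N];                 /* 0/1 adjacency matrix */

void add_edge(int i, int j) {      /* undirected edge: set both entries */
    adj_mat[i][j] = 1;
    adj_mat[j][i] = 1;
}

int degree(int i) {                /* degree = row sum, as in the formula above */
    int d = 0;
    for (int j = 0; j < N; j++)
        d += adj_mat[i][j];
    return d;
}

int main(void) {
    /* build G1 (the complete graph on 4 vertices) */
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++)
            add_edge(i, j);
    printf("degree of vertex 0 = %d\n", degree(0));   /* prints 3 */
    return 0;
}

Because every pair of entries is stored explicitly, this representation uses O(n^2)
memory no matter how many edges the graph actually has.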
Adjacency Lists Representation

• A graph of n nodes is represented by a one-


dimensional array L of linked lists, where
– L[i] is the linked list containing all the nodes
adjacent from node i.
– The nodes in the list L[i] are in no particular order

BITS Pilani, Pilani Campus


G1 (undirected, 4 vertices):
  0: 1 → 2 → 3
  1: 0 → 2 → 3
  2: 0 → 1 → 3
  3: 0 → 1 → 2

G3 (directed, 3 vertices):
  0: 1
  1: 0 → 2
  2: (empty list)

An undirected graph with n vertices and e edges ==> n head nodes and 2e list nodes
BITS Pilani, Pilani Campus
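For comparison, here is a minimal adjacency-list sketch in C (again an illustration, not
taken from the slides); the array size N and the helper names are assumptions. Each
undirected edge contributes two list nodes, so n head pointers and 2e nodes are built in
total, as stated above.

#include <stdio.h>
#include <stdlib.h>

#define N 4   /* number of vertices (assumed, matching G1 above) */

typedef struct node {              /* one list node per adjacent vertex */
    int vertex;
    struct node *next;
} node;

node *L[N];                        /* L[i] = linked list of vertices adjacent to i */

void add_directed(int i, int j) {  /* insert j at the front of L[i]; order is irrelevant */
    node *p = malloc(sizeof(node));
    p->vertex = j;
    p->next = L[i];
    L[i] = p;
}

void add_edge(int i, int j) {      /* an undirected edge contributes two list nodes */
    add_directed(i, j);
    add_directed(j, i);
}

int main(void) {
    add_edge(0, 1); add_edge(0, 2); add_edge(0, 3);
    add_edge(1, 2); add_edge(1, 3); add_edge(2, 3);   /* G1 again */
    for (node *p = L[0]; p != NULL; p = p->next)
        printf("%d ", p->vertex);                     /* neighbours of vertex 0 */
    printf("\n");
    return 0;
}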
Adjacency List Example

Same 7-vertex graph as in the adjacency matrix example:
  1: 2 → 5 → 6
  2: 3 → 1 → 7
  3: 2 → 4
  4: 3 → 7 → 5
  5: 1 → 4 → 6 → 7
  6: 1 → 5
  7: 2 → 4 → 5
Pros and Cons of Adjacency Lists

• Pros
– Saves on space (memory): the representation
takes as many memory words as there are nodes
and edges.
• Cons
– It can take up to O(n) time to determine if a pair of
nodes (i,j) is an edge: one would have to search
the linked list L[i], which takes time proportional
to the length of L[i].

BITS Pilani, Pilani Campus
Adjacency Matrix: Pros and Cons
advantages
– fast to tell whether edge exists between any two vertices i and j (and
to get its weight)

disadvantages
– consumes a lot of memory on sparse graphs (ones with few edges)
– redundant information for undirected graphs

Main Methods of the Graph ADT
Vertices and edges Update methods
– are positions – insertVertex(o)
– store elements – insertEdge(v, w, o)
Accessor methods – insertDirectedEdge(v, w, o)
– aVertex() – removeVertex(v)
– incidentEdges(v) – removeEdge(e)
– endVertices(e) Generic methods
– isDirected(e) – numVertices()
– origin(e) – numEdges()
– destination(e) – vertices()
– opposite(v, e) – edges()
– areAdjacent(v, w)

Graph traversals

• Graph traversal means visiting every vertex and edge exactly once
in a well-defined order.
• During a traversal, it is important that you track which vertices have
been visited.

Breadth First Search (BFS)


There are many ways to traverse graphs. BFS is the most commonly
used approach.
BFS is a traversing algorithm where you should start traversing from a
selected node (source or starting node) and traverse the graph
layerwise thus exploring the neighbour nodes (nodes which are
directly connected to source node).
As the name BFS suggests, you are required to traverse the graph
breadthwise as follows:
• First move horizontally and visit all the nodes of the current layer
• Move to the next layer
Use of a queue

• It is very common to use a queue to keep


track of:
– nodes to be visited next, or
– nodes that we have already visited.
• Typically, use of a queue leads to a breadth-
first visit order.
• Breadth-first visit order is “cautious” in the
sense that it examines every path of length i
before going on to paths of length i+1.
BFS
Pseudocode

BFS (G, s) //Where G is the graph and s is the source node


let Q be queue.
Q.enqueue( s ) //Inserting s in queue until all its neighbour vertices are marked.

mark s as visited.
while ( Q is not empty)
//Removing that vertex from queue,whose neighbour will be visited now
v = Q.dequeue( )

//processing all the neighbours of v


for all neighbours w of v in Graph G
if w is not visited
Q.enqueue( w ) //Stores w in Q to further visit its neighbour
mark w as visited.
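A minimal C realization of this BFS pseudocode is sketched below. It is an illustration
only: it assumes an adjacency-matrix representation (so scanning the neighbours of v
costs O(n), giving O(n^2) overall rather than the O(n + m) bound obtained with adjacency
lists), and the size N, the array-based queue and the example edges are assumptions.

#include <stdio.h>

#define N 6                         /* number of vertices (assumed example size) */

int adj[N][N];                      /* adjacency matrix of the graph */
int visited[N];

void bfs(int s) {
    int queue[N], front = 0, rear = 0;
    queue[rear++] = s;              /* enqueue the source */
    visited[s] = 1;                 /* mark s as visited */
    while (front < rear) {          /* while the queue is not empty */
        int v = queue[front++];     /* dequeue the next vertex */
        printf("%d ", v);
        for (int w = 0; w < N; w++) /* process all neighbours of v */
            if (adj[v][w] && !visited[w]) {
                visited[w] = 1;     /* mark before enqueueing, as in the pseudocode */
                queue[rear++] = w;
            }
    }
}

int main(void) {
    /* small undirected example: 0-1, 0-2, 1-3, 2-4, 3-5 */
    int edges[][2] = {{0,1},{0,2},{1,3},{2,4},{3,5}};
    for (int k = 0; k < 5; k++) {
        adj[edges[k][0]][edges[k][1]] = 1;
        adj[edges[k][1]][edges[k][0]] = 1;
    }
    bfs(0);                          /* prints vertices layer by layer: 0 1 2 3 4 5 */
    printf("\n");
    return 0;
}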

Breadth-First Search

Breadth-first search (BFS) is a general technique for traversing a graph.
A BFS traversal of a graph G
 – Visits all the vertices and edges of G
 – Determines whether G is connected
 – Computes the connected components of G
 – Computes a spanning forest of G
BFS on a graph with n vertices and m edges takes O(n + m) time.
BFS can be further extended to solve other graph problems
 – Find and report a path with the minimum number of edges between two given vertices
 – Find a simple cycle, if there is one

BFS Algorithm

The algorithm uses a mechanism for setting and getting “labels” of vertices and edges.

Algorithm BFS(G)
  Input graph G
  Output labeling of the edges and partition of the vertices of G
  for all u ∈ G.vertices()
    setLabel(u, UNEXPLORED)
  for all e ∈ G.edges()
    setLabel(e, UNEXPLORED)
  for all v ∈ G.vertices()
    if getLabel(v) = UNEXPLORED
      BFS(G, v)

Algorithm BFS(G, s)
  L0 ← new empty sequence
  L0.insertLast(s)
  setLabel(s, VISITED)
  i ← 0
  while ¬Li.isEmpty()
    Li+1 ← new empty sequence
    for all v ∈ Li.elements()
      for all e ∈ G.incidentEdges(v)
        if getLabel(e) = UNEXPLORED
          w ← opposite(v, e)
          if getLabel(w) = UNEXPLORED
            setLabel(e, DISCOVERY)
            setLabel(w, VISITED)
            Li+1.insertLast(w)
          else
            setLabel(e, CROSS)
    i ← i + 1
Example

[Figure: BFS traversal of a graph with vertices A–F. Legend: unexplored vertex, visited
vertex, unexplored edge, discovery edge, cross edge. Starting at A, the levels are
L0 = {A}, L1 = {B, C, D}, L2 = {E, F}.]
Example (cont.)

[Figure: the traversal continues; E and F are discovered at level L2, and the remaining
edges between already-visited vertices are labeled as cross edges.]
Example (cont.)

[Figure: the traversal completes; the discovery edges form the BFS tree with levels
L0 = {A}, L1 = {B, C, D}, L2 = {E, F}.]
Properties

Notation: Gs = connected component of s

Property 1
  BFS(G, s) visits all the vertices and edges of Gs
Property 2
  The discovery edges labeled by BFS(G, s) form a spanning tree Ts of Gs

[Figure: the example graph and its BFS tree with levels L0 = {A}, L1 = {B, C, D},
L2 = {E, F}]
Analysis
Setting/getting a vertex/edge label takes O(1) time
Each vertex is labeled twice
– once as UNEXPLORED
– once as VISITED
Each edge is labeled twice
– once as UNEXPLORED
– once as DISCOVERY or CROSS
Each vertex is inserted once into a sequence Li
Method incidentEdges is called once for each vertex
BFS runs in O(n + m) time provided the graph is
represented by the adjacency list structure
– Recall that Σ_v deg(v) = 2m

Applications
We can specialize the BFS traversal of a graph G
to solve the following problems in O(n + m) time
– Compute the connected components of G
– Compute a spanning forest of G
– Find a simple cycle in G, or report that G is a forest
– Given two vertices of G, find a path in G between
them with the minimum number of edges, or report
that no such path exists

Depth First Search (DFS)

The DFS algorithm is a recursive algorithm that uses the idea of


backtracking.

Here, the word backtrack means that when you are moving
forward and there are no more nodes along the current path,
you move backwards on the same path to find nodes to
traverse.
All the nodes will be visited on the current path till all the
unvisited nodes have been traversed after which the next path
will be selected.

This recursive nature of DFS can be implemented using stacks.

Use of a stack

• It is very common to use a stack to keep


track of:
– nodes to be visited next, or
– nodes that we have already visited.
• Typically, use of a stack leads to a depth-
first visit order.
• Depth-first visit order is “aggressive” in the
sense that it examines complete paths.
Depth-First Search

Depth-first search (DFS) is a general technique for traversing a graph.
A DFS traversal of a graph G
 – Visits all the vertices and edges of G
 – Determines whether G is connected
 – Computes the connected components of G
 – Computes a spanning forest of G
DFS on a graph with n vertices and m edges takes O(n + m) time.
DFS can be further extended to solve other graph problems
 – Find and report a path between two given vertices
 – Find a cycle in the graph
DFS Algorithm

The algorithm uses a mechanism for setting and getting “labels” of vertices and edges.

Algorithm DFS(G)
  Input graph G
  Output labeling of the edges of G as discovery edges and back edges
  for all u ∈ G.vertices()
    setLabel(u, UNEXPLORED)
  for all e ∈ G.edges()
    setLabel(e, UNEXPLORED)
  for all v ∈ G.vertices()
    if getLabel(v) = UNEXPLORED
      DFS(G, v)

Algorithm DFS(G, v)
  Input graph G and a start vertex v of G
  Output labeling of the edges of G in the connected component of v
         as discovery edges and back edges
  setLabel(v, VISITED)
  for all e ∈ G.incidentEdges(v)
    if getLabel(e) = UNEXPLORED
      w ← G.opposite(v, e)
      if getLabel(w) = UNEXPLORED
        setLabel(e, DISCOVERY)
        DFS(G, w)
      else
        setLabel(e, BACK)
Example

[Figure: DFS traversal of a graph with vertices A, B, C, D, E. Legend: unexplored vertex,
visited vertex, unexplored edge, discovery edge, back edge. The traversal starts at A and
follows discovery edges as deep as possible.]
Example (cont.)

[Figure: the traversal continues; edges that lead to already-visited vertices are labeled
as back edges, and the discovery edges form the DFS tree rooted at A.]
Properties of DFS
Property 1
DFS(G, v) visits all the vertices and edges in the
connected component of v
Property 2
The discovery edges labeled by DFS(G, v) form a
spanning tree of the connected component of v
[Figure: the DFS tree (discovery edges) of the connected component of A]
Analysis of DFS
Setting/getting a vertex/edge label takes O(1) time
Each vertex is labeled twice
– once as UNEXPLORED
– once as VISITED
Each edge is labeled twice
– once as UNEXPLORED
– once as DISCOVERY or BACK
Method incidentEdges is called once for each vertex
DFS runs in O(n + m) time provided the graph is
represented by the adjacency list structure
– Recall that Σ_v deg(v) = 2m

Graph Traversals

•Both take time: O(V+E)

DFS with Timestamp
DFS(G)
1. for each vertex u ∈ G.V
2. u.color = WHITE
3. u.pi = NIL
4. time = 0
5. for each vertex u ∈ G.V
6. if u.color == WHITE
7. DFS-VISIT(G,u)

DFS-VISIT(G,u)
1. time = time + 1
2. u.d = time
3. u.color = GRAY
4. for each v ∈ G.Adj[u]
5. if v.color == WHITE
6. v.pi = u
7. DFS-VISIT(G,v)
8. u.color = BLACK
9. time = time + 1
10. u.f = time
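The sketch below is one possible C translation of the timestamped DFS above, under the
assumption of an adjacency-matrix representation; the size N, the array names and the
toy graph in main() are assumptions made for illustration.

#include <stdio.h>

#define N 6                                   /* number of vertices (assumed) */
enum { WHITE, GRAY, BLACK };

int adj[N][N];                                /* adjacency matrix */
int color[N], d[N], f[N], pi[N];              /* colour, discovery/finish time, parent */
int timestamp = 0;

void dfs_visit(int u)
{
    d[u] = ++timestamp;                       /* u has just been discovered */
    color[u] = GRAY;
    for (int v = 0; v < N; v++)               /* explore each neighbour of u */
        if (adj[u][v] && color[v] == WHITE) {
            pi[v] = u;
            dfs_visit(v);
        }
    color[u] = BLACK;                         /* u is finished */
    f[u] = ++timestamp;
}

void dfs(void)
{
    for (int u = 0; u < N; u++) { color[u] = WHITE; pi[u] = -1; }
    timestamp = 0;
    for (int u = 0; u < N; u++)
        if (color[u] == WHITE)
            dfs_visit(u);
}

int main(void)
{
    adj[0][1] = adj[1][0] = 1;                /* toy undirected graph: 0-1, 1-2 */
    adj[1][2] = adj[2][1] = 1;
    dfs();
    for (int u = 0; u < N; u++)
        printf("vertex %d: d = %d, f = %d\n", u, d[u], f[u]);
    return 0;
}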

DFS vs. BFS

Applications                                              DFS    BFS
Spanning forest, connected components, paths, cycles       ✓      ✓
Shortest paths                                                    ✓
Biconnected components                                     ✓

[Figure: the DFS tree and the BFS tree (levels L0, L1, L2) of the example graph with
vertices A–F]
DFS vs. BFS (cont.)

Back edge (v,w) (DFS)
 – w is an ancestor of v in the tree of discovery edges
Cross edge (v,w) (BFS)
 – w is in the same level as v or in the next level in the tree of discovery edges

[Figure: the DFS tree and the BFS tree of the example graph]
Strong connectivity in O(n(n+m))

• By repeatedly traversing digraph G with a DFS, starting


in turn at each vertex, we can easily test whether G is
strongly connected.
• Therefore, G is strongly connected if each such DFS visits
all the vertices of G.
• DFS From each vertex O(n+m)
• From all n-vertices O(n(n+m))

Strong Connectivity Algorithm

Pick a vertex v in G.
Perform a DFS from v in G.
 – If there’s a w not visited, print “no”.
Let G’ be G with all its edges reversed.
Perform a DFS from v in G’.
 – If there’s a w not visited, print “no”.
 – Else, print “yes”.

Running time: O(n+m).

[Figure: a digraph G on vertices a–g and its reversal G’]
BFS

[Figure: BFS worked example]
DFS

[Figure: DFS worked example]
TOPOLOGICAL SORTING

Algorithm Design
Gnanavel R

BITS Pilani Asst. Prof


Pilani|Dubai|Goa|Hyderabad

1
BITS Pilani
Pilani|Dubai|Goa|Hyderabad

CS3: Divide and Conquer

2
Divide-and-Conquer
Divide-and conquer is a general algorithm design
paradigm:
– Divide: divide the input data S in two or more disjoint subsets S1,
S2, …
– Recur: solve the subproblems recursively
– Conquer: combine the solutions for S1, S2, …, into a solution for S

Analysis can be done using recurrence equations

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Binary Search

Binary search compares the target value to the middle element of the array. If
they are not equal, the half in which the target cannot lie is eliminated and the
search continues on the remaining half

Class Search algorithm


Data structure Array
Worst-case performance O(log n)
Best-case performance O(1)
Average performance O(log n)

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Binary Search(Iterative)
Algorithm BINARY_SEARCH(A, lower_bound, upper_bound, VAL)
Step 1: [INITIALIZE] SET BEG = lower_bound END = upper_bound, POS = - 1
Step 2: Repeat Steps 3 and 4 while BEG <=END
Step 3: SET MID = (BEG + END)/2
Step 4: IF A[MID] = VAL
SET POS = MID
PRINT POS
Go to Step 6
ELSE IF A[MID] > VAL
SET END = MID - 1
ELSE
SET BEG = MID + 1
[END OF IF]
[END OF LOOP]
Step 5: IF POS = -1
PRINT "VALUE IS NOT PRESENT IN THE ARRAY"
[END OF IF]
Step 6: EXIT
5

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
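A runnable C version of the iterative algorithm above might look as follows (a sketch,
not the official course code); the function name binary_search and the sample array are
assumptions.

#include <stdio.h>

/* Returns the index of val in the sorted array A[0..n-1], or -1 if absent. */
int binary_search(const int A[], int n, int val) {
    int beg = 0, end = n - 1;
    while (beg <= end) {
        int mid = beg + (end - beg) / 2;   /* avoids overflow of (beg + end) */
        if (A[mid] == val)
            return mid;                    /* POS found */
        else if (A[mid] > val)
            end = mid - 1;                 /* search the left half */
        else
            beg = mid + 1;                 /* search the right half */
    }
    return -1;                             /* value is not present in the array */
}

int main(void) {
    int A[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
    printf("%d\n", binary_search(A, 10, 23));   /* prints 5 */
    printf("%d\n", binary_search(A, 10, 7));    /* prints -1 */
    return 0;
}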


Binary Search(Recursive)
int BinSearch(int x, int A[], int low, int high) {
    if (low > high)
        /*** Solve the base case ***/
        return -1;                                     // Not found
    else {
        /*** Solve a non-trivial binary search ***/
        int middle = (low + high) / 2;
        if (x == A[middle])
            return middle;                             // Found x, return location in array
        else if (x < A[middle])
            return BinSearch(x, A, low, middle - 1);   // Solve smaller problem
        else   // x > A[middle]
            return BinSearch(x, A, middle + 1, high);  // Solve smaller problem
    }
}

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Merge-Sort Review
Merge-sort on an input sequence S with n elements
consists of three steps:
– Divide: partition S into two sequences S1 and S2 of about n2
elements each
– Recur: recursively sort S1 and S2
– Conquer: merge S1 and S2 into a unique sorted sequence

7
Divide-and-Conquer
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Merge Sort
MergeSort(A, left, right) {
if (left < right) {
mid = floor((left + right) / 2);
MergeSort(A, left, mid);
MergeSort(A, mid+1, right);
Merge(A, left, mid, right);
}
}

// Merge() takes two sorted subarrays of A and


// merges them into a single sorted subarray of A
// (how long should this take?)
David Luebke

8
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
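Merge() is left unspecified above; one possible linear-time implementation in C is
sketched below (an illustration, not the slide author's code). It merges the sorted
subarrays A[left..mid] and A[mid+1..right] through a temporary buffer.

#include <string.h>

/* Merge the sorted subarrays A[left..mid] and A[mid+1..right] (inclusive bounds). */
void Merge(int A[], int left, int mid, int right) {
    int n = right - left + 1;
    int tmp[n];                            /* temporary buffer (C99 VLA; could also malloc) */
    int i = left, j = mid + 1, k = 0;
    while (i <= mid && j <= right)         /* repeatedly take the smaller front element */
        tmp[k++] = (A[i] <= A[j]) ? A[i++] : A[j++];
    while (i <= mid)   tmp[k++] = A[i++];  /* copy any leftovers */
    while (j <= right) tmp[k++] = A[j++];
    memcpy(&A[left], tmp, n * sizeof(int));
}

Each element is moved a constant number of times, which is the Θ(n) merge cost used in
the analysis that follows.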
Merging Two Sorted Sequences
The conquer step of merge-sort consists of merging two sorted sequences A and B into a
sorted sequence S containing the union of the elements of A and B.
Merging two sorted sequences, each with n/2 elements and implemented by means of a
doubly linked list, takes O(n) time.

Algorithm merge(A, B)
  Input sequences A and B with n/2 elements each
  Output sorted sequence of A ∪ B
  S ← empty sequence
  while ¬A.isEmpty() ∧ ¬B.isEmpty()
    if A.first().element() < B.first().element()
      S.insertLast(A.remove(A.first()))
    else
      S.insertLast(B.remove(B.first()))
  while ¬A.isEmpty()
    S.insertLast(A.remove(A.first()))
  while ¬B.isEmpty()
    S.insertLast(B.remove(B.first()))
  return S
9

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Merge-Sort Tree
An execution of merge-sort is depicted by a binary tree
– each node represents a recursive call of merge-sort and stores
• unsorted sequence before the execution and its partition
• sorted sequence at the end of the execution
– the root is the initial call

7 29 4 → 2 4 7 9

72 → 2 7 94 → 4 9

7→7 2→2 9→9 4→4


10
Merge Sort
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Execution Example

Merge-sort on the input sequence 7 2 9 4 3 8 6 1 (final result 1 2 3 4 6 7 8 9):

  Partition:           7 2 9 4 | 3 8 6 1
  Recursive calls keep partitioning until the base cases (single elements) are reached,
  and then the merges combine the sorted halves back together:
    7 2 → 2 7      9 4 → 4 9      3 8 → 3 8      6 1 → 1 6
    2 7 and 4 9 → 2 4 7 9         3 8 and 1 6 → 1 3 6 8
    2 4 7 9 and 1 3 6 8 → 1 2 3 4 6 7 8 9
Analysis of Merge Sort
Statement Effort

MergeSort(A, left, right) {                      T(n)
  if (left < right) {                            Θ(1)
    mid = floor((left + right) / 2);             Θ(1)
    MergeSort(A, left, mid);                     T(n/2)
    MergeSort(A, mid+1, right);                  T(n/2)
    Merge(A, left, mid, right);                  Θ(n)
  }
}
So T(n) = Θ(1) when n = 1, and
   T(n) = 2T(n/2) + Θ(n) when n > 1
So what is T(n)?

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Recurrence Equation Analysis
The conquer step of merge-sort consists of merging two sorted sequences, each with n/2
elements and implemented by means of a doubly linked list; this takes at most bn steps,
for some constant b.
Likewise, the basis case (n < 2) will take at most b steps.
Therefore, if we let T(n) denote the running time of merge-sort:

    T(n) = b                 if n < 2
    T(n) = 2T(n/2) + bn      if n ≥ 2

We can therefore analyze the running time of merge-sort by finding a solution to the
above equation.
 – That is, a solution that has T(n) only on the left-hand side.

22

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Iterative Substitution
In the iterative substitution, or “plug-and-chug,” technique, we iteratively apply the
recurrence equation to itself and see if we can find a pattern:

    T(n) = 2T(n/2) + bn
         = 2(2T(n/2^2) + b(n/2)) + bn
         = 2^2 T(n/2^2) + 2bn
         = 2^3 T(n/2^3) + 3bn
         = 2^4 T(n/2^4) + 4bn
         = ...
         = 2^i T(n/2^i) + ibn

Note that the base case, T(n) = b, occurs when 2^i = n, that is, when i = log n.
So,
    T(n) = bn + bn log n
Thus, T(n) is O(n log n).
23
Divide-and-Conquer
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Analysis of Merge-Sort
The height h of the merge-sort tree is O(log n)
 – at each recursive call we divide the sequence in half
The overall amount of work done at the nodes of depth i is O(n)
 – we partition and merge 2^i sequences of size n/2^i
 – we make 2^(i+1) recursive calls
Thus, the total running time of merge-sort is O(n log n)

    depth   #seqs   size
    0       1       n
    1       2       n/2
    i       2^i     n/2^i
    …       …       …
24
Merge Sort
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Quick-Sort
Quick-sort is a randomized
sorting algorithm based
x
on the divide-and-
conquer paradigm:
– Divide: pick a random
element x (called pivot) and
partition S into x
• L elements less than x
• E elements equal x L E G
• G elements greater than x
– Recur: sort L and G
– Conquer: join L, E and G x

25
Quick-Sort
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Quicksort
Another divide-and-conquer algorithm
– The array A[p..r] is partitioned into two non-empty subarrays A[p..q] and A[q+1..r]

• Invariant: All elements in A[p..q] are less than all


elements in A[q+1..r]
– The subarrays are recursively sorted by calls to quicksort
– Unlike merge sort, no combining step: two subarrays form an already-sorted
array

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Quicksort

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Partition
We partition an input sequence as follows:
 – We remove, in turn, each element y from S and
 – We insert y into L, E or G, depending on the result of the comparison with the pivot x
Each insertion and removal is at the beginning or at the end of a sequence, and hence
takes O(1) time
Thus, the partition step of quick-sort takes O(n) time

Algorithm partition(S, p)
  Input sequence S, position p of pivot
  Output subsequences L, E, G of the elements of S less than, equal to, or greater than
         the pivot, resp.
  L, E, G ← empty sequences
  x ← S.remove(p)
  while ¬S.isEmpty()
    y ← S.remove(S.first())
    if y < x
      L.insertLast(y)
    else if y = x
      E.insertLast(y)
    else { y > x }
      G.insertLast(y)
  return L, E, G
28
Quick-Sort
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Quick-Sort Tree
An execution of quick-sort is depicted by a binary tree
– Each node represents a recursive call of quick-sort and stores
• Unsorted sequence before the execution and its pivot
• Sorted sequence at the end of the execution
– The root is the initial call
– The leaves are calls on subsequences of size 0 or 1

7 4 9 6 2 → 2 4 6 7 9

4 2 → 2 4 7 9 → 7 9

2→2 9→9
29
Quick-Sort
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Execution Example

Quick-sort on the input sequence 7 2 9 4 3 7 6 1 (final result 1 2 3 4 6 7 7 9):

  Pivot selection: pick a pivot, here 6, and partition into
    L = 2 4 3 1    E = 6    G = 7 9 7
  Recursive calls sort L and G in the same way (L → 1 2 3 4, G → 7 7 9), choosing a new
  pivot for each subsequence and recursing down to subsequences of size 0 or 1.
  Join: concatenating 1 2 3 4, then 6, then 7 7 9 gives 1 2 3 4 6 7 7 9.
Quicksort Code
Quicksort(A, p, r)
{
if (p < r)
{
q = Partition(A, p, r);
Quicksort(A, p, q);
Quicksort(A, q+1, r);
}
}
David Luebke

37
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Partition
Clearly, all the action takes place in the partition()
function
– Rearranges the subarray in place
– End result:

• Two subarrays
• All values in first subarray  all values in second
– Returns the index of the “pivot” element separating the two subarrays

David Luebke

38
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Partition Code
Partition(A, p, r)
    x = A[p];                          // pivot = first element of the subarray
    i = p - 1;
    j = r + 1;
    while (TRUE) {
        do { j--; } while (A[j] > x);  // scan from the right for an element <= x
        do { i++; } while (A[i] < x);  // scan from the left for an element >= x
        if (i < j)
            Swap(A, i, j);
        else
            return j;                  // j separates the two subarrays
    }

Illustrate on A = {5, 3, 2, 6, 4, 1, 3, 7};
What is the running time of partition()?

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
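To check the slide's example concretely, the small driver below (an illustrative sketch;
the 1-indexed wrapper array is an assumption) runs the Hoare-style Partition above on
A = {5, 3, 2, 6, 4, 1, 3, 7} with pivot x = 5. It returns q = 5 and leaves
A = {3, 3, 2, 1, 4, 6, 5, 7}, so every element of the first subarray is <= every element
of the second.

#include <stdio.h>

void Swap(int A[], int i, int j) { int t = A[i]; A[i] = A[j]; A[j] = t; }

int Partition(int A[], int p, int r)          /* same Hoare-style scheme as above */
{
    int x = A[p], i = p - 1, j = r + 1;
    while (1) {
        do { j--; } while (A[j] > x);
        do { i++; } while (A[i] < x);
        if (i < j) Swap(A, i, j);
        else return j;
    }
}

int main(void)
{
    int A[9] = {0, 5, 3, 2, 6, 4, 1, 3, 7};   /* A[1..8] holds the slide's array */
    int q = Partition(A, 1, 8);
    printf("q = %d:", q);                     /* prints q = 5 */
    for (int k = 1; k <= 8; k++) printf(" %d", A[k]);
    printf("\n");                             /* 3 3 2 1 4 6 5 7 */
    return 0;
}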


In-Place Partitioning
Perform the partition using two indices to split S into L and
E U G (a similar method can split E U G into E and G).
j k

3 2 5 1 0 7 3 5 9 2 7 9 8 9 7 6 9 (pivot = 6)

Repeat until j and k cross:


– Scan j to the right until finding an element > x.
– Scan k to the left until finding an element < x.
– Swap elements at indices j and k
j k

3 2 5 1 0 7 3 5 9 2 7 9 8 9 7 6 9

40
Quick-Sort
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Worst-case Running Time
The worst case for quick-sort occurs when the pivot is the unique
minimum or maximum element
One of L and G has size n − 1 and the other has size 0
The running time is proportional to the sum
n + (n − 1) + … + 2 + 1
Thus, the worst-case running time of quick-sort is O(n2)
depth time
0 n

1 n−1

… …

n−1 1 41
Quick-Sort
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Expected Running Time
Consider a recursive call of quick-sort on a sequence of size s
 – Good call: the sizes of L and G are each less than 3s/4
 – Bad call: one of L and G has size greater than 3s/4

[Figure: examples of a good call and a bad call on the sequence 7 2 9 4 3 7 6 1]

A call is good with probability 1/2
 – 1/2 of the possible pivots cause good calls:

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Bad pivots Good pivots Bad pivots


42

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Analyzing Quicksort
What will be the worst case for the algorithm?
– Partition is always unbalanced

What will be the best case for the algorithm?


– Partition is perfectly balanced

Will any particular input elicit the worst case?


– Yes: Already-sorted input

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Analyzing Quicksort
In the worst case:
T(1) = (1)
T(n) = T(n - 1) + (n)

Works out to
T(n) = (n2)

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Analyzing Quicksort
In the best case:
T(n) = 2T(n/2) + (n)

What does this work out to?


T(n) = (n lg n)

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Improving Quicksort
The real liability of quicksort is that it runs in O(n2) on
already-sorted input
two solutions:
– Randomize the input array, OR
– Pick a random pivot element

How will these solve the problem?


– By ensuring that no particular input can be chosen to make quicksort run in O(n^2)
time

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
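One common way to pick a random pivot, sketched in C below, is to swap a uniformly
chosen element into the pivot position before partitioning. This is an illustration,
not the course's reference code: it reuses the Partition() routine shown earlier
(declared here as a prototype), and the function names are assumptions. With a random
pivot the expected running time is O(n log n) on every input, the Las Vegas behaviour
discussed later in these notes.

#include <stdlib.h>

int Partition(int A[], int p, int r);        /* the partition routine shown earlier */

/* Swap a uniformly chosen element into the pivot position before partitioning,
   so that no fixed input can force the worst case. */
int RandomizedPartition(int A[], int p, int r)
{
    int k = p + rand() % (r - p + 1);        /* random index in [p, r] */
    int t = A[p]; A[p] = A[k]; A[k] = t;
    return Partition(A, p, r);
}

void RandomizedQuicksort(int A[], int p, int r)
{
    if (p < r) {
        int q = RandomizedPartition(A, p, r);
        RandomizedQuicksort(A, p, q);
        RandomizedQuicksort(A, q + 1, r);
    }
}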


Analyzing Quicksort: Average Case
Intuitively, a real-life run of quicksort will produce a mix of
“bad” and “good” splits
– Randomly distributed among the recursion tree
– Pretend for intuition that they alternate between best-case (n/2 : n/2) and worst-
case (n-1 : 1)

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Analyzing Quicksort: Average Case
For simplicity, assume:
– All inputs distinct (no repeats)
– Slightly different partition() procedure

• partition around a random element, which is not


included in subarrays
• all splits (0:n-1, 1:n-2, 2:n-3, … , n-1:0) equally
likely
What is the probability of a particular split happening?
Answer: 1/n

David Luebke

48
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Integer Multiplication

To multiply two n-bit integers x and y, split each into a high half and a low half of
n/2 bits. The straightforward recursive approach uses four n/2-bit multiplications, so
T(n) = 4T(n/2) + n, which implies T(n) is O(n^2).

But that is no better than the algorithm we learned in grade school.
Although the expression for xy seems to demand four n/2-bit multiplications, just three
will do:

So, T(n) = 3T(n/2) + n, which implies T(n) is O(n^{log_2 3}), by the Master Theorem.
Thus, T(n) is O(n^1.585).

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
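The identity behind the three-multiplication trick can be illustrated on machine words.
The C sketch below applies one level of the split to 32-bit operands; it is not the full
recursive big-integer algorithm, and the function name mul3 is an assumption.

#include <stdint.h>
#include <stdio.h>

/* One level of the three-multiplication trick on 32-bit operands:
   x = x1*2^16 + x0, y = y1*2^16 + y0
   xy = x1*y1*2^32 + ((x1+x0)(y1+y0) - x1*y1 - x0*y0)*2^16 + x0*y0 */
uint64_t mul3(uint32_t x, uint32_t y)
{
    uint32_t x1 = x >> 16, x0 = x & 0xFFFF;
    uint32_t y1 = y >> 16, y0 = y & 0xFFFF;
    uint64_t a = (uint64_t)x1 * y1;                 /* first multiplication  */
    uint64_t b = (uint64_t)x0 * y0;                 /* second multiplication */
    uint64_t c = (uint64_t)(x1 + x0) * (y1 + y0);   /* third multiplication  */
    return (a << 32) + ((c - a - b) << 16) + b;
}

int main(void)
{
    uint32_t x = 123456789u, y = 987654321u;
    printf("%llu\n", (unsigned long long)mul3(x, y));   /* 121932631112635269 */
    printf("%llu\n", (unsigned long long)x * y);        /* same value, for comparison */
    return 0;
}

Applying the same identity recursively to n-bit numbers gives the T(n) = 3T(n/2) + n
recurrence above.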


Thank You!!

50

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Algorithm Design
Gnanavel R
Assistant Professor
BITS-PILAN
BITS Pilani
Pilani|Dubai|Goa|Hyderabad

1
BITS Pilani
Pilani|Dubai|Goa|Hyderabad

Analysing recursive algorithms

2
Example 3:Merge sort

MERGE-SORT A[1 . . n]
1. If n = 1, done.
2. Recursively sort A[ 1 . . n/2 ]
and A[ n/2+1 . . n ] .
3. “Merge” the 2 sorted lists.

Key subroutine: MERGE

L1.3
Merging two sorted arrays

[Figure: step-by-step merge of two sorted arrays, 20 13 7 2 and 12 11 9 1. At each step
the smaller of the two front elements is removed and appended to the output, producing
1 2 7 9 11 12 …]

Time = Θ(n) to merge a total of n elements (linear time).

L1.16
Analyzing merge sort

T(n)       MERGE-SORT A[1 . . n]
Θ(1)       1. If n = 1, done.
2T(n/2)    2. Recursively sort A[ 1 . . n/2 ] and A[ n/2+1 . . n ].
Θ(n)       3. “Merge” the 2 sorted lists.
Should be T(⌈n/2⌉) + T(⌊n/2⌋), but it turns out not to matter asymptotically.

L1.17
Recurrence for merge sort

T(n) = Θ(1)             if n = 1;
T(n) = 2T(n/2) + Θ(n)   if n > 1.

• We shall usually omit stating the base


case when T(n) = Θ(1) for sufficiently
small n, but only when it has no effect on
the asymptotic solution to the recurrence.
•.

L1.18
Recursion tree

Solve T(n) = 2T(n/2) + cn, where c > 0 is constant.

[Figure: the recursion tree built level by level. The root costs cn, its two children
cost cn/2 each, the four grandchildren cost cn/4 each, and so on down to Θ(1) leaves.
The height is h = lg n, each level contributes cn, and the #leaves = n contribute Θ(n).]

Total = Θ(n lg n)

L1.29
Conclusions

• Θ(n lg n) grows more slowly than Θ(n^2).


• Therefore, merge sort asymptotically
beats insertion sort in the worst case.
• In practice, merge sort beats insertion
sort for n > 30 or so.

L1.30
Recursive Algorithms

Divide
Recur
Conquer

Example : Merge Sort

31

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Recurrence relations

Recursive Functions
void Test(int n)
{
if (n>0)
{ printf (“%d”,n);
Test (n-1);
}
}

T(n) = T(n − 1) + 1    if n > 0,
T(n) = 1               otherwise
32

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example

Find the complexity of the recurrence:

T(n) = 2T(n − 1) − 1   if n > 0,
T(n) = 1               otherwise

Sol: let us try solving this function with substitution
T(n) = 2T(n−1) − 1
T(n) = 2(2T(n−2) − 1) − 1 = 2^2 T(n−2) − 2 − 1
T(n) = 2^2 [2T(n−3) − 1] − 2 − 1 = 2^3 T(n−3) − 2^2 − 2^1 − 2^0
T(n) = 2^3 [2T(n−4) − 1] − 2^2 − 2^1 − 2^0 = 2^4 T(n−4) − 2^3 − 2^2 − 2^1 − 2^0
T(n) = 2^n T(n−n) − 2^(n−1) − 2^(n−2) − … − 2^1 − 2^0
T(n) = 2^n − (2^n − 1)    [Note: 2^(n−1) + 2^(n−2) + … + 2^1 + 2^0 = 2^n − 1]
T(n) = 1.

33

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Recurrence Relations

T(n) = T(n − 1) + 1   if n > 0;   T(n) = 1 if n = 0        → O(n)

T(n) = T(n − 1) + n   if n > 0;   T(n) = 1 if n = 0        → O(n^2)

34

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Dividing functions

T(n) = T(n/2) + 1   if n > 1;   T(n) = 1 if n = 1        → O(log n)

T(n) = T(n/2) + n   if n > 1;   T(n) = 1 if n = 1

35

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Recurrence Relations

T(n) = T(n − 1) + 1   if n > 0;   T(n) = 1 if n = 0        → O(n)

T(n) = T(n − 1) + n   if n > 0;   T(n) = 1 if n = 0        → O(n^2)

Solve by

1) Recursion Tree
2) Substitution Method
3) Master Theorem

36

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Recurrence Relation

T(n) = aT(n/b)+f(n) where a> = 1, b>1


n = the size of the current problem
a = the number of sub-problems in the recursion.
n/b = the size of each subproblem
f(n) = the cost of the work that has to be done outside the recursive calls (cost of
dividing + merging).

37

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


The Recursion Tree
Draw the recursion tree for the recurrence relation and look for a pattern:

    T(n) = b                 if n < 2
    T(n) = 2T(n/2) + bn      if n ≥ 2

    depth   T’s    size      time
    0       1      n         bn
    1       2      n/2       bn
    i       2^i    n/2^i     bn
    …       …      …         …

Total time = bn + bn log n
(last level plus all previous levels)
38
Divide-and-Conquer
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Dividing functions

T(n) = T(n/2) + 1   if n > 1;   T(n) = 1 if n = 1        → O(log n)

T(n) = T(n/2) + n   if n > 1;   T(n) = 1 if n = 1

39

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Master Method

– A utility method for analysis recurrence relations


– Useful in many cases for divide and conquer algorithms.

– These recurrence relations are of the forms:


T(n) = aT(n/b)+f(n) where a> = 1, b>1
n = the size of the current problem
a = the number of sub-problems in the recursion.
n/b = the size of each subproblem
f(n) = the cost of the work that has to be done outside the recursive calls (cost of
dividing + merging).

40

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Master Method
Many divide-and-conquer recurrence equations have the form:

    T(n) = c                  if n < d
    T(n) = aT(n/b) + f(n)     if n ≥ d

The Master Theorem:
1. if f(n) is O(n^{log_b a − ε}), then T(n) is Θ(n^{log_b a})
2. if f(n) is Θ(n^{log_b a} log^k n), then T(n) is Θ(n^{log_b a} log^{k+1} n)
3. if f(n) is Ω(n^{log_b a + ε}), then T(n) is Θ(f(n)),
   provided af(n/b) ≤ δf(n) for some δ < 1.
41
Divide-and-Conquer
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
How to apply the master
method(step by step)
1. Extract a, b and f(n) from the given recurrence
2. Determine n^{log_b a}
3. Compare f(n) and n^{log_b a} asymptotically
4. Determine the appropriate Master Theorem case and
apply it

42

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Master Method, Example 1

Example:  T(n) = 4T(n/2) + n
Solution: log_b a = 2, so case 1 says T(n) is Θ(n^2).

43
Divide-and-Conquer
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Master Method, Example 2

Example:  T(n) = 2T(n/2) + n log n
Solution: log_b a = 1, so case 2 says T(n) is Θ(n log^2 n).

44
Divide-and-Conquer
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Master Method, Example 3

Example:  T(n) = T(n/3) + n log n
Solution: log_b a = 0, so case 3 says T(n) is Θ(n log n).

45
Divide-and-Conquer
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Master Method, Example 4

Example:  T(n) = 8T(n/2) + n^2
Solution: log_b a = 3, so case 1 says T(n) is Θ(n^3).

46
Divide-and-Conquer
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Master Method, Example 5

Example:  T(n) = 9T(n/3) + n^3
Solution: log_b a = 2, so case 3 says T(n) is Θ(n^3).

47
Divide-and-Conquer
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Master Method, Example 6

Example:  T(n) = T(n/2) + 1   (binary search)
Solution: log_b a = 0, so case 2 says T(n) is Θ(log n).

48
Divide-and-Conquer
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
exercise[Ex:-1]

Imagine that: T(n) = 2T(n/2) + n

Sol:
1. Extract a = 2, b = 2 and f(n) = n
2. Determine n^{log_b a} = n^{log_2 2} = n^1 = n
3. Compare n^{log_b a} = n with f(n) = n
4. Thus case 2 (evenly distributed) because f(n) = Θ(n):
   T(n) = Θ(n^{log_b a} log n)
        = Θ(n^1 log n)
        = Θ(n log n)

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Exercise: 2

Imagine that T(n) = 9T(n/3) + n

1. Extract a = 9, b = 3 and f(n) = n
2. Determine n^{log_b a} = n^{log_3 9} = n^2
3. Compare: n^{log_b a} = n^2 with f(n) = n
4. Thus case 1 (express f(n) in terms of n^{log_b a}) because f(n) = O(n^{2−ε}):
   T(n) = Θ(n^{log_b a}) = Θ(n^2)

50

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Thank You!!

51

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Algorithm Design
BSDCH317
Gnanavel R

BITS Pilani Asst. Prof


Pilani|Dubai|Goa|Hyderabad

1
BITS Pilani
Pilani|Dubai|Goa|Hyderabad

P,NP Complete, NP Hard problems

2
Complexity Classes
A complexity class is the set of all of the computational problems which
can be solved using a certain amount of a certain computational
resource.
P
The complexity class P is the set of decision problems that can be
solved by a deterministic machine in polynomial time.
This class corresponds to an intuitive idea of the problems which can
be effectively solved in the worst cases.
NP
The complexity class NP is the set of decision problems that can be
solved by a nondeterministic machine in polynomial time.
This class contains many problems that people would like to be able to
solve effectively.
All the problems in this class have the property that their solutions can
be checked effectively.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Deterministic and Non
Deterministic

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Problems
Optimization Problem
Optimization problems are those for which the objective is to maximize or
minimize some values. For example,

• Finding the minimum number of colors needed to color a given graph.

• Finding the shortest path between two vertices in a graph.

Decision Problem
There are many problems for which the answer is a Yes or a No. These
types of problems are known as decision problems. For example,

• Whether a given graph can be colored by only 4-colors.

• Finding Hamiltonian cycle in a graph is not a decision problem, whereas


checking a graph is Hamiltonian or not is a decision problem.
5

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


P Complexity Class

P is the complexity class containing decision problems


which can be solved by a deterministic Turing machine
using a polynomial amount of computation time, or
polynomial time.
P is often taken to be the class of computational problems
which are "efficiently solvable" or "tractable“.
Problems that are solvable in theory, but cannot be solved
in practice, are called intractable.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


P Complexity Class

Definition of P class problem: the set of decision-based problems that can be solved
(i.e., whose output can be produced) within polynomial time. P problems are regarded
as easy to solve.
Definition of polynomial time: an algorithm runs in polynomial time if the time it
takes to produce the output is bounded by a polynomial in the size of the given input.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


NP Complexity class

The class NP consists of those problems that are verifiable


in polynomial time.
NP is the class of decision problems for which it is easy to
check the correctness of a claimed answer, with the aid
of a little extra information.
Every problem in this class can be solved in exponential
time using exhaustive search.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


NP
In computational complexity theory, NP ("Nondeterministic Polynomial
time") is the set of decision problems solvable in polynomial time on a
nondeterministic Turing machine.

It is the set of problems that can be "verified" by a deterministic Turing


machine in polynomial time.

All the problems in this class have the property that their solutions can
be checked effectively.
This class contains many problems that people would like to be able to
solve effectively, including
the Boolean satisfiability problem (SAT)
the Hamiltonian path problem (special case of TSP)
the Vertex cover problem.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


P versus NP
Every decision problem that is solvable by a deterministic polynomial time
algorithm is also solvable by a polynomial time non-deterministic
algorithm.

All problems in P can be solved with polynomial time algorithms, whereas


all problems in NP - P are intractable.

It is not known whether P = NP. However, many problems are known in NP


with the property that if they belong to P, then it can be proved that P =
NP.

If P ≠ NP, there are problems in NP that are neither in P nor in NP-


Complete.

The problem belongs to class P if it’s easy to find a solution for the
problem. The problem belongs to NP, if it’s easy to check a solution that
may have been very tedious to find.
10

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


NP-Completeness

Definition of NP-Completeness
A language B is NP-complete if it satisfies two conditions
B is in NP
Every A in NP is polynomial time reducible to B.

If a language satisfies the second property, but not


necessarily the first one, the language B is known as
NP-Hard.
Informally, a search problem B is NP-Hard if there exists
some NP-Complete problem A that Turing reduces to B.

11

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


NP-Completeness

A problem is in the class NPC if it is in NP and is as hard as


any problem in NP. A problem is NP-hard if all problems
in NP are polynomial time reducible to it, even though it
may not be in NP itself.

NP-hard
If a polynomial time algorithm exists for any of these
problems, all problems in NP would be polynomial time
solvable. These problems are called NP-complete.

12

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


P, NP

13

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Reductions:

The class NP-complete (NPC) problems consist of a set of


decision problems (a subset of class NP) that no one
knows how to solve efficiently.
But if there were a polynomial solution for even a single
NP-complete problem, then every problem in NPC will
be solvable in polynomial time.
For this, we need the concept of reductions.

Suppose there are two problems, A and B. You know that it


is impossible to solve problem A in polynomial time.
You want to prove that B cannot be solved in polynomial
time. We want to show that (A ∉ P) => (B ∉ P)
14

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Polynomial Time Reduction:

We say that decision problem L1 is polynomial-time
reducible to decision problem L2 (written L1 ≤p L2) if there is a
polynomial-time computable function f such that for all x,
x ∈ L1 if and only if f(x) ∈ L2.

15

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


References

All algorithms and images reference from the book –


Fundamentals of Computer Algorithm- Reference 2 of
course handout

16

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


P & NP Problems

[Figures: illustrations of the P and NP classes, examples of P and NP problems,
reductions between problems, and the NP-Hard and NP-Complete classes]


Randomized Algorithms

❖ Randomized algorithms use random numbers or choices to


decide their next step. We use these algorithms to reduce
space and time complexity. There are two types of
randomized algorithms:
1. Las Vegas algorithms
2. Monte-Carlo algorithms

30

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Randomized Algorithms

Las Vegas algorithms


❖ Las Vegas algorithms always return correct results or fail to
give one; however, its runtime may vary. An upper bound can
be defined for its runtime.

❖ The Las Vegas method of randomized algorithms never gives
incorrect outputs, so the running time becomes the random
variable. For example, in string-matching algorithms, Las Vegas
algorithms start from the beginning once they encounter an
error. This increases the probability of correctness. Eg.,
Randomized Quick Sort Algorithm.

31

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956




Randomized Algorithms

Monte-Carlo algorithms
❖ The Monte-Carlo algorithms work in a fixed running time;
however, it does not guarantee correct results. One way to
control its runtime is by limiting the number of iterations.

❖ The Monte Carlo method of randomized algorithms focuses


on finishing the execution within the given time constraint.
Therefore, the running time of this method is deterministic.
For example, in string matching, if monte carlo encounters an
error, it restarts the algorithm from the same point. Thus,
saving time. Eg., Karger’s Minimum Cut Algorithm

36

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Thank You!!

37

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Algorithm Design
BSDCH317
Gnanavel R
BITS Pilani Asst. Prof
Pilani|Dubai|Goa|Hyderabad

1
BITS Pilani
Pilani|Dubai|Goa|Hyderabad

Branch and Bound

2
Branch and Bound

In computer science, there is a large number of


optimization problems which has a finite but extensive
number of feasible solutions.
Branch and bound (B&B) is an algorithm paradigm widely
used for solving such problems.
Branch and bound algorithms are used to find the optimal
solution for combinatory, discrete, and general
mathematical optimization problems.
In general, given a problem, a branch and bound
algorithm explores the entire search space of possible
solutions and provides an optimal solution.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Branch and Bound

A branch and bound algorithm consists of a stepwise enumeration of


possible candidate solutions by exploring the entire search space.
With all the possible solutions, we first build a rooted decision tree.
The root node represents the entire search space:

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Branch and Bound

• Each child node is a partial solution and part of the


solution set.
• Before constructing the rooted decision tree, we set an
upper and lower bound for a given problem based on the
optimal solution.
• At each level, we need to make a decision about which
node to include in the solution set.
• At each level, we explore the node with the best bound.

• In this way, we can find the best and optimal solution


fast.
5

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


JOB ASSIGNMENT PROBLEM

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


7

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


JOB ASSIGNMENT PROBLEM
USING B&B

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Problem Statement
Ref: https://www.baeldung.com/cs/branch-and-bound

Let’s first define a job assignment problem.


In a standard version of a job assignment problem, there
can be N jobs and N workers.
To keep it simple, we’re taking 3 jobs and 3 workers in our
example:

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example
Ref: https://www.baeldung.com/cs/branch-and-bound

10

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Pseudocode

11

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Pseudocode
Here, M[][] is the input cost matrix that contains information like the
number of available jobs, a list of available workers, and the
associated cost for each job.
The function MinCost() maintains a list of active nodes.
The function LeastCost() calculates the minimum cost of the active
node at each level of the tree.
After finding the node with minimum cost, we remove the node from the
list of active nodes and return it.

We’re using the Add() function in the pseudocode, which calculates the
cost of a particular node and adds it to the list of active nodes.

In the search space tree, each node contains some information, such
as cost, a total number of jobs, as well as a total number of workers.

12

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Branch and bound
Branch and bound is a systematic method for solving
optimization problems
B&B is a rather general optimization technique that applies
where the greedy method and dynamic programming fail.
However, it is much slower. Indeed, it often leads to exponential
time complexities in the worst case.
On the other hand, if applied carefully, it can lead to algorithms
that run reasonably fast on average.
The general idea of B&B is a BFS-like search for the optimal
solution, but not all nodes get expanded (i.e., their children
generated).
Rather, a carefully selected criterion determines which node to
expand and when, and another criterion tells the algorithm
when an optimal solution has been found.

13

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Job Assignment Problem

Input: n jobs, n employees, and an n x n matrix A where A(i, j)
is the cost if person i performs job j.
Problem: find a one-to-one matching of the n employees to
the n jobs so that the total cost is minimized.
Formally, find a permutation f such that
C(f) = A(1, f(1)) + A(2, f(2)) + ... + A(n, f(n)) is minimized.

A brute-force method would generate the whole
solution tree, where every path from the root to any
leaf is a solution, then evaluate the cost C(f) of each
solution, and finally choose the path with the
minimum cost.
14

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Job Assignment Problem
The first idea of B&B is to develop "a predictor" of how likely a node in the solution tree
is to lead to an optimal solution.
This predictor is quantitative.
With such a predictor, the B&B works as follows:
• Which node to expand next: B&B chooses the live node with the best predictor value
• B&B simply expands that node (i.e., generates all its children)
• the predictor value of each newly generated node is computed, the just-expanded node is
now designated as a dead node, and the newly generated nodes are designated as live
nodes.
• Termination criterion: when the best node chosen for expansion turns out to be a final leaf
(i.e., at level n), that is when the algorithm terminates, and that node corresponds to the
optimal solution.

What could that predictor be?

In the case of a minimization problem, one candidate predictor for a node is the cost so far.
That is, each node corresponds to a (partial) solution (from the root to that node), and the
cost-so-far predictor is the cost of that partial solution.

A better (lower-bound) predictor for the job assignment problem is:
(cost so far) + (sum of the minimums of the remaining rows)
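As a small illustration of this bound, here is a C sketch (my own illustration, not the lecture's code): a depth-first branch and bound for the job assignment problem that prunes a node whenever (cost so far) + (sum of the minimums of the remaining rows, taken over the still-free jobs) cannot beat the best complete assignment found so far. The 3x3 cost matrix is an assumed example.

/* A minimal sketch (assumed 3x3 cost data): depth-first branch and bound
 * for the job assignment problem, pruning with the bound
 *   (cost so far) + (sum of the row minimums of the unassigned workers). */
#include <stdio.h>
#include <limits.h>

#define N 3

int cost[N][N] = {            /* cost[i][j] = cost if worker i does job j (assumed data) */
    {9, 2, 7},
    {6, 4, 3},
    {5, 8, 1}
};

int best = INT_MAX;           /* best complete assignment cost found so far */
int bestJob[N];               /* bestJob[i] = job given to worker i in the best solution */

/* Lower bound of a partial solution: cost so far plus, for every
 * still-unassigned worker, the cheapest job that is still free. */
int bound(int worker, int costSoFar, const int usedJob[N]) {
    int lb = costSoFar;
    for (int i = worker; i < N; i++) {
        int rowMin = INT_MAX;
        for (int j = 0; j < N; j++)
            if (!usedJob[j] && cost[i][j] < rowMin)
                rowMin = cost[i][j];
        lb += rowMin;
    }
    return lb;
}

void branch(int worker, int costSoFar, int usedJob[N], int job[N]) {
    if (worker == N) {                        /* final leaf: complete assignment */
        if (costSoFar < best) {
            best = costSoFar;
            for (int i = 0; i < N; i++) bestJob[i] = job[i];
        }
        return;
    }
    for (int j = 0; j < N; j++) {
        if (usedJob[j]) continue;
        usedJob[j] = 1; job[worker] = j;
        /* expand this child only if its bound can still beat the best solution */
        if (bound(worker + 1, costSoFar + cost[worker][j], usedJob) < best)
            branch(worker + 1, costSoFar + cost[worker][j], usedJob, job);
        usedJob[j] = 0;
    }
}

int main(void) {
    int usedJob[N] = {0}, job[N];
    branch(0, 0, usedJob, job);
    printf("minimum total cost = %d\n", best);
    for (int i = 0; i < N; i++)
        printf("worker %d -> job %d (cost %d)\n", i + 1, bestJob[i] + 1, cost[i][bestJob[i]]);
    return 0;
}

The best-first version described above would instead keep the live nodes in a min-heap keyed by this bound and always expand the node with the smallest bound.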

15

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


The General Branch and
Bound Algorithm
Each solution is assumed to be expressible as an array
X[1:n] (as was seen in Backtracking).
A predictor, called an approximate cost function CC, is
assumed to have been defined.
Definitions:
A live node is a node that has not been expanded
A dead node is a node that has been expanded
The expanded node (or E-node for short) is the live node
with the best CC value.

16

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


The General Branch and
Bound Algorithm
Procedure B&B()
begin
  E: nodepointer;
  E := new(node); -- this is the root node, which is the dummy start node
  H: heap;        -- a heap for all the live nodes
                  -- H is a min-heap for minimization problems,
                  -- and a max-heap for maximization problems.
  while (true) do
    if (E is a final leaf) then
      -- E is an optimal solution
      print out the path from E to the root;
      return;
    endif
    Expand(E);
    if (H is empty) then
      report that there is no solution;
      return;
    endif
    E := delete-top(H);
  endwhile
end

Procedure Expand(E)
begin
  - Generate all the children of E;
  - Compute the approximate cost value CC of each child;
  - Insert each child into the heap H;
end
17
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
References

All algorithms and images are referenced from the book
Fundamentals of Computer Algorithms - Reference 2 of the
course handout

https://towardsdatascience.com/the-branch-and-bound-algorithm-a7ae4d227a69

18

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Knapsack Problem

❖ Given n items of known weights wi and values vi, i = 1,
2, . . . , n, and a knapsack of capacity W, find the most
valuable subset of the items that fit in the knapsack.
❖ It is convenient to order the items of a given instance in
descending order by their value-to-weight ratios.
❖ A simple way to compute the upper bound ub is to add to
v, the total value of the items already selected, the
product of the remaining capacity of the knapsack W − w
and the best per-unit payoff among the remaining items,
which is v(i+1)/w(i+1):
ub = v + (W − w) * (v(i+1)/w(i+1))

19

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Knapsack Problem

ITEM WEIGHT VALUE


1 4 $40
2 7 $42
3 5 $25
4 3 $12
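Below is a minimal C sketch of the upper-bound computation ub = v + (W − w)(v(i+1)/w(i+1)) for the items in this table. The knapsack capacity is not visible in the extracted slide, so W = 10 is assumed purely for illustration; the items are already listed in descending order of value-to-weight ratio.

/* A minimal sketch (assumption: capacity W = 10, not shown in the slide text)
 * of the branch-and-bound upper bound for the knapsack instance above. */
#include <stdio.h>

#define N 4
static const int W = 10;                         /* assumed capacity */
static const int weight[N]   = {4, 7, 5, 3};
static const double value[N] = {40, 42, 25, 12};

/* Upper bound for a node at depth `level` (items 0..level-1 already decided),
 * with current total weight w and total value v. */
double upper_bound(int level, int w, double v) {
    if (w > W) return -1.0;                      /* infeasible node */
    if (level >= N) return v;                    /* no items left to add */
    return v + (W - w) * (value[level] / weight[level]);
}

int main(void) {
    /* Root node: nothing decided yet. */
    printf("ub(root)           = %.1f\n", upper_bound(0, 0, 0.0));
    /* Child "take item 1": w = 4, v = 40, next undecided item is item 2. */
    printf("ub(with item 1)    = %.1f\n", upper_bound(1, 4, 40.0));
    /* Child "skip item 1": w = 0, v = 0, next undecided item is item 2. */
    printf("ub(without item 1) = %.1f\n", upper_bound(1, 0, 0.0));
    return 0;
}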

20

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Knapsack Problem

21

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


22

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Knapsack Problem

23

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Knapsack Problem

24

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Thank You!!

25

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Algorithm Design
BSDCH317
Gnanavel R

BITS Pilani Asst. Prof


Pilani|Dubai|Goa|Hyderabad

1
Sum of Subsets
The subset sum problem is the problem of finding a subset such that the sum
of its elements equals a given number.

A set A of n positive integers and a value sum are given; find whether or
not there exists any subset of the given set, the sum of whose elements
is equal to the given value of sum.

Example:

Given the following set of positive numbers:

{ 2, 9, 10, 1, 99, 3}

We need to find if there is a subset for a given sum say 4:

{ 1, 3 }
2

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Sum of Subset

One way to find subsets that sum to K is to consider all
possible subsets.

The power set contains all the subsets generated from a
given set.

The size of such a power set is 2^N.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Sum of Subset

• Start with an empty set.
• Add the next element from the list to the set.
• If the subset has sum M, then stop with that subset as the
solution.
• If the subset is not feasible or if we have reached the end of
the set, then backtrack through the subset until we find the
most suitable value.
• If the subset is feasible (sum of subset < M) then go to step 2.
• If we have visited all the elements without finding a suitable
subset and no backtracking is possible, then stop without a
solution.
4

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Sum of Subset

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example

Input:
This algorithm takes a set of numbers, and a sum value.
The Set: {10, 7, 5, 18, 12, 20, 15}
The sum Value: 35
Output:
All possible subsets of the given set whose element sums equal the given sum value.
{10, 7, 18}
{10, 5, 20}
{5, 18, 12}
{20, 15}
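A short backtracking sketch in C (an illustration, not the slide's own code) that enumerates exactly these subsets for the set {10, 7, 5, 18, 12, 20, 15} and sum value 35:

/* Backtracking sketch: print every subset of the example set whose sum is 35. */
#include <stdio.h>

#define N 7
static const int set[N] = {10, 7, 5, 18, 12, 20, 15};
static const int TARGET = 35;

static int chosen[N];              /* chosen[i] = 1 if set[i] is in the current subset */

static void print_subset(void) {
    printf("{ ");
    for (int i = 0; i < N; i++)
        if (chosen[i]) printf("%d ", set[i]);
    printf("}\n");
}

/* Decide elements from index i onward; `sum` is the sum chosen so far. */
static void subset_sum(int i, int sum) {
    if (sum == TARGET) {           /* feasible subset found */
        print_subset();
        return;
    }
    if (i == N || sum > TARGET)    /* dead end (all elements positive): backtrack */
        return;
    chosen[i] = 1;                 /* include set[i] */
    subset_sum(i + 1, sum + set[i]);
    chosen[i] = 0;                 /* exclude set[i] and try the next element */
    subset_sum(i + 1, sum);
}

int main(void) {
    subset_sum(0, 0);
    return 0;
}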

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Problem statement:

Let S = {S1, …, Sn} be a set of n positive integers; then we have to find a subset whose sum is equal to a given
positive integer d. It is always convenient to sort the set's elements in ascending order, that is, S1 ≤ S2
≤ … ≤ Sn.

Algorithm:

Let S be a set of elements and m the expected sum of the subset. Then:

Start with an empty set.

Add to the subset the next element from the list.

If the subset has sum m, then stop with that subset as the solution.

If the subset is not feasible or if we have reached the end of the set, then backtrack through the subset until
we find the most suitable value.

If the subset is feasible, then repeat step 2.

If we have visited all the elements without finding a suitable subset and no backtracking is possible, then
stop without a solution.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example

Example: Solve the following problem and draw a portion of the state space tree: M = 30, W = {5, 10, 12, 13, 15, 18}
Solution:

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Bounding Function

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Algorithm

10

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


11

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Graph coloring

Let G be a graph and m be a given positive integer. We want
to discover whether the nodes of G can be colored in
such a way that no two adjacent nodes have the same
color yet only m colors are used. This is termed the
m-colorability decision problem.

Given an undirected graph and a number m, determine if
the graph can be colored with at most m colors such
that no two adjacent vertices of the graph are colored
with the same color.

12

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Graph coloring

The smallest number of colors needed to color a graph G is
called its chromatic number.
For example, the following undirected graph can be colored
using a minimum of 2 colors.
Hence the chromatic number of the graph is 2.

13

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Graph Coloring

Input:
A graph represented in 2D array format of size V * V where V is
the number of vertices in graph and the 2D array is the
adjacency matrix representation and value graph[i][j] is 1 if
there is a direct edge from i to j, otherwise the value is 0.
An integer m that denotes the maximum number of colors which
can be used in graph coloring
Output:
Return array color of size V that has numbers from 1 to m. Note
that color[i] represents the color assigned to the ith vertex.
Return false if the graph cannot be colored with m colors.

14

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Graph Coloring

Solution:
Naive Approach:
The brute force approach would be to generate all possible
combinations (or configurations) of colors.
After generating a configuration, check if any adjacent
vertices have the same color or not. If the conditions
are met, add the combination to the result and break the
loop.
Since each node can be colored using any of the m
colors, the total number of possible color configurations
is m^V. The complexity is exponential, which is very
large.

15

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Graph coloring

Using Backtracking:
By using the backtracking method, the main idea is to
assign colors one by one to different vertices, starting from
the first vertex (vertex 0).
Before assigning a color, check whether any adjacent vertex
already has that color by considering the colors
already assigned to the adjacent vertices.
If the color assignment does not violate any constraints,
then we mark that color as part of the result. If no color
assignment is possible, then backtrack and return
false.
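A minimal C sketch of this backtracking scheme; the 4-vertex example graph in the code is an assumption, since the slides' own graphs are in the figures.

/* Backtracking m-coloring sketch on an assumed 4-vertex example graph
 * (a 4-cycle 0-1-2-3-0 plus the chord 0-2). */
#include <stdio.h>
#include <stdbool.h>

#define V 4

static const int graph[V][V] = {
    {0, 1, 1, 1},
    {1, 0, 1, 0},
    {1, 1, 0, 1},
    {1, 0, 1, 0}
};

static int color[V];     /* color[i] in 1..m, or 0 if not yet colored */

/* Can vertex v take color c without clashing with an adjacent colored vertex? */
static bool safe(int v, int c) {
    for (int u = 0; u < V; u++)
        if (graph[v][u] && color[u] == c)
            return false;
    return true;
}

/* Try to color vertices v, v+1, ..., V-1 using colors 1..m. */
static bool m_coloring(int v, int m) {
    if (v == V) return true;                  /* all vertices colored */
    for (int c = 1; c <= m; c++) {
        if (safe(v, c)) {
            color[v] = c;
            if (m_coloring(v + 1, m)) return true;
            color[v] = 0;                     /* undo and backtrack */
        }
    }
    return false;                             /* no color works: backtrack further */
}

int main(void) {
    int m = 3;
    if (m_coloring(0, m)) {
        for (int v = 0; v < V; v++)
            printf("vertex %d -> color %d\n", v, color[v]);
    } else {
        printf("the graph cannot be colored with %d colors\n", m);
    }
    return 0;
}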

16

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


17

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


18

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


19

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


20

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


21

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Graph coloring

22

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Graph coloring

23

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Graph coloring

24

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Graph coloring

25

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Graph coloring

26

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Hamiltonian cycle

Hamiltonian Path in an undirected graph is a path that visits


each vertex exactly once. A Hamiltonian cycle (or
Hamiltonian circuit) is a Hamiltonian Path such that there
is an edge (in the graph) from the last vertex to the first
vertex of the Hamiltonian Path.

27

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Hamiltonian cycle

Naive Algorithm
Generate all possible configurations of vertices and print a
configuration that satisfies the given constraints. There
will be n! (n factorial) configurations.
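The backtracking solution itself appears in the figures on the following slides; the C sketch below illustrates the standard backtracking approach on an assumed 5-vertex example graph: extend the path one vertex at a time and backtrack whenever no unvisited adjacent vertex remains.

/* Backtracking sketch for a Hamiltonian cycle on an assumed 5-vertex graph. */
#include <stdio.h>
#include <stdbool.h>

#define V 5

static const int graph[V][V] = {          /* undirected example graph (assumed) */
    {0, 1, 0, 1, 0},
    {1, 0, 1, 1, 1},
    {0, 1, 0, 0, 1},
    {1, 1, 0, 0, 1},
    {0, 1, 1, 1, 0}
};

static int path[V];                       /* path[i] = i-th vertex on the partial cycle */

static bool can_extend(int v, int pos) {
    if (!graph[path[pos - 1]][v]) return false;   /* must be adjacent to the last vertex */
    for (int i = 0; i < pos; i++)                 /* must not repeat a vertex */
        if (path[i] == v) return false;
    return true;
}

static bool hamiltonian(int pos) {
    if (pos == V)                                  /* all vertices used: close the cycle */
        return graph[path[V - 1]][path[0]] == 1;
    for (int v = 1; v < V; v++) {                  /* vertex 0 is fixed as the start */
        if (can_extend(v, pos)) {
            path[pos] = v;
            if (hamiltonian(pos + 1)) return true;
            path[pos] = -1;                        /* backtrack */
        }
    }
    return false;
}

int main(void) {
    path[0] = 0;
    if (hamiltonian(1)) {
        printf("Hamiltonian cycle: ");
        for (int i = 0; i < V; i++) printf("%d ", path[i]);
        printf("%d\n", path[0]);
    } else {
        printf("no Hamiltonian cycle exists\n");
    }
    return 0;
}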

28

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


29

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


30

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Hamiltonian cycle

31

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Hamiltonian cycle

32

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Hamiltonian cycle

33

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Backtrack

34

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


References

All algorithms and images are referenced from the book
Fundamentals of Computer Algorithms - Reference 2 of the
course handout

35

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Thank You!!

36

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Algorithm Design
BSDCH317
Gnanavel R

BITS Pilani Asst. Prof


Pilani|Dubai|Goa|Hyderabad

1
Backtracking

Backtracking can be defined as a general algorithmic


technique that considers searching every possible
combination in order to solve a computational problem.
It is known for solving problems recursively one step at a time and removing those
solutions that do not satisfy the problem constraints at any point of time.

It is a brute force approach that tries out all the possible solutions.

The term backtracking implies - if the current solution is not suitable, then eliminate
it and backtrack (go back) to check for other solutions.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Backtracking

3
Image Source Ref: InterviewBit.com
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Backtracking

Backtracking can be understood as searching a tree for a particular "goal" leaf node.

To "explore" node N:
1. If N is a goal node, return "success"
2. If N is a leaf node, return "failure"
3. For each child C of N,
Explore C
If C was successful, return "success"
4. Return "failure"

The backtracking algorithm determines the solution by systematically searching the
solution space for the given problem.
Backtracking is a depth-first search with a bounding function.

Every solution generated using backtracking must satisfy a complex set of constraints.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Backtracking

The Algorithm begins to build up a solution, starting with an


empty solution set S . S = {}

In recursion, the function calls itself until it reaches a base


case. In backtracking, we use recursion to explore all the
possibilities until we get the best result for the problem.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Backtracking
The solution is based on finding one or more vectors that maximize,
minimize, or satisfy a criterion function P(x1, …, xn).
If forming a solution at any point seems not promising, ignore it.
All possible solutions require a set of constraints divided into two
categories:
1. Explicit constraints: rules that restrict each xi
to take on values only from a given set. Ex: xi = 0 or 1.
2. Implicit constraints: rules that determine
which of the tuples in the solution space of I satisfy the criterion function.
Backtracking is a modified depth-first search of a tree.
Backtracking is a procedure whereby, after determining that a node can
lead to nothing but a dead end, we go back (backtrack) to the node's
parent and proceed with the search on the next child.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Problems in backtracking

There are three types of problems in backtracking:

Decision problem - find a feasible solution.
Optimization problem - find the best solution.
Enumeration problem - find all feasible solutions.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


N-Queens Problem:

This is a generalization problem. If we take n = 8, then the
problem is called the 8-queens problem. If we take
n = 4, then the problem is called the 4-queens problem. A
classic combinatorial problem is to place n queens on an
n*n chessboard so that no two attack, i.e., no two
queens are on the same row, column or diagonal.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


9

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


10

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


11

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


4-Queens problem:

Consider a 4*4 chessboard. Let there be 4 queens.

The objective is to place these 4 queens on the 4*4 chessboard in
such a way that no two queens are placed in the
same row, same column or diagonal position.
The explicit constraints are that the 4 queens can be placed on
the 4*4 chessboard in 4^4 ways.
The implicit constraints are that no two queens are in the same
row, column or diagonal.
Let {x1, x2, x3, x4} be the solution vector, where xi is the column
on which queen i is placed.

12

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


4-Queens problem:

The first queen is placed in the first row and first column.

The second queen should not be in the first row or the second column. It should be placed
in the second row and in the second, third or fourth column. If we place it in the second column,
both will be on the same diagonal, so place it in the third column.

[Boards: Q1 at row 1, column 1; then Q2 placed at row 2, column 3]

13

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


4-Queens problem:

We are unable to place queen 3 in the third row, so go back to
queen 2 and place it somewhere else.

[Boards: Q2 is moved to row 2, column 4; queen 3 can then be placed at row 3, column 2]

14

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


4-Queens problem:

Now the fourth queen should be placed in the 4th row and 3rd
column, but there will be a diagonal attack from queen 3.
So go back, remove queen 3 and place it in the next
column. But that is not possible, so move back to queen 2
and move it to the next column, but that is not possible either. So go
back to queen 1 and move it to the next column.

[Boards: the partial placements that lead to dead ends, before queen 1 is moved to column 2]

15

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


8*8 queens Problem
Algorithm
Step 1: Start

Step 2: Given n queens, read n from user and let us denote the queen number by k. k=1,2,..,n.

Step 3: We start a loop to check whether the kth queen can be placed in the
respective column of the kth row.

Step 4: To check whether the queen can be placed or not, we check that the previous
queens are not on the same diagonal or in the same column as it.

Step 5: If the queen cannot be placed, backtracking is done to the previous queens until a
feasible placement is found.

Step 6: Repeat the steps 3-5 until all the queens are placed.

Step 7: The column numbers of the queens are stored in an array and printed as a n-tuple
solution

Step 8: Stop
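A compact C sketch of the steps above (for simplicity n is fixed as a constant N = 8 rather than read from the user; x[k] holds the column of the queen in row k):

/* Backtracking sketch for the n-queens problem, first solution only. */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define N 8

static int x[N + 1];              /* 1-based, as in the lecture's notation */

/* Can the k-th queen be placed in column c, given queens 1..k-1? */
static bool place(int k, int c) {
    for (int i = 1; i < k; i++)
        if (x[i] == c || abs(x[i] - c) == abs(i - k))  /* same column or same diagonal */
            return false;
    return true;
}

/* Try every column for queen k; backtrack when none fits. */
static bool nqueens(int k) {
    if (k > N) return true;                    /* all queens placed */
    for (int c = 1; c <= N; c++) {
        if (place(k, c)) {
            x[k] = c;
            if (nqueens(k + 1)) return true;   /* stop at the first solution */
            /* otherwise fall through: try the next column (backtracking) */
        }
    }
    return false;
}

int main(void) {
    if (nqueens(1)) {
        printf("one solution as an n-tuple of column numbers:\n(");
        for (int k = 1; k <= N; k++) printf("%d%s", x[k], k < N ? ", " : ")\n");
    } else {
        printf("no solution\n");
    }
    return 0;
}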

16

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


NQueens Algorithm

17

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Thank You!!

18

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Algorithm Design
Gnanavel R

BITS Pilani Asst. Prof


Pilani|Dubai|Goa|Hyderabad

1
Single-source shortest paths
• Two classic algorithms to solve single-source shortest path
problem
– Bellman-Ford algorithm
• A dynamic programming algorithm
• Works when some weights are negative
– Dijkstra’s algorithm
• A greedy algorithm
• Faster than Bellman-Ford
• Works when weights are all non-negative
Bellman-Ford algorithm
Observation:
• If there is a negative cycle, there is no solution
– Going around this cycle again always produces a lower-
weight path
• If there is no negative cycle, a shortest path has at most |V|-1 edges

Idea:
• Solve it using dynamic programming
• For all paths that have at most 0 edges, find all the shortest paths
• For all paths that have at most 1 edge, find all the shortest paths
• …
• For all paths that have at most |V|-1 edges, find all the shortest paths
Bellman-Ford algorithm
Bellman-Ford(G, s)
  // Initialize the 0-edge shortest paths
  for each v in G.V {
    if (v == s) d[s,v] = 0; else d[s,v] = ∞;   // 0-edge shortest distance from s to v
    pi[s,v] = NIL;                             // predecessor of v on the shortest path
  }
  // Bottom-up: construct the 0-edge to (|V|-1)-edge shortest paths
  repeat |G.V| - 1 times {
    for each edge (u, v) in G.E {
      if (d[s,v] > d[s,u] + w(u,v)) {
        d[s,v] = d[s,u] + w(u,v);
        pi[s,v] = u;
      }
    }
  }
  // Test for a negative cycle
  for each edge (u, v) in G.E {
    if (d[s,v] > d[s,u] + w(u,v)) return false;   // there is no solution
  }
  return true;

T(n) = O(VE) = O(V^3)
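A runnable C version of this pseudocode, using the small three-vertex example from the next few slides (edges 1→2 of weight 10, 2→3 of weight 1, 1→3 of weight 20, read off the figures, so treat the data as an assumption):

/* Bellman-Ford sketch on the small example from the following slides. */
#include <stdio.h>
#include <limits.h>

#define V 3
#define E 3

struct edge { int u, v, w; };

static const struct edge edges[E] = {
    {1, 2, 10}, {2, 3, 1}, {1, 3, 20}
};

int main(void) {
    int d[V + 1], pi[V + 1];
    int s = 1;

    /* Initialize the 0-edge shortest-path estimates. */
    for (int v = 1; v <= V; v++) { d[v] = INT_MAX; pi[v] = 0; }
    d[s] = 0;

    /* Relax every edge |V|-1 times. */
    for (int pass = 1; pass <= V - 1; pass++) {
        for (int e = 0; e < E; e++) {
            int u = edges[e].u, v = edges[e].v, w = edges[e].w;
            if (d[u] != INT_MAX && d[v] > d[u] + w) {
                d[v] = d[u] + w;
                pi[v] = u;
            }
        }
    }

    /* Test for a negative cycle reachable from the source. */
    for (int e = 0; e < E; e++) {
        int u = edges[e].u, v = edges[e].v, w = edges[e].w;
        if (d[u] != INT_MAX && d[v] > d[u] + w) {
            printf("negative cycle: no solution\n");
            return 1;
        }
    }

    for (int v = 1; v <= V; v++)
        printf("d(%d,%d) = %d, predecessor = %d\n", s, v, d[v], pi[v]);
    return 0;
}

The only difference from the pseudocode is the d[u] != INT_MAX guard, which avoids overflowing ∞ + w in C.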
Page 5
Page 6
Bellman-Ford algorithm
e.g.
[Figure: vertices 1, 2, 3 with edges 1→2 of weight 10, 2→3 of weight 1 and 1→3 of weight 20; initial estimates d(1,1)=0, d(1,2)=∞, d(1,3)=∞]

What is the 0-edge shortest path from 1 to 1?


<> with path weight 0
What is the 0-edge shortest path from 1 to 2?
<> with path weight ∞

What is the 0-edge shortest path from 1 to 3?


<> with path weight ∞
Bellman-Ford algorithm
e.g.
[Figure: after the first pass over the edges, d(1,2) = 10 (since ∞ > 0 + 10) and d(1,3) = 20 (since ∞ > 0 + 20); relaxing edge (2,3) changes nothing because d(1,2) was still ∞ at that point]

What is the at-most-1-edge shortest path from 1 to 1?
<> with path weight 0

What is the at-most-1-edge shortest path from 1 to 2?
<1, 2> with path weight 10

What is the at-most-1-edge shortest path from 1 to 3?
<1, 3> with path weight 20

In Bellman-Ford, these are calculated by scanning all edges once


Bellman-Ford algorithm
e.g.
[Figure: in the second pass, d(1,2) and the direct estimate d(1,3) = 20 are unchanged, but 20 > 10 + 1, so d(1,3) is improved to 11 via vertex 2]

What is the at-most-2-edges shortest path from 1 to 1?
<> with path weight 0

What is the at-most-2-edges shortest path from 1 to 2?
<1, 2> with path weight 10

What is the at-most-2-edges shortest path from 1 to 3?
<1, 2, 3> with path weight 11

In Bellman-Ford, these are calculated by scanning all edges once


Bellman-Ford algorithm
All 0-edge shortest paths
[Figure: a larger example graph on vertices 1-5; initial estimates d(1)=0 and d(2)=d(3)=d(4)=d(5)=∞]
Bellman-Ford algorithm
Calculate all at-most-1-edge shortest paths
[Figure: after the first pass, d = (0, 6, ∞, 7, ∞) for vertices 1-5]
Bellman-Ford algorithm
Calculate all at-most-2-edges shortest paths
[Figure: after the second pass, d = (0, 6, 4, 7, 2)]
Bellman-Ford algorithm
Calculate all at-most-3-edges shortest paths
[Figure: after the third pass, d = (0, 2, 4, 7, 2)]
Bellman-Ford algorithm
Calculate all at-most-4-edges shortest paths
[Figure: after the fourth pass, d = (0, 2, 4, 7, -2)]
Bellman-Ford algorithm
Final result:
[Figure: final shortest-path estimates d(1)=0, d(2)=2, d(3)=4, d(4)=7, d(5)=-2]
What is the shortest path from 1 to 5? 1, 4, 3, 2, 5
What is the weight of this path? -2
What is the shortest path from 1 to 2, 3, and 4?
Transitive closure:

Given a digraph G, the transitive closure of G is the digraph G* such that:

❑ G* has the same vertices as G
❑ If G has a directed path from u to v (u ≠ v), G* has a directed edge from u to v.

The transitive closure provides reachability information about a digraph.

16

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Transitive closure (Example)

17

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Computing the transitive closure:

• We can perform a DFS starting at each vertex v to see
which vertices w are reachable from v, adding an edge
(v, w) to the transitive closure for each such w.
• O(n(n+m))
• If there's a way to get from A to B and from B to C, then there's a way to get from
A to C.

• Use dynamic programming: the Floyd-Warshall algorithm

18

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Floyd-Warshall Transitive
Closure
Idea #1: Number the vertices 1, 2, …, n.
Idea #2: Consider paths that use only vertices
numbered 1, 2, …, k, as intermediate vertices:

[Figure: add the edge (i, j), which uses only vertices numbered 1, …, k, if it is not already in, whenever the paths i → k and k → j each use only vertices numbered 1, …, k-1]
19
Directed Graphs
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Floyd-warshall Transitive
closure:
• Constructs the transitive closure of a given directed
graph.

R(0), R(1), R(2), R(3), ……, R(n-1), R(n): matrices of order n*n.

'n' - the number of vertices in the digraph.

R(n) - the transitive closure.
Vertices are numbered from 1 to 'n'.

20

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


The formula to find each matrix in
the series

R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])

21

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example

[Figure: a digraph on vertices 1, 2, 3, 4 with edges 1→2, 2→3, 3→1 and 3→4]

         1 2 3 4
       1 0 1 0 0
R(0) = 2 0 0 1 0
       3 1 0 0 1
       4 0 0 0 0

22

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Now find R1 Matrix
R1(i,j) = R(0)[i,j] or (R(0)[i,1] and R(0)[1,j])

1 2 3 4
1 0 1 0 0
R(1) = 2 0 0 1 0
3 1 1 0 1
4 0 0 0 0

23

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Now find R2 Matrix

1 2 3 4
R(2) = 1 0 1 1 0
2 0 0 1 0
3 1 1 1 1
4 0 0 0 0

24

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Now find R3 Matrix

1 2 3 4
R(3) = 1 1 1 1 1
2 1 1 1 1
3 1 1 1 1
4 0 0 0 0

25

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Now find R4 Matrix

1 2 3 4
R(4) = 1 1 1 1 1
2 1 1 1 1
3 1 1 1 1
4 0 0 0 0

26

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Floyd-warshall’s Algorithm:
Algorithm: Warshall(A[1..n, 1..n])
// implements Warshall's algorithm for computing the transitive closure
// input: the adjacency matrix A of a digraph with n vertices
// output: the transitive closure of the digraph

R(0) ← A
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            R(k)[i,j] ← R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
return R(n)

Now it's your turn to find the time complexity:
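A direct C rendering of the algorithm, run on the 4-vertex example digraph from the previous slides; it updates a single matrix in place instead of storing all of R(0) … R(n), which computes the same transitive closure:

/* Warshall's algorithm for the transitive closure, in place. */
#include <stdio.h>

#define N 4

void warshall(int R[N][N]) {
    /* R starts as the adjacency matrix and ends as the transitive closure. */
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                R[i][j] = R[i][j] || (R[i][k] && R[k][j]);
}

int main(void) {
    int R[N][N] = {             /* R(0): edges 1->2, 2->3, 3->1, 3->4 */
        {0, 1, 0, 0},
        {0, 0, 1, 0},
        {1, 0, 0, 1},
        {0, 0, 0, 0}
    };
    warshall(R);
    printf("Transitive closure:\n");
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) printf("%d ", R[i][j]);
        printf("\n");
    }
    return 0;
}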

27

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


DAGs and Topological Ordering
A directed acyclic graph (DAG) is a digraph that has no directed cycles.
A topological ordering of a digraph is a numbering v1, …, vn of the vertices such that for every edge (vi, vj), we have i < j.
Example: in a task scheduling digraph, a topological ordering is a task sequence that satisfies the precedence constraints.
Theorem: a digraph admits a topological ordering if and only if it is a DAG.
[Figures: a DAG G on vertices A, B, C, D, E, and a topological ordering of G with the vertices numbered v1, …, v5]
28
Directed Graphs
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Topological Sorting
Number vertices so that (u, v) in E implies u < v.
[Figure: "A typical student day" as a DAG with a topological numbering: 1 wake up, 2 eat, 3 study computer sci., 4 nap, 5 more c.s., 6 work out, 7 play, 8 write c.s. program, 9 make cookies for professors, 10 sleep, 11 dream about graphs]
29
Directed Graphs
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Topological Sort Algorithm

1. Store each vertex's in-degree (# of incoming edges) in an array
2. While there are vertices remaining:
➭ Find a vertex with in-degree zero and output it
➭ Reduce the in-degree of all vertices adjacent to it by 1
➭ Mark this vertex (in-degree = -1)
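A short C sketch of this in-degree based procedure on an assumed small example DAG (given as an edge list):

/* Topological sort by repeatedly removing a vertex of in-degree zero. */
#include <stdio.h>

#define V 6
#define E 6

/* Assumed example: edges of a small DAG on vertices 0..5. */
static const int edges[E][2] = {
    {0, 1}, {0, 2}, {1, 3}, {2, 3}, {3, 4}, {4, 5}
};

int main(void) {
    int indeg[V] = {0};

    /* 1. Store each vertex's in-degree. */
    for (int e = 0; e < E; e++) indeg[edges[e][1]]++;

    /* 2. Repeatedly output a vertex of in-degree zero and remove it. */
    for (int output = 0; output < V; output++) {
        int v = -1;
        for (int u = 0; u < V; u++)
            if (indeg[u] == 0) { v = u; break; }     /* find an in-degree-0 vertex */
        if (v == -1) { printf("\ncycle detected: no topological order\n"); return 1; }

        printf("%d ", v);
        indeg[v] = -1;                               /* mark v as output */
        for (int e = 0; e < E; e++)                  /* reduce in-degree of its neighbours */
            if (edges[e][0] == v) indeg[edges[e][1]]--;
    }
    printf("\n");
    return 0;
}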
30

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Topological Sorting Example

31
Directed Graphs
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Topological Sorting Example
[Figure: an example DAG whose vertices are output in the topological order 1, 2, …, 9]
32
Directed Graphs
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
33

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


34

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


35

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Longest Common
Subsequence
X = ACADB
Y = CBDA

36

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Longest Common
Subsequence

37

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


38

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


39

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Applications

Inheritance between classes
Prerequisites between courses of a degree
Scheduling

40

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


ROD CUTTING PROBLEM

41

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


ROD CUTTING PROBLEM

Problem statement:
• You are given a rod of length n, and you need to cut the
rod in such a way that you can sell it for maximum
profit. You are also given a price table that gives
what a piece of rod of each length is worth.
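The slide's price table is shown in the figure, so the prices used below (the usual textbook values for lengths 1 to 4) are only an assumed example; the C sketch implements the bottom-up dynamic-programming recurrence r[j] = max over i of (price[i] + r[j - i]):

/* Bottom-up dynamic programming for rod cutting (assumed price table). */
#include <stdio.h>

#define N 4                                        /* rod length */

static const int price[N + 1] = {0, 1, 5, 8, 9};   /* price[i] = value of a piece of length i */

int main(void) {
    int r[N + 1];                 /* r[j] = best revenue for a rod of length j */
    int cut[N + 1];               /* cut[j] = size of the first piece in an optimal cut */

    r[0] = 0;
    for (int j = 1; j <= N; j++) {
        r[j] = -1;
        for (int i = 1; i <= j; i++) {             /* first piece of length i, then length j-i */
            if (price[i] + r[j - i] > r[j]) {
                r[j] = price[i] + r[j - i];
                cut[j] = i;
            }
        }
    }

    printf("maximum revenue for length %d = %d\ncuts:", N, r[N]);
    for (int j = N; j > 0; j -= cut[j])
        printf(" %d", cut[j]);
    printf("\n");
    return 0;
}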

42

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


ROD CUTTING PROBLEM

43

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


ROD CUTTING PROBLEM

44

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


ROD CUTTING PROBLEM

45

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


ROD CUTTING PROBLEM

46

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


ROD CUTTING PROBLEM

47

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


ROD CUTTING PROBLEM

48

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


ROD CUTTING PROBLEM

49

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


PRACTICE PROBLEM

50

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Thank You!!

51

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Algorithm Design
Gnanavel R
Assistant Professor
BITS-PILANI
BITS Pilani
Pilani|Dubai|Goa|Hyderabad

1
BITS Pilani
Pilani|Dubai|Goa|Hyderabad

Basic Analysis of Algorithm


All the programs in this file are selected from the book
Fundamentals of Computer Algorithms by Ellis Horowitz, Sartaj Sahni
2
What is an algorithm?
❖An algorithm is a sequence of unambiguous
instructions for solving a problem, i.e., for
obtaining a required output for any legitimate
input in a finite amount of time.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example
Problem:
Find gcd(m,n) the greatest common divisor of two
nonnegative, not both zero integers m and n.

Solutions:
1. Euclid’s algorithm
2. Consecutive integer checking algorithm
3. Middle-school procedure

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


1. Euclid’s algorithm
❖ Euclid’s algorithm is based on repeated application of
equality
gcd(m,n) = gcd(n, m mod n)
❖ until the second number becomes 0, which makes the
problem trivial.
gcd(m,0)=m
❖ Example: m=60 and n=24
gcd(60,24) = gcd(24,60%24) = gcd(24,12)
gcd(24,12) = gcd(12,24%12) = gcd(12,0) = 12
5

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Euclid’s algorithm for
computing gcd(m, n)
Step 1 : If n = 0, return the value of m as the
answer and stop; otherwise, proceed to Step 2.

Step 2 : Divide m by n and assign the value of the


remainder to r.

Step 3 : Assign the value of n to m and the value of


r to n. Go to Step 1.

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Euclid’s algorithm for
computing gcd(m, n)
ALGORITHM Euclid(m, n)
//Computes gcd(m, n) by Euclid’s algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n != 0 do
r ← m mod n
m←n
n←r
return m
7

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


2. Consecutive integer checking
algorithm for computing gcd(m, n)
Step 1 : Assign the value of min{m, n} to t.
Step 2 : Divide m by t. If the remainder of this
division is 0, go to Step 3; otherwise, go to Step 4.
Step 3 : Divide n by t. If the remainder of this
division is 0, return the value of t as the answer
and stop; otherwise, proceed to Step 4.
Step 4 : Decrease the value of t by 1. Go to Step
2.
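A direct C rendering of these steps (a sketch that assumes m and n are both positive, so that t stays at least 1 and the loop terminates):

/* Consecutive integer checking algorithm for gcd(m, n), m, n > 0. */
#include <stdio.h>

int gcd_cic(int m, int n) {
    int t = (m < n) ? m : n;               /* Step 1: t = min(m, n) */
    while (m % t != 0 || n % t != 0)       /* Steps 2-3: t must divide both m and n */
        t--;                               /* Step 4: otherwise decrease t and retry */
    return t;
}

int main(void) {
    printf("gcd(60, 24) = %d\n", gcd_cic(60, 24));
    return 0;
}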
8

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


3. Middle-school procedure for
computing gcd(m, n)
Step 1 : Find the prime factors of m.
Step 2 : Find the prime factors of n.
Step 3 : Identify all the common factors in the two
prime expansions found in Step 1 and Step 2. (If p is a
common factor occurring pm and pn times in m and n,
respectively, it should be repeated min{pm, pn} times.)
Step 4 : Compute the product of all the common
factors and return it as the greatest common divisor of
the numbers given.
9

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


3. Middle-school procedure for
computing gcd(m, n)
Thus, for the numbers 60 and 24, we get
60 = 2 . 2 . 3 . 5
24 = 2 . 2 . 2 . 3
gcd(60, 24) = 2 . 2 . 3 = 12

10

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Fundamentals of Algorithmic
Problem Solving

11

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


How to create programs
Requirements
Analysis: bottom-up vs. top-down
Design: data objects and operations
Refinement and Coding
Verification
– Program Proving
– Testing
– Debugging

12

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Algorithm
Definition
An algorithm is a finite set of instructions that
accomplishes a particular task.
Criteria
– input
– output
– definiteness: clear and unambiguous
– finiteness: terminate after a finite number of steps
– effectiveness: instruction is basic enough to be
carried out
13

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Data Type
Data Type
A data type is a collection of objects and a set of
operations that act on those objects.
Abstract Data Type
An abstract data type(ADT) is a data type that is
organized in such a way that the specification of
the objects and the operations on the objects is
separated from the representation of the objects
and the implementation of the operations.

14

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Specification vs. Implementation
Operation specification
– function name
– the types of arguments
– the type of the results
Implementation independent

15
CHAPTER 1
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Measurements
Performance Analysis (machine independent)
– space complexity: storage requirement
– time complexity: computing time
Performance Measurement (machine dependent)

16

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Mathematical Analysis for
Non-Recursive Algorithms
1. Decide on problem size specification
2. Identify basic operation
3. Check worst, best, average case
4. Count: set up the sum
5. Evaluate or determine order of sum

17

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Finding the value of the largest
element in a list of n numbers.

18

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Finding the value of the largest
element in a list of n numbers.

19

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Formulas

20

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


check whether all the elements in a
given array of n elements are distinct.

21

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


check whether all the elements in a
given array of n elements are distinct.

22

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Matrix Multiplication

23

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Matrix Multiplication

24

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Space Complexity
S(P)=C+SP(I)
Fixed Space Requirements (C)
Independent of the characteristics of the inputs and
outputs
– instruction space
– space for simple variables, fixed-size structured variable,
constants
Variable Space Requirements (SP(I))
depend on the instance characteristic I
– number, size, values of inputs and outputs associated with
I
– recursive stack space, formal parameters, local variables,
return address

25
CHAPTER 1
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Examples

*Program 1.9: Simple arithmetic function (p.19)

float abc(float a, float b, float c)      /* Sabc(I) = 0 */
{
    return a + b + b * c + (a + b - c) / (a + b) + 4.00;
}

float sum(float list[ ], int n)           /* Ssum(n) = n+3 */
{
    float tempsum = 0;
    int i;
    for (i = 0; i < n; i++)
        tempsum += list[i];
    return tempsum;
}
26
CHAPTER 1
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Examples
*Program 1.11: Recursive function for summing a list of numbers (p.20)

float rsum(float list[ ], int n)          /* Srsum(n) = 6n */
{
    if (n) return rsum(list, n-1) + list[n-1];
    return 0;
}

Assumptions:
Type                                Name      Number of bytes
parameter: float                    list[ ]   2
parameter: integer                  n         2
return address (used internally)              2 (unless a far address)
TOTAL per recursive call                      6

27
CHAPTER 1
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Time Complexity
T(P) = C + TP(I)
Compile time (C): independent of instance characteristics
Run (execution) time TP:
TP(n) = ca*ADD(n) + cs*SUB(n) + cl*LDA(n) + cst*STA(n)
Definition
A program step is a syntactically or semantically
meaningful program segment whose execution time
is independent of the instance characteristics.
Example
– abc = a + b + b * c + (a + b - c) / (a + b) + 4.0
– abc = a + b + c
Regard as the same unit
machine independent
28

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Methods to compute the step count
Introduce variable count into programs
Tabular method
– Determine the total number of steps contributed by
each statement
steps per execution × frequency
– add up the contribution of all statements

29
CHAPTER 1
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Iterative summing of a list of numbers

*Program 1.12: Program 1.10 with count statements (p.23)

float sum(float list[ ], int n)


{
float tempsum = 0; count++; /* for assignment */
int i;
for (i = 0; i < n; i++) {
count++; /*for the for loop */
tempsum += list[i]; count++; /* for assignment */
}
count++; /* last execution of for */
return tempsum;
count++; /* for return */
}
2n + 3 steps
30
CHAPTER 1
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Tabular Method
*Figure 1.2: Step count table for Program 1.10 (p.26)
Iterative function to sum a list of numbers
steps/execution
Statement s/e Frequency Total steps
float sum(float list[ ], int n) 0 0 0
{ 0 0 0
float tempsum = 0; 1 1 1
int i; 0 0 0
for(i=0; i <n; i++) 1 n+1 n+1
tempsum += list[i]; 1 n n
return tempsum; 1 1 1
} 0 0 0
Total 2n+3

31
Recursive Function to sum of a list of numbers
*Figure 1.3: Step count table for recursive summing function (p.27)

Statement s/e Frequency Total steps


float rsum(float list[ ], int n) 0 0 0
{ 0 0 0
if (n) 1 n+1 n+1
return rsum(list, n-1)+list[n-1]; 1 n n
return list[0]; 1 1 1
} 0 0 0
Total 2n+2

32
Matrix Addition
*Figure 1.4: Step count table for matrix addition (p.27)

Statement s/e Frequency Total steps

Void add (int a[ ][MAX_SIZE]‧‧‧) 0 0 0


{ 0 0 0
int i, j; 0 0 0
for (i = 0; i < row; i++) 1 rows+1 rows+1
for (j=0; j< cols; j++) 1 rows‧(cols+1) rows‧cols+rows
c[i][j] = a[i][j] + b[i][j]; 1 rows‧cols rows‧cols
} 0 0 0

Total 2rows‧cols+2rows+1

33
Why study algorithms and
performance?
•Algorithms help us to understand scalability.

Performance often draws the line between what is feasible and what is
impossible.

Algorithmic mathematics provides a language for talking about program


behavior.

Performance is the currency of computing.

The lessons of program performance generalize to other computing


resources.

Speed is fun!

34

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Analytical model to analyze
algorithm
Algorithm should be analyzed by using general
methodology.
This approach uses:
– High level description of the algorithm.
– Takes into account all possible inputs.
– Allows one to evaluate the efficiency of any algorithm in a way that is
independent of the hardware and the software environment.

35

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Techniques to analyse the
Algorithm
We need techniques that enable us to compare algorithms
without implementing them.
Our two most important tools are
(1) the RAM model of computation and
(2) asymptotic analysis of worst-case complexity.

36

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


The Random Access Machine
(RAM) Model (*)

➢ A CPU

➢ A potentially unbounded bank of memory


cells, each of which can hold an arbitrary
number or character
[Figure: memory cells numbered 0, 1, 2, …]

✓ Memory cells are numbered and accessing any cell in memory


takes unit time.

37

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Primitive Operations

• Basic computations performed by an algorithm
• Identifiable in pseudocode
• Largely independent from the programming language
• Exact definition not important (we will see why later)
• Assumed to take a constant amount of time in the RAM model

Examples:
– Assigning a value to a variable
– Indexing into an array
– Calling a method
– Returning from a method

38

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


RAM

Under the RAM model, we measure the run time of an


algorithm by counting up the number of steps it takes on
a given problem instance.
By assuming that our RAM executes a given number of
steps per second, the operation count converts easily to
the actual run time.

The RAM is a simple model of how computers perform.

39

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


RAM MODEL

Each "simple" operation (+, *, -, =, if, call) takes exactly 1


time step.
Loops and subroutines are not considered simple
operations. Instead, they are the composition of many
single-step operations.
The time it takes to run through a loop or execute a
subprogram depends upon the number of loop iterations.
Each memory access takes exactly one time step, and we
have as much memory as we need.
The RAM model takes no notice of whether an item is in
cache or on the disk, which simplifies the analysis.
40

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Counting Primitive Operations

➢ By inspecting the pseudocode, we can determine the


maximum number of primitive operations executed by an
algorithm, as a function of the input size.

Algorithm arrayMax(A, n)                                # operations
    currentMax ← A[0]                                   2
    for (i = 1; i < n; i++)                             2n + 2(n-1)
        (i=1 once, i<n n times, i++ (n-1) times)
        if A[i] > currentMax then                       2(n − 1)
            currentMax ← A[i]                           2(n − 1)
    return currentMax                                   1
                                              Total     8n − 3
41

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Asymptotic Notations

Asymptotic notation is a formal way to
speak about functions and classify them.

Asymptotic notations:
O(n)
Ω(n)
Θ(n)

42

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Big-Oh Notation
Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive
constants c and n0 such that
f(n) ≤ c·g(n) for n ≥ n0

For function g(n), we define O(g(n)), big-O
of n, as the set:
O(g(n)) = {f(n) : there exist positive constants c and n0
such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0}

Example
O(n^2) = {2n^2 + 10n + 2000, 125n^2 - 233n - 250, 10n + 25, 150, …}
43

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Big Omega ((n) )
big-Omega
f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such
that f(n) ≥ c·g(n) for n ≥ n0

For function g(n), we define Ω(g(n)),
big-Omega of n, as the set:
Ω(g(n)) = {f(n) : there exist positive constants c and n0
such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0}
44

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Big-Theta ((n))
big-Theta
◼ f(n) is Θ(g(n)) if there are constants c' > 0 and c'' > 0 and an integer
constant n0 ≥ 1 such that c'·g(n) ≤ f(n) ≤ c''·g(n) for n ≥ n0

If f ∈ O(g) and g ∈ O(f), then we say
"g and f are of the same order" or
"f is (exactly) order g" and
write f ∈ Θ(g).

For function g(n), we define Θ(g(n)),
big-Theta of n, as the set:

Θ(g(n)) = {f(n) : there exist positive constants c1, c2, and n0
such that for all n ≥ n0, we have 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n)}
45

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


The problem of sorting

Input: sequence a1, a2, …, an of numbers.

Output: permutation a'1, a'2, …, a'n such


that a'1  a'2  …  a'n .

Example:

Input: 8 2 4 9 3 6

Output: 2 3 4 6 8 9

L1.46

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Insertion sort
INSERTION-SORT(A, n)        ⊳ A[1 . . n]
    for j ← 2 to n
        do key ← A[j]
           i ← j – 1
           while i > 0 and A[i] > key
               do A[i+1] ← A[i]
                  i ← i – 1
           A[i+1] ← key
("pseudocode")
[Figure: the array A with A[1 . . j-1] already sorted and key = A[j] being inserted]
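A C version of this pseudocode with 0-indexed arrays, run on the example input 8 2 4 9 3 6 used on the next slides:

/* Insertion sort, 0-indexed translation of the pseudocode above. */
#include <stdio.h>

void insertion_sort(int A[], int n) {
    for (int j = 1; j < n; j++) {
        int key = A[j];
        int i = j - 1;
        while (i >= 0 && A[i] > key) {   /* shift larger elements to the right */
            A[i + 1] = A[i];
            i--;
        }
        A[i + 1] = key;                  /* insert key into the sorted prefix */
    }
}

int main(void) {
    int A[] = {8, 2, 4, 9, 3, 6};
    int n = (int)(sizeof A / sizeof A[0]);
    insertion_sort(A, n);
    for (int k = 0; k < n; k++) printf("%d ", A[k]);
    printf("\n");
    return 0;
}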
L1.47

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example of insertion sort
8 2 4 9 3 6

L1.48

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example of insertion sort
8 2 4 9 3 6

L1.49

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example of insertion sort
8 2 4 9 3 6

2 8 4 9 3 6

L1.50

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example of insertion sort
8 2 4 9 3 6

2 8 4 9 3 6

L1.51

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example of insertion sort
8 2 4 9 3 6

2 8 4 9 3 6

2 4 8 9 3 6

L1.52

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example of insertion sort
8 2 4 9 3 6

2 8 4 9 3 6

2 4 8 9 3 6

L1.53

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example of insertion sort
8 2 4 9 3 6

2 8 4 9 3 6

2 4 8 9 3 6

2 4 8 9 3 6

L1.54

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example of insertion sort
8 2 4 9 3 6

2 8 4 9 3 6

2 4 8 9 3 6

2 4 8 9 3 6

L1.55

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example of insertion sort
8 2 4 9 3 6

2 8 4 9 3 6

2 4 8 9 3 6

2 4 8 9 3 6

2 3 4 8 9 6

L1.56

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example of insertion sort
8 2 4 9 3 6

2 8 4 9 3 6

2 4 8 9 3 6

2 4 8 9 3 6

2 3 4 8 9 6

L1.57

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Example of insertion sort
8 2 4 9 3 6

2 8 4 9 3 6

2 4 8 9 3 6

2 4 8 9 3 6

2 3 4 8 9 6

2 3 4 6 8 9 done

L1.58

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Running time
• The running time depends on the input: an
already sorted sequence is easier to sort.
• Major Simplifying Convention:
Parameterize the running time by the size of
the input, since short sequences are easier to
sort than long ones.
➢TA(n) = time of A on length n inputs
• Generally, we seek upper bounds on the
running time, to have a guarantee of
performance.
L1.59

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Machine-independent time
What is insertion sort’s worst-case time?

BIG IDEAS:

• Ignore machine dependent constants,


otherwise impossible to verify and to compare algorithms

• Look at growth of T(n) as n → ∞ .

“Asymptotic Analysis”

L1.60

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Θ-notation

DEF:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and
n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n)
for all n ≥ n0 }
Basic manipulations:

• Drop low-order terms; ignore leading constants.

• Example: 3n^3 + 90n^2 - 5n + 6046 = Θ(n^3)

L1.61

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Asymptotic performance
When n gets large enough, a Θ(n^2) algorithm always beats a Θ(n^3) algorithm.
• Asymptotic analysis is a useful tool to help to structure our thinking toward better algorithms.
• We shouldn't ignore asymptotically slower algorithms, however.
• Real-world design situations often call for a careful balancing.
[Figure: two running-time curves T(n) crossing at n0]
L1.62

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Asymptotic Notation (O)
• Definition
f(n) = O(g(n)) iff there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n,
n ≥ n0.
• Examples
– 3n+2 = O(n)          /* 3n+2 ≤ 4n for n ≥ 2 */
– 3n+3 = O(n)          /* 3n+3 ≤ 4n for n ≥ 3 */
– 100n+6 = O(n)        /* 100n+6 ≤ 101n for n ≥ 10 */
– 10n^2+4n+2 = O(n^2)  /* 10n^2+4n+2 ≤ 11n^2 for n ≥ 5 */
– 6*2^n+n^2 = O(2^n)   /* 6*2^n+n^2 ≤ 7*2^n for n ≥ 4 */

CHAPTER 1 63
Example

• Complexity of c1·n^2 + c2·n and c3·n
– for sufficiently large values of n, c3·n is faster than
c1·n^2 + c2·n
– for small values of n, either could be faster
• c1=1, c2=2, c3=100 --> c1·n^2 + c2·n ≤ c3·n for n ≤ 98
• c1=1, c2=2, c3=1000 --> c1·n^2 + c2·n ≤ c3·n for n ≤ 998
– break-even point
• no matter what the values of c1, c2, and c3 are, there is an n
beyond which c3·n is always faster than c1·n^2 + c2·n

CHAPTER 1 64
• O(1): constant
• O(n): linear
• O(n^2): quadratic
• O(n^3): cubic
• O(2^n): exponential
• O(log n)
• O(n log n)

CHAPTER 1 65
*Figure 1.7:Function values (p.38)

CHAPTER 1 66
*Figure 1.8:Plot of function values(p.39)

CHAPTER 1 67
*Figure 1.9:Times on a 1 billion instruction per second computer(p.40)

CHAPTER 1 68
