Design and Analysis of Algorithms (DAA) Notes
INTRODUCTION
1.1 Notion of Algorithm
1.2 Review of Asymptotic Notation
1.3 Mathematical Analysis of Non-Recursive and Recursive Algorithms
1.4 Brute Force Approaches: Introduction
1.5 Selection Sort and Bubble Sort
1.6 Sequential Search and Brute Force String Matching.
An algorithm is composed of a finite set of steps, each of which may require one or more operations. The possibility of a computer carrying out these operations necessitates that certain constraints be placed on the type of operations an algorithm can include. The fourth criterion for algorithms we assume in this book is that they terminate after a finite number of operations.
Criterion 5 requires that each operation be effective; each step must be such that it can, at least in principle, be done by a person using pencil and paper in a finite amount of time. Performing arithmetic on integers is an example of an effective operation, but arithmetic with real numbers is not, since some values may be expressible only by an infinitely long decimal expansion. Adding two such numbers would violate the effectiveness property.
• Algorithms that are definite and effective are also called computational procedures.
• The same algorithm can be represented in several ways.
• Several algorithms may solve the same problem.
• Different ideas can lead to algorithms of very different speeds.
Example:
Problem: GCD of two numbers m, n
Input specification: two inputs, nonnegative, not both zero
Euclid's algorithm:
gcd(m, n) = gcd(n, m mod n)
Repeat until m mod n = 0, since gcd(m, 0) = m
Another way of representing the same algorithm:
Euclid's algorithm
Step 1: If n = 0, return the value of m and stop; otherwise proceed to Step 2.
Step 2: Divide m by n and assign the value of the remainder to r.
Step 3: Assign the value of n to m and the value of r to n. Go to Step 1.
Another algorithm to solve the same problem (consecutive integer checking):
Step 1: Assign the value of min(m, n) to t.
Step 2: Divide m by t. If the remainder is 0, go to Step 3; else go to Step 4.
Step 3: Divide n by t. If the remainder is 0, return the value of t as the answer and stop; otherwise proceed to Step 4.
Step 4: Decrease the value of t by 1. Go to Step 2.
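To make this concrete, here is a minimal Python sketch of Euclid's algorithm (the function name and test values are ours, for illustration only):

def gcd(m, n):
    """Euclid's algorithm: gcd(m, n) = gcd(n, m mod n) until the remainder is 0."""
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(60, 24))  # 12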
Theoretical analysis of an algorithm is independent of the hardware/software environment. Therefore theoretical analysis can be used for analyzing any algorithm.
Framework for Analysis
We use a hypothetical model with the following assumptions:
• Total time taken by the algorithm is given as a function of its input size
• Logical units are identified as one step
• Every step requires ONE unit of time
• Total time taken = total number of steps executed
Input size: Time required by an algorithm is proportional to the size of the problem instance. For example, more time is required to sort 20 elements than to sort 10 elements.
Units for Measuring Running Time: Count the number of times an algorithm's basic operation is executed. (Basic operation: the most important operation of the algorithm, the operation contributing the most to the total running time.) The basic operation is usually the most time-consuming operation in the algorithm's innermost loop.
How much longer does it take to solve a problem of double the input size? We can crudely estimate running time by
T(n) ≈ c_op × C(n)
where
T(n): running time as a function of n,
c_op: execution time of a single basic operation,
C(n): number of basic operations executed, as a function of n.
Order of Growth: For order of growth, consider only the leading term of a formula and ignore the constant coefficient. The functions most important for the analysis of algorithms, in increasing order of growth, are log n, n, n log n, n², n³, 2ⁿ and n!.
• Worst-case efficiency: efficiency (number of times the basic operation will be executed) for the worst-case input of size n, i.e., the input for which the algorithm runs the longest among all possible inputs of size n.
• Best-case efficiency: efficiency for the best-case input of size n, i.e., the input for which the algorithm runs the fastest among all possible inputs of size n.
• Average-case efficiency: average time taken (number of times the basic operation will be executed) over all possible (random) instances of the input. NOTE: this is NOT the average of the worst and best cases.
Asymptotic Notations
Asymptotic notation is a way of comparing functions that ignores constant factors and small input sizes. Three notations are used to compare orders of growth of an algorithm's basic operation count:
O, Ω, Θ notations
Big Oh- O notation
Definition:
A function t(n) is said to be in O(g(n)), denoted t(n)=O(g(n)), if t(n) is bounded above by some
constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that
t(n) ≤ cg(n) for all n ≥ n0
Big Theta- Θ notation
Definition:
A function t(n) is said to be in Θ(g(n)), denoted t(n) = Θ(g(n)), if t(n) is bounded both above and below by some constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that
c2 g(n) ≤ t(n) ≤ c1 g(n) for all n ≥ n0
Big Omega - Ω notation
Definition:
A function t(n) is said to be in Ω(g(n)), denoted t(n) = Ω(g(n)), if t(n) is bounded below by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≥ c g(n) for all n ≥ n0
Basic Efficiency classes
The time efficiencies of a large number of algorithms fall into only a few classes.
Class      Name
1          constant
log n      logarithmic
n          linear
n log n    n log n
n²         quadratic
n³         cubic
2ⁿ         exponential
n!         factorial
Time efficiency decreases as we move down the table: algorithms in the last classes (exponential, factorial) have low time efficiency, i.e., they are slow.
Example: Finding the largest element in an array
ALGORITHM MaxElement(A[0..n-1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n-1] of real numbers
//Output: The value of the largest element in A
currentMax ← A[0]
for i ← 1 to n-1 do
    if A[i] > currentMax
        currentMax ← A[i]
return currentMax
Analysis:
1. Input size: number of elements = n (size of the array)
2. Basic operation:
a) Comparison
b) Assignment
3. NO best, worst, or average cases: the comparison is executed the same number of times for every input of size n.
4. Let C(n) denote the number of comparisons. The algorithm makes one comparison on each execution of the loop, which is repeated for each value of the loop's variable i between 1 and n − 1. Therefore
C(n) = Σ (i = 1 to n−1) 1 = n − 1 ∈ Θ(n)
Example: Element uniqueness problem
ALGORITHM UniqueElements(A[0..n-1])
//Determines whether all elements in a given array are distinct
//Input: An array A[0..n-1]
//Output: "true" if all elements of A are distinct, "false" otherwise
for i ← 0 to n-2 do
    for j ← i+1 to n-1 do
        if A[i] = A[j] return false
return true
Analysis
1. Input size: number of elements = n (size of the array)
2. Basic operation: comparison
3. Best, worst, and average cases EXIST.
The worst-case input is an array giving the largest number of comparisons:
• an array with no equal elements, or
• an array in which the last two elements are the only pair of equal elements.
4. Let C(n) denote the number of comparisons in the worst case. The algorithm makes one comparison for each repetition of the innermost loop, i.e., for each value of the loop's variable j between its limits i + 1 and n − 1; this is repeated for each value of the outer loop's variable i between its limits 0 and n − 2. Therefore
C(n) = Σ (i = 0 to n−2) Σ (j = i+1 to n−1) 1 = n(n−1)/2 ∈ Θ(n²)
Mathematical analysis (Time Efficiency) of recursive Algorithms
General plan for analyzing efficiency of recursive algorithms:
1. Decide on a parameter n indicating input size.
2. Identify the algorithm's basic operation.
3. Check whether the number of times the basic operation is executed depends only on the input size n. If it also depends on the type of input, investigate worst-, average-, and best-case efficiency separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the number of times the algorithm's basic operation is executed.
5. Solve the recurrence.
Example: Computing the factorial function F(n) = n!
ALGORITHM Factorial(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0
    return 1
else
    return Factorial(n - 1) * n
Analysis:
1. Input size: given number = n
2. Basic operation: multiplication
3. NO best, worst, average cases.
4. Let M (n) denotes number of multiplications.
M (n) = M (n – 1) + 1 for n > 0
M (0) = 0 initial condition
Where: M (n – 1) : to compute Factorial (n – 1)
1 :to multiply Factorial (n – 1) by n
5. Solve the recurrence using the backward substitution method:
M(n) = M(n - 1) + 1
     = [M(n - 2) + 1] + 1 = M(n - 2) + 2
     = [M(n - 3) + 1] + 2 = M(n - 3) + 3
     ...
In the i-th substitution, we have
     = M(n - i) + i
When i = n, we have
     = M(n - n) + n = M(0) + n
Since M(0) = 0,
M(n) = n, i.e., M(n) = Θ(n)
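A short Python sketch can confirm M(n) = n empirically; the function returns both the value and a multiplication count (names and test value are illustrative):

def factorial(n):
    """Returns (n!, number of multiplications), mirroring M(n) = M(n-1) + 1, M(0) = 0."""
    if n == 0:
        return 1, 0
    value, mults = factorial(n - 1)
    return value * n, mults + 1

print(factorial(5))  # (120, 5): exactly n multiplications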
Example: Find the number of binary digits in the binary representation of a positive
decimal integer
ALGORITHM BinRec (n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n's binary representation
if n = 1
    return 1
else
    return BinRec(⌊n/2⌋) + 1
Analysis:
1. Input size: given number = n
2. Basic operation: addition
3. NO best, worst, or average cases.
4. Let A(n) denote the number of additions.
A(n) = A(⌊n/2⌋) + 1 for n > 1
A(1) = 0 (initial condition)
where: A(⌊n/2⌋): to compute BinRec(⌊n/2⌋)
1: to increase the returned value by 1
5. Solve the recurrence:
A(n) = A(⌊n/2⌋) + 1 for n > 1
Assume n = 2^k (smoothness rule):
A(2^k) = A(2^(k-1)) + 1 for k > 0; A(2^0) = 0
Solving using the backward substitution method:
A(2^k) = A(2^(k-1)) + 1
       = [A(2^(k-2)) + 1] + 1 = A(2^(k-2)) + 2
       = [A(2^(k-3)) + 1] + 2 = A(2^(k-3)) + 3
       ...
In the i-th substitution, we have
       = A(2^(k-i)) + i
When i = k, we have
       = A(2^(k-k)) + k = A(2^0) + k
Since A(2^0) = 0,
A(2^k) = k
Since n = 2^k, we have k = log2 n, hence
A(n) = log2 n ∈ Θ(log n)
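A minimal Python sketch of BinRec (function name and test value are ours):

def bin_rec(n):
    """Number of binary digits of n, via the recurrence BinRec(n) = BinRec(n // 2) + 1."""
    if n == 1:
        return 1
    return bin_rec(n // 2) + 1

print(bin_rec(13))  # 4, since 13 = 1101 in binary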
Selection Sort
ALGORITHM SelectionSort(A[0..n-1])
//Sorts a given array by selection sort
//Input: An array A[0..n-1] of orderable elements
//Output: Array A[0..n-1] sorted in ascending order
for i ← 0 to n-2 do
    min ← i
    for j ← i+1 to n-1 do
        if A[j] < A[min] min ← j
    swap A[i] and A[min]
The basic operation (the key comparison A[j] < A[min]) is executed n(n−1)/2 times regardless of the input. Thus, selection sort is a Θ(n²) algorithm on all inputs. The number of key swaps is only Θ(n) or, more precisely, n − 1 (one for each repetition of the i loop). This property distinguishes selection sort positively from many other sorting algorithms.
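A direct Python sketch of selection sort (function name and test list are illustrative):

def selection_sort(a):
    """In-place selection sort: n(n-1)/2 comparisons, at most n-1 swaps."""
    n = len(a)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]

nums = [89, 45, 68, 90, 29, 34, 17]
selection_sort(nums)
print(nums)  # [17, 29, 34, 45, 68, 89, 90]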
Bubble Sort
Compare adjacent elements of the list and exchange them if they are out of order. Then repeat the process. By doing it repeatedly, we end up 'bubbling up' the largest element to the last position on the list.
ALGORITHM
BubbleSort(A [0..n - 1])
//The algorithm sorts array A[0..n - 1] by bubble sort
//Input: An array A[0..n - 1] of orderable elements
//Output: Array A[0..n - 1] sorted in ascending order
for i ← 0 to n - 2 do
    for j ← 0 to n - 2 - i do
        if A[j + 1] < A[j]
            swap A[j] and A[j + 1]
Example
The first two passes of bubble sort on the list 89, 45, 68, 90, 29, 34, 17. A new line is shown after a swap of two elements is done. The elements to the right of the vertical bar are in their final positions and are not considered in subsequent iterations of the algorithm.
The number of key swaps depends on the input. For the worst case of decreasing arrays, it is the same as the number of key comparisons.
Observation: if a pass through the list makes no exchanges, the list has been sorted and we can stop the algorithm. Though this improved version runs faster on some inputs, it is still in O(n²) in the worst and average cases. Bubble sort is not very good for large inputs; however, bubble sort is very simple to code.
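A Python sketch of the improved bubble sort just described, with the early-exit check (names and test data are ours):

def bubble_sort(a):
    """Bubble sort that stops as soon as a full pass makes no exchanges."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j + 1] < a[j]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:          # list already sorted; stop early
            break

nums = [89, 45, 68, 90, 29, 34, 17]
bubble_sort(nums)
print(nums)  # [17, 29, 34, 45, 68, 89, 90]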
General Lesson From Brute Force Approach
A first application of the brute-force approach often results in an algorithm that can be improved with a modest amount of effort.
1.6 Sequential Search and Brute Force String Matching
Sequential Search
Sequential search compares successive elements of a given list with a given search key until either a match is encountered (successful search) or the list is exhausted without finding a match (unsuccessful search). The version below places the search key as a sentinel at the end of the list, which removes the index-bound check from the loop.
ALGORITHM SequentialSearch2(A[0..n], K)
//Implements sequential search with a search key as a sentinel
//Input: An array A of n elements and a search key K
//Output: The position of the first element in A[0..n - 1] whose value is
//        equal to K, or -1 if no such element is found
A[n] ← K
i ← 0
while A[i] ≠ K do
    i ← i + 1
if i < n return i
else return -1
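A Python sketch of the sentinel version (function name and test lists are illustrative):

def sequential_search(a, key):
    """Sequential search with a sentinel: append the key so the scan loop
    needs no explicit bounds check."""
    n = len(a)
    a.append(key)            # sentinel at position n
    i = 0
    while a[i] != key:
        i += 1
    a.pop()                  # remove the sentinel
    return i if i < n else -1

print(sequential_search([31, 41, 59, 26], 59))  # 2
print(sequential_search([31, 41, 59, 26], 99))  # -1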
Brute Force String Matching
Given a pattern P[0..m-1] and a text T[0..n-1], the problem is to find i, the index of the leftmost character of the first matching substring in the text, such that
t_i = p_0, ..., t_{i+j} = p_j, ..., t_{i+m-1} = p_{m-1}:
t_0 ... t_i ... t_{i+j} ... t_{i+m-1} ... t_{n-1}    text T
            p_0 ...  p_j  ...  p_{m-1}               pattern P
1. Pattern: 001011
   Text: 10010101101001100101111010
2. Pattern: happy
   Text: It is never too late to have a happy childhood
The algorithm shifts the pattern almost always after a single character comparison. In the worst case, however, the algorithm may have to make all m comparisons before shifting the pattern, and this can happen for each of the n - m + 1 tries. Thus, in the worst case, the algorithm is in Θ(nm).
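A Python sketch of the brute-force matcher (function name is ours; the test uses example 1 above):

def brute_force_match(text, pattern):
    """Returns the index of the first occurrence of pattern in text, or -1.
    Worst case: m comparisons for each of the n - m + 1 alignments."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        j = 0
        while j < m and pattern[j] == text[i + j]:
            j += 1
        if j == m:
            return i
    return -1

print(brute_force_match("10010101101001100101111010", "001011"))  # 15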
UNIT - 2
DIVIDE & CONQUER
(Figure: a problem of size n is divided into smaller sub-problems, typically two of size n/2; each sub-problem is solved, and the sub-solutions are combined into a solution to the original problem.)
NOTE:
The base case for the recursion is sub-problem of constant size.
Therefore, the order of growth of T(n) depends on the values of the constants a & b and
the order of growth of the function f(n).
Master theorem
Theorem: If f(n) ∈ Θ(n^d) with d ≥ 0 in the recurrence equation
T(n) = aT(n/b) + f(n),
then
        Θ(n^d)          if a < b^d
T(n) =  Θ(n^d log n)    if a = b^d
        Θ(n^(log_b a))  if a > b^d
Example: T(n) = 2T(n/2) + 1. Here a = 2, b = 2, f(n) = 1 ∈ Θ(n^0), so d = 0.
Therefore:
a > b^d, i.e., 2 > 2^0
Case 3 of the master theorem holds. Therefore:
T(n) ∈ Θ(n^(log_b a)) = Θ(n^(log_2 2)) = Θ(n)
1.3 Binary Search
Description:
Binary search is a dichotomic divide-and-conquer search algorithm. It inspects the middle element of a sorted list. If it is equal to the sought value, then the position has been found. Otherwise, if the key is less than the middle element, do a binary search on the first half, else on the second half.
Algorithm:
The algorithm can be implemented recursively or non-recursively. An iterative version:
ALGORITHM BinarySearch(A[0..n-1], key)
//Implements non-recursive binary search
//Input: A sorted array A[0..n-1] and a search key
//Output: The index of the key in A, or -1 if the key is not found
l ← 0
r ← n - 1
while l ≤ r do
    m ← ⌊(l + r) / 2⌋
    if key = A[m]
        return m
    else if key < A[m]
        r ← m - 1
    else
        l ← m + 1
return -1
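The same algorithm as a runnable Python sketch (names and test data are ours):

def binary_search(a, key):
    """Iterative binary search on a sorted list; returns an index or -1."""
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2
        if key == a[m]:
            return m
        elif key < a[m]:
            r = m - 1
        else:
            l = m + 1
    return -1

print(binary_search([3, 14, 27, 31, 39, 42, 55, 70], 31))  # 3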
Analysis:
• Input size: Array size, n
• Basic operation: key comparison
• Efficiency depends on the input:
Best case – the key matches the middle element
Worst case – the key is not found, or is found only at the last possible comparison
• Let C(n) denote the number of times the basic operation is executed. Then
Cworst(n) = worst-case efficiency. Since after each comparison the algorithm divides the problem into half the size, we have
Cworst(n) = Cworst(n/2) + 1 for n > 1
C(1) = 1
• Solving the recurrence equation using the master theorem, to give the number of times the search key is compared with an element in the array, we have:
C(n) = C(n/2) + 1
a = 1, b = 2
f(n) = n^0; d = 0
Since a = b^d (1 = 2^0), case 2 holds:
C(n) = Θ(n^d log n) = Θ(n^0 log n) = Θ(log n)
Applications of binary search:
• Number guessing game
• Word lists/search dictionary etc
Advantages:
• Efficient on very big list
• Can be implemented iteratively/recursively
Limitations:
• Interacts poorly with the memory hierarchy
• Requires given list to be sorted
• Due to random access of list element, needs arrays instead of linked list.
1.4 Merge Sort
Definition:
Merge sort is a sort algorithm that splits the items to be sorted into two groups,
recursively sorts each group, and merges them into a final sorted sequence.
Features:
• Is a comparison based algorithm
• Is a stable algorithm
• Is a perfect example of divide & conquer algorithm design strategy
• It was invented by John Von Neumann
Algorithm:
ALGORITHM Mergesort(A[0..n-1])
//Sorts array A[0..n-1] by recursive merge sort
//Input: An array A[0..n-1] of orderable elements
//Output: Array A[0..n-1] sorted in ascending order
if n > 1
    copy A[0 ... ⌊n/2⌋ - 1] to B[0 ... ⌊n/2⌋ - 1]
    copy A[⌊n/2⌋ ... n - 1] to C[0 ... ⌈n/2⌉ - 1]
    Mergesort(B[0 ... ⌊n/2⌋ - 1])
    Mergesort(C[0 ... ⌈n/2⌉ - 1])
    Merge(B, C, A)
ALGORITHM Merge(B[0..p-1], C[0..q-1], A[0..p+q-1])
//Merges two sorted arrays into one sorted array
//Input: Arrays B and C, both sorted
//Output: Sorted array A of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do
    if B[i] ≤ C[j]
        A[k] ← B[i]; i ← i + 1
    else
        A[k] ← C[j]; j ← j + 1
    k ← k + 1
if i = p
    copy C[j..q-1] to A[k..p+q-1]
else
    copy B[i..p-1] to A[k..p+q-1]
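A compact Python sketch of the same split-and-merge scheme (names and test list are ours):

def merge_sort(a):
    """Returns a new sorted list; mirrors the Mergesort/Merge pseudocode above."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    b = merge_sort(a[:mid])
    c = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(b) and j < len(c):      # merging stage
        if b[i] <= c[j]:
            merged.append(b[i]); i += 1
        else:
            merged.append(c[j]); j += 1
    merged.extend(b[i:])                  # copy whichever half remains
    merged.extend(c[j:])
    return merged

print(merge_sort([6, 3, 7, 8, 2, 4, 5, 1]))  # [1, 2, 3, 4, 5, 6, 7, 8]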
Example:
Apply merge sort to the list 6, 3, 7, 8, 2, 4, 5, 1.
Splitting:
6 3 7 8 2 4 5 1
6 3 7 8 | 2 4 5 1
6 3 | 7 8 | 2 4 | 5 1
6 | 3 | 7 | 8 | 2 | 4 | 5 | 1
Merging:
3 6 | 7 8 | 2 4 | 1 5
3 6 7 8 | 1 2 4 5
1 2 3 4 5 6 7 8
Analysis:
• Input size: Array size, n
• Basic operation: key comparison
• Best, worst, average case exists:
Worst case: during key comparisons, neither of the two arrays becomes empty before the other one contains just one element.
• Let C(n) denote the number of times the basic operation is executed. Then
C(n) = 2C(n/2) + Cmerge(n) for n > 1, C(1) = 0
where Cmerge(n) is the number of key comparisons made during the merging stage; in the worst case Cmerge(n) = n − 1.
• Solving the recurrence equation using the master theorem:
C(n) = 2C(n/2) + n − 1 for n > 1
C(1) = 0
Here a = 2, b = 2, f(n) = n; d = 1
Therefore 2 = 2^1, so case 2 holds:
C(n) = Θ(n^d log n) = Θ(n^1 log n) = Θ(n log n)
Advantages:
• Number of comparisons performed is nearly optimal.
• Merge sort never degrades to O(n²)
• It can be applied to files of any size
Limitations:
• Uses O(n) additional memory.
1.6 Quick Sort and its performance
Definition:
Quick sort is a well-known sorting algorithm based on the divide & conquer approach. The steps are:
1. Pick an element called pivot from the list
2. Reorder the list so that all elements which are less than the pivot come before the
pivot and all elements greater than pivot come after it. After this partitioning, the
pivot is in its final position. This is called the partition operation
3. Recursively sort the sub-list of lesser elements and sub-list of greater elements.
Features:
• Developed by C.A.R. Hoare
• Efficient algorithm
• NOT stable sort
• Significantly faster in practice, than other algorithms
ALGORITHM Quicksort(A[l..r])
//Sorts a sub-array by quick sort
//Input: A sub-array A[l..r] of A[0..n-1], defined by its left and right indices l and r
//Output: The sub-array A[l..r], sorted in ascending order
if l < r
    s ← Partition(A[l..r]) // s is a split position
    Quicksort(A[l..s-1])
    Quicksort(A[s+1..r])

ALGORITHM Partition(A[l..r])
//Partitions a sub-array by using its first element as a pivot
//Input: A sub-array A[l..r] of A[0..n-1], defined by its left and right indices l and r (l < r)
//Output: A partition of A[l..r], with the split position returned as this function's value
p ← A[l]
i ← l; j ← r + 1
repeat
    repeat i ← i + 1 until A[i] ≥ p   //left-right scan
    repeat j ← j - 1 until A[j] ≤ p   //right-left scan
    if i < j                          //need to continue with the scan
        swap(A[i], A[j])
until i ≥ j                           //no need to scan further
swap(A[l], A[j])
return j
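A Python sketch of the same first-element-pivot scheme, with index guards added so the scans cannot run off the array (function names and test list are ours):

def partition(a, l, r):
    """Partition a[l..r] around pivot a[l]; returns the split position."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and a[i] < p:   # left-right scan
            i += 1
        j -= 1
        while a[j] > p:              # right-left scan (a[l] = p stops it)
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]          # put the pivot in its final place
    return j

def quicksort(a, l=0, r=None):
    if r is None:
        r = len(a) - 1
    if l < r:
        s = partition(a, l, r)
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)

nums = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(nums)
print(nums)  # [1, 2, 3, 4, 5, 7, 8, 9]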
Example: Sort by quick sort the following list: 5, 3, 1, 9, 8, 2, 4, 7, show recursion tree.
Analysis:
• Input size: Array size, n
• Basic operation: key comparison
• Best, worst, average case exists:
Best case: the partition splits the array in the middle each time.
Worst case: the input is already sorted; during key comparisons one sub-array is empty while the remaining n − 1 elements are in the other partition.
• Let C(n) denote the number of times the basic operation is executed in the worst case. Then
C(n) = C(n-1) + (n+1) for n > 1 (two sub-problems of sizes 0 and n-1, respectively)
C(1) = 1
which solves to C(n) ∈ Θ(n²).
Best case:
C(n) = 2C(n/2) + Θ(n) (two sub-problems of size n/2 each), which solves to Θ(n log n).
NOTE:
The quick sort efficiency in the average case is Θ(n log n) on random input.
UNIT - 3
THE GREEDY METHOD
3.1 The General Method
3.2 Knapsack Problem
3.3 Job Sequencing with Deadlines
3.4 Minimum-Cost Spanning Trees
3.5 Prim's Algorithm
3.6 Kruskal’s Algorithm
3.7 Single Source Shortest Paths.
The method:
• Applicable to optimization problems ONLY
• Constructs a solution through a sequence of steps
• Each step expands a partially constructed solution so far, until a complete solution
to the problem is reached.
On each step, the choice made must be
• Feasible: it has to satisfy the problem's constraints
• Locally optimal: it has to be the best local choice among all feasible choices available on that step
available on that step
• Irrevocable: Once made, it cannot be changed on subsequent steps of the
algorithm
NOTE:
• Greedy method works best when applied to problems with the greedy-choice
property
• A globally-optimal solution can always be found by a series of local
improvements from a starting configuration.
• Optimal solutions:
Change making
Minimum Spanning Tree (MST)
Single-source shortest paths
Huffman codes
• Approximations:
Traveling Salesman Problem (TSP)
Fractional Knapsack problem
3.2 Knapsack Problem
A greedy strategy fails for the 0/1 knapsack problem:
• it may be unable to fill the knapsack to capacity, and the empty space lowers the effective value per pound of the packing;
• we must compare the solution to the sub-problem in which an item is included with the solution to the sub-problem in which the item is excluded before we can make the choice.
3.3 Job Sequencing with Deadlines
• Consider the jobs in non-increasing order of profit, subject to the constraint that the resulting job sequence J is a feasible solution.
• In the example considered before, the non-increasing profit vector is
(p1, p4, p3, p2) = (100, 27, 15, 10), with deadlines (d1, d4, d3, d2) = (2, 1, 2, 1)
J = {1} is feasible
J = {1, 4} is feasible with processing sequence (4, 1)
J = {1, 3, 4} is not feasible
J = {1, 2, 4} is not feasible
J = {1, 4} is optimal
• So, we have only to prove that if J is a feasible solution, then there exists a permutation σ of the jobs in J giving a possible order in which they may be processed.
• Suppose J is a feasible solution. Then there exists σ = (r1, r2, ..., rk) such that
d_rj ≥ j for 1 ≤ j ≤ k,
i.e., d_r1 ≥ 1, d_r2 ≥ 2, ..., d_rk ≥ k,
each job requiring one unit of time.
If all jobs in J ∪ {i} can be completed by their deadlines (i.e., D(J(r)) ≥ r for 1 ≤ r ≤ k + 1 when the jobs are ordered by deadline), we insert i into J:
    then J ← J ∪ {i}
end if
repeat
end greedy-job
Procedure JS(D, J, n, k)
// D(i) ≥ 1, 1 ≤ i ≤ n, are the deadlines
// the jobs are ordered such that p1 ≥ p2 ≥ ... ≥ pn
// in the solution, J is kept ordered by deadline: D(J(i)) ≤ D(J(i+1)), 1 ≤ i < k
integer D(0:n), J(0:n), i, k, n, r
D(0) ← J(0) ← 0 // J(0) is a fictitious job with D(0) = 0
k ← 1; J(1) ← 1 // job one is inserted into J
for i ← 2 to n do // consider jobs in non-increasing order of pi
    // find the position of i and check feasibility of insertion
    r ← k // r and k are indices for existing jobs in J
    // find r such that i can be inserted after r
    while D(J(r)) > D(i) and D(J(r)) ≠ r do
        // job J(r) can be processed after i and
        // the deadline of job J(r) is not exactly r
        r ← r - 1 // consider whether job J(r-1) can be processed after i
    repeat
    if D(J(r)) ≤ D(i) and D(i) > r then
        // the new job i can come after existing job J(r); insert i into J at position r+1
        for l ← k to r+1 by -1 do
            J(l+1) ← J(l) // shift jobs J(r+1) to J(k) right by one position
        repeat
        J(r+1) ← i; k ← k + 1
    end if
repeat
end JS
COMPLEXITY ANALYSIS OF THE JS ALGORITHM
• Let n be the number of jobs and s the number of jobs included in the solution.
• The main for-loop is iterated n − 1 times.
• Each iteration takes O(k), where k is the number of existing jobs.
∴ The time needed by the algorithm is O(sn); since s ≤ n, the worst-case time is O(n²).
If di = n − i + 1 for 1 ≤ i ≤ n, JS takes Θ(n²) time.
D and J need Θ(s) amount of space.
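A small Python sketch of the greedy idea (a simple feasibility re-check is used instead of the ordered-insertion bookkeeping of JS; function name is ours):

def job_sequencing(profits, deadlines):
    """Greedy job sequencing with deadlines (unit-time jobs): consider jobs
    in non-increasing profit order, keep a job if the set stays feasible."""
    jobs = sorted(range(len(profits)), key=lambda i: -profits[i])
    selected = []
    for i in jobs:
        trial = sorted(selected + [i], key=lambda j: deadlines[j])
        # feasible iff the job in slot pos (1-based) meets its deadline
        if all(deadlines[j] >= pos + 1 for pos, j in enumerate(trial)):
            selected = trial
    return [j + 1 for j in selected]   # 1-based job numbers

# Instance from the notes: p = (100, 10, 15, 27), d = (2, 1, 2, 1)
print(job_sequencing([100, 10, 15, 27], [2, 1, 2, 1]))  # [4, 1]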
3.4 Minimum-Cost Spanning Trees
Spanning Tree
A spanning tree is a connected acyclic sub-graph (tree) of a given graph G that includes all of G's vertices.
Minimum Spanning Tree (MST)
Definition:
MST of a weighted, connected graph G is defined as: A spanning tree of G with
minimum total weight.
Example: Consider the example of spanning tree:
For the given graph there are three possible spanning trees. Among them the spanning
tree with the minimum weight 6 is the MST for the given graph
Algorithm:
ALGORITHM Prim(G)
//Prim's algorithm for constructing a MST
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a MST of G
VT ← {v0} // the set of tree vertices can be initialized with any vertex
ET ← Ø
for i ← 1 to |V| - 1 do
    find a minimum-weight edge e* = (v*, u*) among all the edges (v, u) such that v is in VT and u is in V - VT
    VT ← VT ∪ {u*}
    ET ← ET ∪ {e*}
return ET
STEP 1: Start with a tree, T0, consisting of one vertex.
STEP 2: 'Grow' the tree one vertex/edge at a time:
• Construct a series of expanding sub-trees T1, T2, ..., Tn-1.
• At each stage, construct Ti+1 from Ti by adding the minimum-weight edge connecting a vertex in the tree (Ti) to one vertex not yet in the tree, chosen from the 'fringe' edges (this is the 'greedy' step!).
The algorithm stops when all vertices are included.
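A heap-based Python sketch of Prim's algorithm, run on the example graph used below (the dict representation and names are ours):

import heapq

def prim(graph, start):
    """Prim's MST; graph: dict mapping vertex -> list of (neighbor, weight)."""
    visited = {start}
    fringe = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(fringe)
    mst = []
    while fringe and len(visited) < len(graph):
        w, u, v = heapq.heappop(fringe)    # nearest fringe vertex
        if v not in visited:
            visited.add(v)
            mst.append((u, v, w))
            for x, wx in graph[v]:
                if x not in visited:
                    heapq.heappush(fringe, (wx, v, x))
    return mst

g = {'a': [('b', 3), ('e', 6), ('f', 5)], 'b': [('a', 3), ('c', 1), ('f', 4)],
     'c': [('b', 1), ('d', 6), ('f', 4)], 'd': [('c', 6), ('e', 8), ('f', 5)],
     'e': [('a', 6), ('d', 8), ('f', 2)], 'f': [('a', 5), ('b', 4), ('c', 4), ('d', 5), ('e', 2)]}
print(prim(g, 'a'))  # [('a','b',3), ('b','c',1), ('b','f',4), ('f','e',2), ('f','d',5)]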
Example:
Apply Prim's algorithm to find a MST of the graph with vertices {a, b, c, d, e, f} and weighted edges
ab = 3, ae = 6, af = 5, bc = 1, bf = 4, cd = 6, cf = 4, de = 8, df = 5, ef = 2.
Solution:
Tree vertex | Remaining (fringe) vertices
a(-, -)     | b(a, 3), c(-, ∞), d(-, ∞), e(a, 6), f(a, 5)
b(a, 3)     | c(b, 1), d(-, ∞), e(a, 6), f(b, 4)
c(b, 1)     | d(c, 6), e(a, 6), f(b, 4)
f(b, 4)     | d(f, 5), e(f, 2)
e(f, 2)     | d(f, 5)
d(f, 5)     |
The MST consists of the edges ab, bc, bf, ef and df, with total weight 3 + 1 + 4 + 2 + 5 = 15.
Efficiency:
The efficiency of Prim's algorithm depends on the data structure used for the priority queue:
• Unordered array: Θ(n²)
• Binary heap: Θ(m log n)
• Min-heap, for a graph with n nodes and m edges: O((n + m) log n)
Conclusion:
• Prim's algorithm is a 'vertex-based' algorithm.
• Prim's algorithm needs a priority queue for locating the nearest vertex; the choice of priority queue matters in a Prim implementation:
o Array - optimal for dense graphs
o Binary heap - better for sparse graphs
o Fibonacci heap - best in theory, but not in practice
3.6 Kruskal’s Algorithm
Algorithm:
The method:
STEP 1: Sort the edges in increasing order of weight.
STEP 2: Start with a forest of |V| single-vertex trees.
STEP 3: The number of trees is reduced by ONE at every inclusion of an edge.
At each stage:
• Among the edges not yet included, select the one with minimum weight AND which does not form a cycle.
• This edge reduces the number of trees by one by combining two trees of the forest.
The algorithm stops when |V| - 1 edges are included in the MST, i.e., when the number of trees in the forest is reduced to ONE.
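A Python sketch using a union-find structure for the cycle test (vertex numbering and names are ours; the edge list matches the example below, with a..f mapped to 0..5):

def kruskal(n, edges):
    """Kruskal's MST with union-find. edges: list of (weight, u, v)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):           # edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # (u, v) does not form a cycle
            parent[ru] = rv                 # merge the two trees
            mst.append((u, v, w))
        if len(mst) == n - 1:
            break
    return mst

edges = [(3, 0, 1), (5, 0, 5), (6, 0, 4), (1, 1, 2), (4, 1, 5),
         (4, 2, 5), (6, 2, 3), (5, 3, 5), (8, 3, 4), (2, 4, 5)]
print(kruskal(6, edges))  # [(1,2,1), (4,5,2), (0,1,3), (1,5,4), (3,5,5)]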
Example:
Apply Kruskal's algorithm to find a MST of the same graph as in the Prim example (vertices {a, b, c, d, e, f}; edges ab = 3, ae = 6, af = 5, bc = 1, bf = 4, cd = 6, cf = 4, de = 8, df = 5, ef = 2).
Solution:
The list of edges is:
Edge    ab  af  ae  bc  bf  cf  cd  df  de  ef
Weight   3   5   6   1   4   4   6   5   8   2
Sorted by increasing weight: bc(1), ef(2), ab(3), bf(4), cf(4), af(5), df(5), ae(6), cd(6), de(8).
Processing the edges in this order:
Edge | Weight | Inserted?        | Insertion order
bc   | 1      | YES              | 1
ef   | 2      | YES              | 2
ab   | 3      | YES              | 3
bf   | 4      | YES              | 4
cf   | 4      | NO (forms cycle) | -
af   | 5      | NO (forms cycle) | -
df   | 5      | YES              | 5
The algorithm stops as |V| - 1 = 5 edges are included in the MST: {bc, ef, ab, bf, df}.
Efficiency:
The efficiency of Kruskal's algorithm is dominated by the time needed for sorting the edge weights of the given graph.
• With an efficient sorting algorithm: Θ(|E| log |E|)
Conclusion:
• Kruskal's algorithm is an 'edge-based' algorithm.
• Prim's algorithm with a heap is faster than Kruskal's algorithm.
3.7 Single Source Shortest Paths
ALGORITHM Dijkstra(G, s)
//Dijkstra's algorithm for single-source shortest paths
//Input: A weighted connected graph G = (V, E) with nonnegative weights and source vertex s
//Output: The length Du of a shortest path from s to u and its penultimate vertex Pu, for every u in V
Initialize(Q) //initialize vertex priority queue to empty
for every vertex v in V do
    Dv ← ∞; Pv ← null
    Insert(Q, v, Dv)
Ds ← 0; Decrease(Q, s, Ds)
VT ← Ø
for i ← 0 to |V| - 1 do
    u* ← DeleteMin(Q)
    //expanding the tree, choosing the locally best vertex
    VT ← VT ∪ {u*}
    for every vertex u in V - VT that is adjacent to u* do
        if Du* + w(u*, u) < Du
            Du ← Du* + w(u*, u); Pu ← u*
            Decrease(Q, u, Du)
The method
Dijkstra's algorithm solves the single-source shortest path problem in two stages:
Stage 1: A greedy algorithm computes the shortest distance from the source to all other nodes in the graph and saves it in a data structure.
Stage 2: Uses the data structure to find a shortest path from the source to any vertex v.
• At each step, and for each vertex x, keep track of a 'distance' D(x) and a directed path P(x) from the root to vertex x of length D(x).
• Scanning first from the root r, take initial paths P(r, x) = (r, x) with
D(x) = w(rx) when rx is an edge,
D(x) = ∞ when rx is not an edge.
For each temporary vertex y distinct from x, set
D(y) = min{ D(y), D(x) + w(xy) }
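A heap-based Python sketch of Dijkstra's algorithm (dict representation and names are ours; the graph is the same assumed example as in the Prim sketch):

import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; graph: dict vertex -> [(neighbor, weight)].
    Assumes nonnegative edge weights."""
    dist = {v: float('inf') for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale entry, already improved
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, prev

g = {'a': [('b', 3), ('e', 6), ('f', 5)], 'b': [('a', 3), ('c', 1), ('f', 4)],
     'c': [('b', 1), ('d', 6), ('f', 4)], 'd': [('c', 6), ('e', 8), ('f', 5)],
     'e': [('a', 6), ('d', 8), ('f', 2)], 'f': [('a', 5), ('b', 4), ('c', 4), ('d', 5), ('e', 2)]}
print(dijkstra(g, 'a')[0])  # {'a': 0, 'b': 3, 'c': 4, 'd': 10, 'e': 6, 'f': 5}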
Example:
Apply Dijkstra's algorithm to find single-source shortest paths with vertex a as the source, for the same graph as in the Prim example (edges ab = 3, ae = 6, af = 5, bc = 1, bf = 4, cd = 6, cf = 4, de = 8, df = 5, ef = 2).
Solution:
The length Dv of the shortest path from the source to each vertex v, and the penultimate vertex Pv, initialized for every vertex v in V:
Da = 0, Pa = null
Db = ∞, Pb = null
Dc = ∞, Pc = null
Dd = ∞, Pd = null
De = ∞, Pe = null
Df = ∞, Pf = null
Tree vertex | Remaining (fringe) vertices                  | Shortest paths found so far
a(-, 0)     | b(a, 3), c(-, ∞), d(-, ∞), e(a, 6), f(a, 5)  | Da = 0, Pa = a
b(a, 3)     | c(b, 3+1 = 4), d(-, ∞), e(a, 6), f(a, 5)     | Db = 3, Pb = [a, b]
c(b, 4)     | d(c, 4+6 = 10), e(a, 6), f(a, 5)             | Dc = 4, Pc = [a, b, c]
f(a, 5)     | d(c, 10), e(a, 6)                            | Df = 5, Pf = [a, f]
e(a, 6)     | d(c, 10)                                     | De = 6, Pe = [a, e]
d(c, 10)    |                                              | Dd = 10, Pd = [a, b, c, d]
The shortest paths from a are: b: a-b (length 3), c: a-b-c (4), f: a-f (5), e: a-e (6), d: a-b-c-d (10).
Conclusion:
• Dijkstra's algorithm doesn't work with negative weights
• It is applicable to both undirected and directed graphs
• Using an unordered array to store the priority queue: efficiency = Θ(n²)
• Using a min-heap to store the priority queue: efficiency = O(m log n)
UNIT - 4
Dynamic Programming
4.1 The General Method
4.2 Warshall’s Algorithm
4.3 Floyd’s Algorithm for the All-Pairs Shortest Paths Problem
4.4 Single-Source Shortest Paths
4.5 General Weights 0/1 Knapsack
4.6 The Traveling Salesperson problem.
4.1 The General Method
Definition
Dynamic programming (DP) is a general algorithm design technique for solving problems with overlapping sub-problems. The technique was invented by the American mathematician Richard Bellman in the 1950s.
Key Idea
The key idea is to save answers of overlapping smaller sub-problems to avoid re-
computation.
Dynamic Programming Properties
• An instance is solved using the solutions for smaller instances.
• The solutions for a smaller instance might be needed multiple times, so store their
results in a table.
• Thus each smaller instance is solved only once.
• Additional space is used to save time.
Dynamic Programming vs. Divide & Conquer
LIKE divide & conquer, dynamic programming solves problems by combining solutions
to sub-problems. UNLIKE divide & conquer, sub-problems are NOT independent in
dynamic programming.
Dynamic Programming vs. Divide & Conquer: EXAMPLE
Computing Fibonacci Numbers
F(n) = 0 if n = 0
F(n) = 1 if n = 1
F(n) = F(n-1) + F(n-2) if n > 1
Algorithm F(n)
// Computes the nth Fibonacci number recursively by using its definitions
// Input: A non-negative integer n
// Output: The nth Fibonacci number
if n = 0 or n = 1 then
return n
else
return F(n-1) + F(n-2)
(The recursion tree for F(n) branches into F(n-1) and F(n-2), so the same sub-problems, e.g., F(n-2), are solved repeatedly; dynamic programming avoids this re-computation.)
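A minimal Python sketch contrasting the two approaches: adding memoization turns the exponential recursion into linear work (names are ours):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci with memoization: each sub-problem F(k) is computed only once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55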
Rules of Dynamic Programming
1. OPTIMAL SUB-STRUCTURE: An optimal solution to a problem contains
optimal solutions to sub-problems
2. OVERLAPPING SUB-PROBLEMS: A recursive solution contains a 'small' number of distinct sub-problems repeated many times
3. BOTTOM UP FASHION: Computes the solution in a bottom-up fashion in the
final step
• Let A denote the initial boolean (adjacency) matrix.
• The element R(k)[i, j], in the i-th row and j-th column of matrix R(k) (k = 0, 1, ..., n), is equal to 1 if and only if there exists a directed path from the i-th vertex to the j-th vertex with intermediate vertices, if any, numbered not higher than k.
• Recursive definition:
• Case 1: A path from vi to vj restricted to using only vertices from {v1, v2, ..., vk} as intermediate vertices does not use vk. Then
R(k)[i, j] = R(k-1)[i, j].
• Case 2: A path from vi to vj restricted to using only vertices from {v1, v2, ..., vk} as intermediate vertices does use vk. Then
R(k)[i, j] = R(k-1)[i, k] AND R(k-1)[k, j].
Combining the two cases:
R(k)[i, j] = R(k-1)[i, j] OR (R(k-1)[i, k] AND R(k-1)[k, j])
Algorithm:
ALGORITHM Warshall(A[1..n, 1..n])
// Computes the transitive closure matrix
// Input: Adjacency matrix A
// Output: Transitive closure matrix R
R(0) ← A
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            R(k)[i, j] ← R(k-1)[i, j] OR (R(k-1)[i, k] AND R(k-1)[k, j])
return R(n)
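A direct Python sketch, run on the digraph of the example below (matrix layout and names are ours):

def warshall(adj):
    """Transitive closure by Warshall's algorithm; adj: n x n 0/1 matrix."""
    n = len(adj)
    r = [row[:] for row in adj]          # work on a copy of A
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

# Digraph from the example: A->C, B->A, B->D, D->B
A = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
for row in warshall(A):
    print(row)   # [0,0,1,0] / [1,1,1,1] / [0,0,0,0] / [1,1,1,1]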
Example: Find the transitive closure of the digraph with vertices A, B, C, D and edges A→C, B→A, B→D, D→B, using Warshall's algorithm.
Solution:
         A B C D
R(0) = A 0 0 1 0
       B 1 0 0 1
       C 0 0 0 0
       D 0 1 0 0
k = 1 (vertex A can be an intermediate node):
R(1)[2,3] = R(0)[2,3] OR (R(0)[2,1] AND R(0)[1,3]) = 0 OR (1 AND 1) = 1
         A B C D
R(1) = A 0 0 1 0
       B 1 0 1 1
       C 0 0 0 0
       D 0 1 0 0
k = 2 (vertices {A, B} can be intermediate nodes):
R(2)[4,1] = R(1)[4,1] OR (R(1)[4,2] AND R(1)[2,1]) = 0 OR (1 AND 1) = 1
R(2)[4,3] = R(1)[4,3] OR (R(1)[4,2] AND R(1)[2,3]) = 0 OR (1 AND 1) = 1
R(2)[4,4] = R(1)[4,4] OR (R(1)[4,2] AND R(1)[2,4]) = 0 OR (1 AND 1) = 1
         A B C D
R(2) = A 0 0 1 0
       B 1 0 1 1
       C 0 0 0 0
       D 1 1 1 1
k = 3 (vertices {A, B, C} can be intermediate nodes): NO CHANGE, so R(3) = R(2).
k = 4 (vertices {A, B, C, D} can be intermediate nodes):
R(4)[2,2] = R(3)[2,2] OR (R(3)[2,4] AND R(3)[4,2]) = 0 OR (1 AND 1) = 1
         A B C D
R(4) = A 0 0 1 0
       B 1 1 1 1
       C 0 0 0 0
       D 1 1 1 1
R(4) is the TRANSITIVE CLOSURE of the given graph.
Efficiency:
• Time efficiency is Θ(n³)
• Space efficiency: Requires extra space for separate matrices for recording
intermediate results of the algorithm.
4.3 Floyd's Algorithm for the All-Pairs Shortest Paths Problem
Problem statement:
Given a weighted graph G = (V, E), the all-pairs shortest paths problem is to find the shortest path between every pair of vertices vi, vj ∈ V.
Solution:
A number of algorithms are known for solving All pairs shortest path problem
• Matrix multiplication based algorithm
• Dijkstra's algorithm
• Bellman-Ford algorithm
• Floyd's algorithm
Underlying idea of Floyd's algorithm:
• Let W denote the initial weight matrix.
• Let D(k)[i, j] denote the cost of the shortest path from i to j whose intermediate vertices are a subset of {1, 2, ..., k}.
• Recursive definition:
Case 1: A shortest path from vi to vj restricted to using only vertices from {v1, v2, ..., vk} as intermediate vertices does not use vk. Then
D(k)[i, j] = D(k-1)[i, j].
Case 2: A shortest path from vi to vj restricted to using only vertices from {v1, v2, ..., vk} as intermediate vertices does use vk. Then
D(k)[i, j] = D(k-1)[i, k] + D(k-1)[k, j].
We conclude:
D(k)[i, j] = min{ D(k-1)[i, j], D(k-1)[i, k] + D(k-1)[k, j] }
Algorithm:
ALGORITHM Floyd(W[1..n, 1..n])
// Implements Floyd's algorithm
// Input: Weight matrix W
// Output: Distance matrix of the shortest paths' lengths
D ← W
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i, j] ← min{ D[i, j], D[i, k] + D[k, j] }
return D
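A direct Python sketch, run on the example matrix below (names are ours; missing edges are represented by infinity):

INF = float('inf')

def floyd(w):
    """All-pairs shortest paths; w: n x n weight matrix, INF for missing edges."""
    n = len(w)
    d = [row[:] for row in w]            # work on a copy of W
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

W = [[0, 2, 5],
     [4, 0, INF],
     [INF, 3, 0]]
print(floyd(W))  # [[0, 2, 5], [4, 0, 9], [7, 3, 0]]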
Example:
Find all-pairs shortest paths using Floyd's algorithm for the weighted digraph with vertices A, B, C and edges A→B = 2, A→C = 5, B→A = 4, C→B = 3.
Solution:
         A B C
D(0) = A 0 2 5
       B 4 0 ∞
       C ∞ 3 0
k = 1 (vertex 1 can be an intermediate node):
D(1)[2,3] = min{ D(0)[2,3], D(0)[2,1] + D(0)[1,3] } = min{ ∞, 4 + 5 } = 9
         A B C
D(1) = A 0 2 5
       B 4 0 9
       C ∞ 3 0
k = 2 (vertices 1, 2 can be intermediate nodes):
D(2)[3,1] = min{ D(1)[3,1], D(1)[3,2] + D(1)[2,1] } = min{ ∞, 3 + 4 } = 7
         A B C
D(2) = A 0 2 5
       B 4 0 9
       C 7 3 0
k = 3 (vertices 1, 2, 3 can be intermediate nodes): NO CHANGE.
         A B C
D(3) = A 0 2 5
       B 4 0 9
       C 7 3 0
D(3) gives the ALL-PAIRS SHORTEST PATHS for the given graph.
4.4 0/1 Knapsack Problem and Memory Functions
Definition:
Given a set of n items of known weights w1,…,wn and values v1,…,vn and a knapsack
of capacity W, the problem is to find the most valuable subset of the items that fit into the
knapsack.
Knapsack problem is an OPTIMIZATION PROBLEM
Step 2:
Recursively define the value of an optimal solution in terms of solutions to smaller problems.
Initial conditions:
V[0, j] = 0 for j ≥ 0
V[i, 0] = 0 for i ≥ 0
Recursive step:
V[i, j] = max{ V[i-1, j], vi + V[i-1, j - wi] }   if j - wi ≥ 0
V[i, j] = V[i-1, j]                               if j - wi < 0
Step 3:
Bottom up computation using iteration
Question:
Apply the bottom-up dynamic programming algorithm to the following instance of the knapsack problem, with capacity W = 5:
Item  Weight  Value
1     2       $3
2     3       $4
3     4       $5
4     5       $6
Solution:
Using the dynamic programming approach, we have:
Step | Calculation
1  | Initial conditions: V[0, j] = 0 for j ≥ 0 and V[i, 0] = 0 for i ≥ 0 (row 0 and column 0 are filled with zeros).
2  | w1 = 2 > j = 1 (case 1): V[1,1] = V[0,1] = 0
3  | w1 = 2 = j = 2 (case 2): V[1,2] = max{ V[0,2], 3 + V[0,0] } = max{ 0, 3 } = 3
4  | w1 = 2 < j for j = 3, 4, 5 (case 2): V[1,j] = max{ V[0,j], 3 + V[0,j-2] } = 3
5  | w2 = 3 > j = 1 (case 1): V[2,1] = V[1,1] = 0
6  | w2 = 3 > j = 2 (case 1): V[2,2] = V[1,2] = 3
7  | w2 = 3 = j = 3 (case 2): V[2,3] = max{ V[1,3], 4 + V[1,0] } = max{ 3, 4 } = 4
8  | w2 = 3 < j = 4 (case 2): V[2,4] = max{ V[1,4], 4 + V[1,1] } = max{ 3, 4 } = 4
9  | w2 = 3 < j = 5 (case 2): V[2,5] = max{ V[1,5], 4 + V[1,2] } = max{ 3, 7 } = 7
10 | w3 = 4 > j for j = 1, 2, 3 (case 1): V[3,j] = V[2,j], i.e., 0, 3, 4
11 | w3 = 4 = j = 4 (case 2): V[3,4] = max{ V[2,4], 5 + V[2,0] } = max{ 4, 5 } = 5
12 | w3 = 4 < j = 5 (case 2): V[3,5] = max{ V[2,5], 5 + V[2,1] } = max{ 7, 5 } = 7
13 | w4 = 5 > j for j = 1, 2, 3, 4 (case 1): V[4,j] = V[3,j], i.e., 0, 3, 4, 5
14 | w4 = 5 = j = 5 (case 2): V[4,5] = max{ V[3,5], 6 + V[3,0] } = max{ 7, 6 } = 7
The completed table:
V[i,j]  j=0  1  2  3  4  5
i=0      0   0  0  0  0  0
1        0   0  3  3  3  3
2        0   0  3  4  4  7
3        0   0  3  4  5  7
4        0   0  3  4  5  7
The maximal value is V[4, 5] = $7.
To find the composition of the optimal subset, trace back from V[4, 5]:
Step | Observation | Conclusion
1 | V[4, 5] = V[3, 5] | item 4 NOT included in the subset
2 | V[3, 5] = V[2, 5] | item 3 NOT included in the subset
3 | V[2, 5] ≠ V[1, 5] | item 2 included in the subset
4 | Since item 2 (weight 3 kg) is included, the remaining capacity of the knapsack is 5 - 3 = 2 kg; V[1, 2] ≠ V[0, 2] | item 1 included in the subset
Optimal subset: { item 1, item 2 }, with total value $7.
Efficiency:
• The running time of the knapsack problem using the dynamic programming algorithm is O(n × W).
• The time needed to find the composition of an optimal solution is O(n + W).
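A Python sketch of the bottom-up computation, run on the instance above (names are ours):

def knapsack(weights, values, W):
    """Bottom-up 0/1 knapsack; returns table V with V[n][W] = optimal value."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]    # row 0 / column 0 are zeros
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if j - weights[i - 1] >= 0:          # item i fits: take the max
                V[i][j] = max(V[i - 1][j],
                              values[i - 1] + V[i - 1][j - weights[i - 1]])
            else:                                # item i does not fit
                V[i][j] = V[i - 1][j]
    return V

V = knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5)
print(V[4][5])  # 7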
Memory function
The method:
• Uses a top-down manner.
• Maintains a table as in the bottom-up approach.
• Initially, all the table entries are initialized with a special 'null' symbol to indicate that they have not yet been calculated.
• Whenever a new value needs to be calculated, the method checks the corresponding entry in the table first:
• If the entry is NOT 'null', it is simply retrieved from the table.
• Otherwise, it is computed by the recursive call, whose result is then recorded in the table.
Algorithm:
ALGORITHM MFKnap(i, j)
//Implements the memory function method for the knapsack problem
//Uses as global variables the input arrays Weights[1..n] and Values[1..n] and the table V[0..n, 0..W], whose entries are initialized with -1 ('null') except for row 0 and column 0, which are initialized with 0
if V[i, j] < 0
    if j < Weights[i]
        value ← MFKnap(i - 1, j)
    else
        value ← max{ MFKnap(i - 1, j), Values[i] + MFKnap(i - 1, j - Weights[i]) }
    V[i, j] ← value
return V[i, j]
Example:
Apply the memory function method to the same instance of the knapsack problem: capacity W = 5, items (weight, value) = (2, $3), (3, $4), (4, $5), (5, $6).
Solution:
Using the memory function approach, we have:
Step | Computation
1 | Initially, all table entries (except row 0 and column 0) are initialized with the special 'null' symbol, here represented by -1:
V[i,j]  j=0   1   2   3   4   5
i=0      0    0   0   0   0   0
1        0   -1  -1  -1  -1  -1
2        0   -1  -1  -1  -1  -1
3        0   -1  -1  -1  -1  -1
4        0   -1  -1  -1  -1  -1
2 | The call MFKnap(4, 5) unwinds down to row 1: MFKnap(1, 5) = max{ MFKnap(0, 5), 3 + MFKnap(0, 3) } = max{ 0, 3 + 0 } = 3, so V[1, 5] = 3.
3 | Similarly, MFKnap(1, 2) = max{ MFKnap(0, 2), 3 + MFKnap(0, 0) } = max{ 0, 3 + 0 } = 3, so V[1, 2] = 3.
4 | MFKnap(2, 5) = max{ MFKnap(1, 5), 4 + MFKnap(1, 2) } = max{ 3, 4 + 3 } = 7, so V[2, 5] = 7.
5 | MFKnap(2, 1) = MFKnap(1, 1) = MFKnap(0, 1) = 0, so V[1, 1] = 0 and V[2, 1] = 0; then MFKnap(3, 5) = max{ MFKnap(2, 5), 5 + MFKnap(2, 1) } = max{ 7, 5 + 0 } = 7, so V[3, 5] = 7.
6 | MFKnap(4, 5) = max{ MFKnap(3, 5), 6 + MFKnap(3, 0) } = max{ 7, 6 + 0 } = 7, so V[4, 5] = 7.
The final table (only 7 of the 20 nontrivial entries were ever computed):
V[i,j]  j=0   1   2   3   4   5
i=0      0    0   0   0   0   0
1        0    0   3  -1  -1   3
2        0    0  -1  -1  -1   7
3        0   -1  -1  -1  -1   7
4        0   -1  -1  -1  -1   7
Conclusion:
Optimal subset: { item 1, item 2 }
Efficiency:
• Time efficiency is in the same class as the bottom-up algorithm: O(n × W), plus O(n + W) to find the composition of an optimal solution
• Only a constant factor is gained by using the memory function
• It is less space efficient than a space-efficient version of the bottom-up algorithm
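A Python sketch of the memory-function version (names are ours; -1 plays the role of 'null'):

def mf_knapsack(weights, values, W):
    """Top-down (memory function) 0/1 knapsack; computes only needed entries."""
    n = len(weights)
    V = [[-1] * (W + 1) for _ in range(n + 1)]
    for j in range(W + 1):
        V[0][j] = 0                       # row 0: no items
    for i in range(n + 1):
        V[i][0] = 0                       # column 0: zero capacity

    def mfk(i, j):
        if V[i][j] < 0:                   # entry not yet calculated
            if j < weights[i - 1]:
                value = mfk(i - 1, j)
            else:
                value = max(mfk(i - 1, j),
                            values[i - 1] + mfk(i - 1, j - weights[i - 1]))
            V[i][j] = value               # record the result in the table
        return V[i][j]

    return mfk(n, W)

print(mf_knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))  # 7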
UNIT-5
DECREASE-AND-CONQUER APPROACHES, SPACE-TIME TRADEOFFS
5.1 INTRODUCTION:
Decrease & conquer is a general algorithm design strategy based on exploiting the
relationship between a solution to a given instance of a problem and a solution to a
smaller instance of the same problem. The exploitation can be either top-down
(recursive) or bottom-up (non-recursive).
There are three major variations of decrease and conquer: decrease by a constant (usually by one), decrease by a constant factor (usually by half), and variable-size decrease.
5.2 INSERTION SORT
Description:
Insertion sort is an application of the decrease-by-one technique. It is a comparison-based sort in which the sorted array is built up one entry at a time.
Algorithm:
ALGORITHM InsertionSort(A[0 ... n-1])
//Sorts a given array by insertion sort
//Input: Array A[0...n-1]
//Output: Sorted array A[0...n-1] in ascending order
for i ← 1 to n-1 do
    v ← A[i]
    j ← i - 1
    while j ≥ 0 and A[j] > v do
        A[j+1] ← A[j]
        j ← j - 1
    A[j+1] ← v
Analysis:
• Input size: array size, n
• Basic operation: key comparison
• Best, worst, and average cases exist:
Best case: the input is an array already sorted in ascending order
Worst case: the input is an array sorted in descending order
• Let Cworst(n) be the number of key comparisons in the worst case. Then
Cworst(n) = Σ (i = 1 to n-1) i = n(n-1)/2 ∈ Θ(n²)
Example:
Sort the following list of elements using insertion sort:
89, 45, 68, 90, 29, 34, 17
89 45 68 90 29 34 17
45 89 68 90 29 34 17
45 68 89 90 29 34 17
45 68 89 90 29 34 17
29 45 68 89 90 34 17
29 34 45 68 89 90 17
17 29 34 45 68 89 90
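A Python sketch of the same algorithm (names and test list are ours):

def insertion_sort(a):
    """Decrease-by-one insertion sort, mirroring the pseudocode above."""
    for i in range(1, len(a)):
        v = a[i]
        j = i - 1
        while j >= 0 and a[j] > v:   # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = v                 # insert v into its place

nums = [89, 45, 68, 90, 29, 34, 17]
insertion_sort(nums)
print(nums)  # [17, 29, 34, 45, 68, 89, 90]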
5.3 DEPTH-FIRST SEARCH (DFS) AND BREADTH-FIRST SEARCH (BFS)
DFS and BFS are two graph-traversal algorithms; they follow the decrease-and-conquer approach (the decrease-by-one variation) to traverse the graph.
Algorithm:
ALGORITHM DFS(G)
//Implements DFS traversal of a given graph
//Input: Graph G = (V, E)
//Output: DFS tree (vertices marked with consecutive integers in the order they are first encountered)
mark each vertex in V with 0 as a mark of being 'unvisited'
count ← 0
for each vertex v in V do
    if v is marked with 0
        dfs(v)

dfs(v)
count ← count + 1
mark v with count
for each vertex w in V adjacent to v do
    if w is marked with 0
        dfs(w)
Example:
Starting at vertex A, traverse the following graph using the DFS traversal method (vertices A, B, C, D in the top row and E, F, G, H in the bottom row; adjacent vertices are visited in alphabetical order):
A B C D
E F G H
Solution:
Each vertex is annotated v(p, q), where p is the order in which it was pushed onto the stack (visited) and q the order in which it was popped (became a dead end).
Step | Action | Stack effect
1  | visit A | A(1)
2  | insert B | B(2)
3  | insert F | F(3)
4  | insert E | E(4)
5  | NO unvisited vertex adjacent to E; backtrack | delete E: E(4, 1)
6  | NO unvisited vertex adjacent to F; backtrack | delete F: F(3, 2)
7  | insert G | G(5)
8  | insert C | C(6)
9  | insert D | D(7)
10 | insert H | H(8)
11 | NO unvisited vertex adjacent to H; backtrack | delete H: H(8, 3)
12 | NO unvisited vertex adjacent to D; backtrack | delete D: D(7, 4)
13 | NO unvisited vertex adjacent to C; backtrack | delete C: C(6, 5)
14 | NO unvisited vertex adjacent to G; backtrack | delete G: G(5, 6)
15 | NO unvisited vertex adjacent to B; backtrack | delete B: B(2, 7)
16 | NO unvisited vertex adjacent to A; backtrack | delete A: A(1, 8)
The stack becomes empty and the algorithm stops, as all the nodes in the given graph are visited.
The resulting DFS tree has the edges A-B, B-F, F-E, B-G, G-C, C-D and D-H.
Applications of DFS:
• The two orderings (push and pop orders) are advantageous for various applications, such as topological sorting
• To check the connectivity of a graph (the number of times the stack becomes empty tells the number of components in the graph)
• To check whether a graph is acyclic (no back edges indicates no cycle)
• To find the articulation points of a graph
Efficiency:
• Depends on the graph representation:
o Adjacency matrix: Θ(n²)
o Adjacency list: Θ(n + e)
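A recursive Python sketch of DFS (the adjacency lists below are an assumption consistent with the trace above, not given explicitly in the notes):

def dfs(graph):
    """DFS traversal; returns vertices in the order they are first visited.
    graph: dict mapping vertex -> list of neighbors."""
    order, visited = [], set()

    def explore(v):
        visited.add(v)
        order.append(v)
        for w in graph[v]:
            if w not in visited:
                explore(w)

    for v in graph:            # restart handles disconnected graphs
        if v not in visited:
            explore(v)
    return order

g = {'A': ['B', 'E'], 'B': ['A', 'F', 'G'], 'C': ['D', 'G'], 'D': ['C', 'H'],
     'E': ['A', 'F'], 'F': ['B', 'E'], 'G': ['B', 'C', 'H'], 'H': ['D', 'G']}
print(dfs(g))  # ['A', 'B', 'F', 'E', 'G', 'C', 'D', 'H']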
Breadth-first search (BFS)
Description:
• BFS starts visiting vertices of a graph at an arbitrary vertex, marking it as visited.
• It then visits all unvisited neighbors of the last visited vertex before moving further away.
• Instead of a stack, BFS uses a queue.
• It is similar to level-by-level tree traversal.
• It 'redraws' the graph in tree-like fashion (with tree edges and cross edges for an undirected graph).
Algorithm:
ALGORITHM BFS (G)
//implements BFS traversal of a given graph
//i/p: Graph G = { V, E}
//o/p: BFS tree/forest
mark each vertex in V with 0 as a mark of being 'unvisited'
count ← 0
for each vertex v in V do
    if v is marked with 0
        bfs(v)

bfs(v)
count ← count + 1
mark v with count and initialize a queue with v
while the queue is NOT empty do
    for each vertex w in V adjacent to the front vertex v do
        if w is marked with 0
            count ← count + 1
            mark w with count
            add w to the queue
    remove vertex v from the front of the queue
Example:
Starting at vertex A, traverse the same graph as in the DFS example using the BFS traversal method:
A B C D
E F G H
Solution:
Each vertex is annotated v(p), where p is the order in which it enters (and therefore leaves) the queue.
Step | Action | Queue
1 | visit A; enqueue its unvisited neighbors B, E | B(2), E(3)
2 | dequeue B; enqueue its unvisited neighbors F, G | E(3), F(4), G(5)
3 | dequeue E: NO unvisited adjacent vertex | F(4), G(5)
4 | dequeue F: NO unvisited adjacent vertex | G(5)
5 | dequeue G; enqueue its unvisited neighbors C, H | C(6), H(7)
6 | dequeue C; enqueue its unvisited neighbor D | H(7), D(8)
7 | dequeue H, then D: NO unvisited adjacent vertices | empty
The queue becomes empty and all vertices have been visited, in the order A, B, E, F, G, C, H, D.
Applications of BFS:
• To check the connectivity of a graph (the number of times the queue becomes empty tells the number of components in the graph)
• To check whether a graph is acyclic (no cross edges indicates no cycle)
• To find a minimum-edge path in a graph
Efficiency:
• Depends on the graph representation:
o Adjacency matrix: Θ(n²)
o Adjacency list: Θ(n + e)
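A queue-based Python sketch of BFS (the adjacency lists are the same assumed graph as in the DFS sketch):

from collections import deque

def bfs(graph, start):
    """BFS traversal from start; returns vertices in visiting order."""
    visited = {start}
    order = [start]
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in visited:
                visited.add(w)
                order.append(w)
                queue.append(w)
    return order

g = {'A': ['B', 'E'], 'B': ['A', 'F', 'G'], 'C': ['D', 'G'], 'D': ['C', 'H'],
     'E': ['A', 'F'], 'F': ['B', 'E'], 'G': ['B', 'C', 'H'], 'H': ['D', 'G']}
print(bfs(g, 'A'))  # ['A', 'B', 'E', 'F', 'G', 'C', 'H', 'D']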
                          DFS                      BFS
Data structure            stack                    queue
No. of vertex orderings   2 orderings              1 ordering
Edge types (undirected)   tree edge, back edge     tree edge, cross edge
Applications              connectivity,            connectivity,
                          acyclicity,              acyclicity,
                          articulation points      minimum-edge paths
Efficiency (adj. matrix)  Θ(n²)                    Θ(n²)
Efficiency (adj. lists)   Θ(n + e)                 Θ(n + e)
5.4 TOPOLOGICAL SORTING
NOTE: There is no solution for topological sorting if there is a cycle in the digraph. [The digraph MUST be a DAG.]
DFS method:
• Perform a DFS traversal and note the order in which vertices become dead ends (are popped off the stack).
• Reversing this order yields the topological sorting.
Example:
Apply the DFS-based algorithm to solve the topological sorting problem for the digraph with vertices C1, ..., C5 and edges C1→C3, C2→C3, C3→C4, C3→C5, C4→C5.
Solution (starting the DFS at C1; each vertex is annotated v(p, q), where p is the push order and q the pop order):
Step | Action | Stack effect
1 | push C1 | C1(1)
2 | push C3 | C3(2)
3 | push C4 | C4(3)
4 | push C5 | C5(4)
5 | NO unvisited vertex adjacent to C5; pop C5 | C5(4, 1)
6 | NO unvisited vertex adjacent to C4; pop C4 | C4(3, 2)
7 | NO unvisited vertex adjacent to C3; pop C3 | C3(2, 3)
8 | NO unvisited vertex adjacent to C1; pop C1 | C1(1, 4)
The stack becomes empty, but there is still an unvisited node, so the DFS is restarted from an arbitrarily selected unvisited node:
9  | push C2 | C2(5)
10 | NO unvisited vertex adjacent to C2; pop C2 | C2(5, 5)
The stack becomes empty and no unvisited node is left, so the algorithm stops.
The popping-off order is: C5, C4, C3, C1, C2
The topologically sorted list (the reverse of the pop order) is: C2, C1, C3, C4, C5
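A Python sketch of the DFS-based method, run on the same digraph (the dict representation and names are ours):

def topological_sort(graph):
    """DFS-based topological sort: reverse of the order in which
    vertices become dead ends (are popped off the stack)."""
    visited, pop_order = set(), []

    def explore(v):
        visited.add(v)
        for w in graph[v]:
            if w not in visited:
                explore(w)
        pop_order.append(v)        # v becomes a dead end

    for v in graph:
        if v not in visited:
            explore(v)
    return pop_order[::-1]

g = {'C1': ['C3'], 'C2': ['C3'], 'C3': ['C4', 'C5'], 'C4': ['C5'], 'C5': []}
print(topological_sort(g))  # ['C2', 'C1', 'C3', 'C4', 'C5']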
Source removal method:
• Purely based on decrease & conquer.
• Repeatedly identify a source in the remaining digraph, i.e., a vertex with no incoming edges.
• Delete it along with all the edges outgoing from it.
Example:
Apply the source-removal algorithm to solve the topological sorting problem for the same digraph (edges C1→C3, C2→C3, C3→C4, C3→C5, C4→C5).
Solution:
Delete C1 (a source); vertices C2, C3, C4, C5 remain.
Delete C2 (now a source); C3 becomes a source.
Delete C3; then delete C4; then delete C5.
The order of deletion gives the topological sorting: C1, C2, C3, C4, C5.
5.5 SPACE-TIME TRADEOFFS:
Introduction
Two varieties of space-for-time algorithms:
• input enhancement — preprocess the input (or its part) to store some info to be used later in solving the problem; examples:
• counting sorts
• string searching algorithms
• prestructuring — preprocess the input to make accessing its elements easier; examples: hashing, indexing with B-trees.
5.7 INPUT ENHANCEMENT IN STRING MATCHING.
Horspool’s Algorithm
A simplified version of the Boyer-Moore algorithm: it preprocesses the pattern to generate a shift table that determines how much to shift the pattern when a mismatch occurs. It always makes a shift based on the text's character c aligned with the last character of the pattern, according to the shift table's entry for c.
This C implementation assumes the globals: pattern p[0..m-1], text t[0..n-1], their lengths m and n, the constant alphabetsize, and the table occ[], which stores for each character the index of its rightmost occurrence in p[0..m-2] (or -1):

void horspoolInitocc()
{
    int j;
    char a;

    for (j = 0; j < alphabetsize; j++)
        occ[j] = -1;               /* character absent from pattern: shift by m */

    for (j = 0; j < m - 1; j++)    /* rightmost occurrence of each character, */
    {                              /* excluding the last pattern position */
        a = p[j];
        occ[a] = j;
    }
}

void horspoolSearch()
{
    int i = 0, j;
    while (i <= n - m)
    {
        j = m - 1;
        while (j >= 0 && p[j] == t[i + j]) j--;  /* compare right to left */
        if (j < 0) report(i);                    /* full match at position i */
        i += m - 1;          /* character under the last pattern position ... */
        i -= occ[t[i]];      /* ... is aligned with its rightmost occurrence */
    }
}
Time complexity
• The average number of comparisons for one text character is between 1/σ and 2/(σ + 1), where σ is the size of the alphabet.
UNIT 6
LIMITATIONS OF ALGORITHMIC POWER AND COPING WITH THEM
6.1LOWER-BOUND ARGUMENTS
6.2DECISION TREES
6.3 P, NP, AND NP-COMPLETE PROBLEMS
Objectives
We now move into the third and final major theme for this course.
1. Tools for analyzing algorithms.
2. Design strategies for designing algorithms.
3. Identifying and coping with the limitations of algorithms.
Efficiency of an algorithm
• By establishing the asymptotic efficiency class
• The efficiency class for selection sort (quadratic) is lower. Does this mean that selection sort is a 'better' algorithm?
– That would be like comparing 'apples' to 'oranges'.
• By analyzing how efficient a particular algorithm is compared to other algorithms for the same problem
– It is desirable to know the best possible efficiency any algorithm solving this problem may have – establishing a lower bound
Lower bound: an estimate on a minimum amount of work needed to solve a given problem
Examples:
• number of comparisons needed to find the largest element in a set of n numbers
• number of comparisons needed to sort an array of size n
• number of comparisons necessary for searching in a sorted array
• number of multiplications needed to multiply two n-by-n matrices
Lower bound can be
– an exact count
– an efficiency class (Ω)
• Tight lower bound: there exists an algorithm with the same efficiency as the lower
bound
Problem                              Lower bound    Tightness
sorting                              Ω(n log n)     yes
searching in a sorted array          Ω(log n)       yes
element uniqueness                   Ω(n log n)     yes
n-digit integer multiplication       Ω(n)           unknown
multiplication of n-by-n matrices    Ω(n²)          unknown
Deriving a Lower Bound from Decision Trees
• How does such a tree help us find lower bounds?
– There must be at least one leaf for each correct output.
– The tree must be tall enough to have that many leaves.
• In a binary tree with l leaves and height h,
h ≥ ⌈log2 l⌉
Decision Tree and Sorting Algorithms
Decision Tree and Searching a Sorted Array
• Number of leaves (outcomes) = n successful + (n + 1) unsuccessful = 2n + 1
• Height of a ternary tree with 2n + 1 leaves ≥ ⌈log3(2n + 1)⌉
• This lower bound is NOT tight (the number of worst-case comparisons for binary search is ⌈log2(n + 1)⌉, and ⌈log3(2n + 1)⌉ ≤ ⌈log2(n + 1)⌉)
• Can we find a better lower bound, or an algorithm with better efficiency than binary search?
Decision-tree model
A decision tree can model the execution of any comparison sort:
• One tree for each input size n.
• View the algorithm as splitting whenever it compares two elements.
• The tree contains the comparisons along all possible instruction traces.
• The running time of the algorithm = the length of the path taken.
• Worst-case running time = height of tree.
Decision Tree Model
• In the insertion sort example, the decision tree reveals all possible key comparison sequences for 3 distinct numbers.
• There are exactly 3! = 6 possible output sequences.
• Different comparison sorts generate different decision trees.
• It should be clear that, in theory, we can draw a decision tree for ANY comparison sort algorithm.
• Given a particular input sequence, the path from the root to a leaf traces the particular key comparison sequence performed by that comparison sort on this input.
- The length of that path represents the number of key comparisons performed by the sorting algorithm.
• When we come to a leaf, the sorting algorithm has determined the sorted order.
• Notice that a correct sorting algorithm must be able to produce EVERY possible sorted order.
• Since there are n! possible sorted orders, there are n! leaves in the decision tree.
• Given a decision tree, the height of the tree represents the longest length of a root-to-leaf path.
• It follows that the height of the decision tree represents the largest number of key comparisons, which is the worst-case running time of the sorting algorithm.
'Any comparison-based sorting algorithm takes Ω(n log n) to sort a list of n distinct elements in the worst case.'
– any comparison sort ← modeled by a decision tree
– worst-case running time ← the height of the decision tree
'Any comparison-based sorting algorithm takes Ω(n log n) to sort a list of n distinct elements in the worst case.'
• We want to find a lower bound (Ω) on the height of a binary tree that has n! leaves: what is the minimum height of a binary tree with n! leaves?
• The minimum height is achieved when the binary tree is complete (recall the definition of a complete tree).
• Hence the minimum (lower-bound) height is ⌈log2(n!)⌉.
• log2(n!) = log2(n) + log2(n-1) + ... + log2(2) + log2(1)
          ≥ log2(n) + log2(n-1) + ... + log2(n/2)       (keeping the n/2 largest terms)
          ≥ (n/2) log2(n/2) = (n/2) log2(n) - n/2
So log2(n!) = Ω(n log n).
• It follows that the height of a binary tree with n! leaves is at least Ω(n log n), hence the worst-case running time is at least Ω(n log n).
• Putting everything together, we have:
'Any comparison-based sorting algorithm takes Ω(n log n) to sort a list of n distinct elements in the worst case.'
Adversary Arguments
An adversary argument is a method of proving a lower bound by playing the role of an adversary that makes the algorithm work the hardest by adjusting the input.
Example: 'guessing' a number between 1 and n with yes/no questions.
Adversary: puts the number in the larger of the two subsets generated by the last question.
Problem Types: Optimization and Decision
• Optimization problem: find a solution that maximizes or minimizes some objective
function
• Decision problem: answer yes/no to a question
Many problems have decision and optimization versions.
E.g.: traveling salesman problem
• optimization: find Hamiltonian cycle of minimum length
• decision: is there a Hamiltonian cycle of length ≤ m?
Decision problems are more convenient for formal investigation of their complexity.
6.3 CLASS P
P: the class of decision problems that are solvable in O(p(n)) time, where p(n) is a polynomial of the problem's input size n
Examples:
• searching
• element uniqueness
• graph connectivity
• graph acyclicity
• primality testing (finally proved in 2002)
6.4 CLASS NP
NP (nondeterministic polynomial): class of decision problems whose proposed solutions can be
verified in polynomial time = solvable by a nondeterministic polynomial algorithm
A nondeterministic polynomial algorithm is an abstract two-stage procedure that:
• generates a random string purported to solve the problem
• checks whether this solution is correct in polynomial time
By definition, it solves the problem if it is capable of generating and verifying a solution on one of its tries.
Why this definition?
• It led to the development of the rich theory called 'computational complexity'.
What problems are in NP?
• Hamiltonian circuit existence
• Partition problem: Is it possible to partition a set of n integers into two disjoint subsets
with the same sum?
• Decision versions of TSP, knapsack problem, graph coloring, and many other
combinatorial optimization problems. (Few exceptions include: MST, shortest paths)
• All the problems in P can also be solved in this manner (but no guessing is necessary), so
we have:
P ⊆ NP
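To make "verifiable in polynomial time" concrete, here is a minimal sketch (illustrative, not from the notes) of a verifier for the partition problem listed above; the certificate is the set of indices forming one of the two subsets:

def verify_partition(numbers, subset_indices):
    """Check a proposed solution to the partition problem in O(n) time."""
    chosen = set(subset_indices)
    s1 = sum(x for i, x in enumerate(numbers) if i in chosen)
    s2 = sum(x for i, x in enumerate(numbers) if i not in chosen)
    return s1 == s2

# Example: {1, 5, 11, 5} splits into {11} and {1, 5, 5}, both summing to 11.
print(verify_partition([1, 5, 11, 5], [2]))   # True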
• Big question: P = NP ?
P = NP ?
• One of the most important unsolved problems in computer science is whether or not
P = NP.
– If P = NP, then a huge number of problems currently believed to be very
difficult would turn out to have efficient algorithms.
– If P≠NP, then those problems definitely do not have polynomial time solutions.
• Most computer scientists suspect that P ≠ NP. These suspicions are based partly on the
idea of NP-completeness.
[Figure: the class of NP problems, showing a known NP-complete problem reduced in polynomial time to a candidate for NP-completeness]
• e.g., the satisfiability problem (SAT): given a Boolean formula
(x1 ∨ x2 ∨ x3 ∨ x4) ∧ (x5 ∨ x6 ∨ x7) ∧ x8 ∧ x9
is it possible to assign values to the inputs x1...x9 so that the formula evaluates to TRUE?
- If the answer is YES with a proof (i.e., an assignment of input values), then we can check the
proof in polynomial time (SAT is in NP)
- We may not be able to check a NO answer in polynomial time (nobody really knows)
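As a sketch of this polynomial-time check (not the notes' notation): represent a CNF formula as a list of clauses, each clause a list of signed variable indices, and verify a proposed assignment with one linear scan:

def verify_cnf(clauses, assignment):
    """clauses: each literal k > 0 means x_k, k < 0 means NOT x_k.
    assignment: dict mapping variable index to True/False.
    Runs in time linear in the size of the formula."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or x2) and (not x1 or x3)
clauses = [[1, 2], [-1, 3]]
print(verify_cnf(clauses, {1: True, 2: False, 3: True}))   # True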
• NP-hard
- A problem is NP-hard iff a polynomial-time algorithm for it implies a polynomial-
time algorithm for every problem in NP
- NP-hard problems are at least as hard as NP problems
• NP-complete
- A problem is NP-complete if it is NP-hard and is an element of NP (NP-easy)
• Relationship between decision problems and optimization problems
NP-Complete Problems
• Is an NP problem
• Is at least as difficult as any NP problem (every problem in NP is reducible to it)
• More formally, a decision problem C is NP-complete if:
– C is in NP
– a known NP-hard (or NP-complete) problem ≤p C
– Thus a proof must show these two being satisfied
Examples
• Longest path problem: (similar to the shortest path problem, which requires only
polynomial time) suspected to require exponential time, since there is no known polynomial
algorithm.
• Hamiltonian cycle problem: find a cycle that traverses all vertices exactly once.
Reduction
• P1: a problem of unknown difficulty (easy/hard?)
• P2: known to be difficult
If we can easily solve P2 using P1 as a subroutine, then P1 is difficult too.
We must be able to create the inputs for P1 (from an instance of P2) in polynomial time.
* P1 is definitely difficult, because otherwise we could solve P2 in polynomial time; the
hardness cannot come from the mapping, since the mapping is known to be
polynomial.
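As a concrete illustration (a classic example, not from the notes): a vertex set is independent in a graph G exactly when it is a clique in the complement of G, and complementing the graph is the polynomial-time mapping:

from itertools import combinations

def complement(n, edges):
    """Map an independent-set instance to a clique instance (polynomial time)."""
    all_pairs = set(combinations(range(n), 2))
    return all_pairs - {tuple(sorted(e)) for e in edges}

def is_clique(vertices, edges):
    """Verify, in polynomial time, that 'vertices' forms a clique."""
    return all(tuple(sorted(p)) in edges for p in combinations(vertices, 2))

# {0, 2} is independent in the path 0-1-2, i.e., a clique in its complement.
edges = [(0, 1), (1, 2)]
print(is_clique([0, 2], complement(3, edges)))   # True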
Decision Problems
Represent problem as a decision with a boolean output
– Easier to analyze and to compare with other problems
– Hence problems are usually converted to decision problems.
P = {all decision problems that can be solved in polynomial time}
NP = {all decision problems where a solution is proposed, can be verified in polynomial time}
NP-complete: the subset of NP containing the "hardest" problems
Alternative Representation
• Every element p in P1 must map to an element q in P2 such that p is a yes-instance (decision
problem) if and only if q is also a yes-instance.
• The mapping must be defined for the true elements of P1 and P2 as well as for the false elements.
• Ensure that mapping can be done in polynomial time.
• *Note: P1 is unknown, P2 is difficult
Cook’s Theorem
• Stephen Cook (Turing Award winner) identified the first NP-complete problem, the satisfiability problem SAT (often presented in its 3SAT form).
Basically a problem from Logic.
Generally described using Boolean formula.
A Boolean formula involves AND, OR, NOT operators and some variables.
Ex: (x or y) and (x or z), where x, y, z are boolean variables.
Problem definition – Given a Boolean formula of m clauses, each containing 'n'
Boolean variables, can you assign values to these variables so that the
formula evaluates to true?
Boolean formula: (x ∨ y ∨ ¬z) ∧ (x ∨ y ∨ ¬z)
The brute-force approach tries all possible assignments, an exponential set of
candidate solutions; the problem is NP-complete.
• Having one definite NP-complete problem means others can also be proven NP-complete,
using reduction.
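The exponential "try all assignments" search mentioned above can be sketched with the same clause encoding as the verifier sketch earlier (illustrative only):

from itertools import product

def brute_force_sat(clauses, num_vars):
    """Try all 2^n assignments; exponential time, which is why SAT looks hard."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assignment          # a satisfying assignment, if one exists
    return None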
Unit 7
COPING WITH LIMITATIONS OF ALGORITHMIC POWER
7.1 Backtracking: n-Queens Problem,
7.2 Hamiltonian Circuit Problem,
7.3 Subset-Sum Problem.
7.4 Branch-and-Bound: Assignment Problem,
7.5 Knapsack Problem,
7.6 Traveling Salesperson Problem.
7.7 Approximation Algorithms for NP-Hard Problems – Traveling Salesperson Problem,
Knapsack Problem
Introduction
Tackling Difficult Combinatorial Problems
• There are two principal approaches to tackling difficult combinatorial problems (NP-hard
problems):
• Use a strategy that guarantees solving the problem exactly but doesn't guarantee finding a
solution in polynomial time
• Use an approximation algorithm that can find an approximate (sub-optimal) solution in
polynomial time
7.1 Backtracking
• Suppose you have to make a series of decisions, among various choices, where
– You don't have enough information to know what to choose
– Each decision leads to a new set of choices
– Some sequence of choices (possibly more than one) may be a solution to your
problem
• Backtracking is a methodical way of trying out various sequences of decisions, until you
find one that "works"
Backtracking : A Scenario
Example:
n-Queens Problem
Place n queens on an n-by-n chess board so that no two of them are in the same row, column, or
diagonal
State-Space Tree of the 4-Queens Problem
7.1.1 N-Queens Problem:
• The object is to place queens on a chess board in such a way that no queen can capture
another one in a single move
– Recall that a queen can move horizontally, vertically, or diagonally an infinite distance
• This implies that no two queens can be on the same row, column, or diagonal
– We usually want to know how many different placements there are
4-Queens
• Let's take a look at the simple problem of placing 4 queens on a 4x4 board
• The brute-force solution is to place the first queen, then the second, third, and fourth
– After all are placed we determine if they are placed legally
• There are 16 spots for the first queen, 15 for the second, etc.
– Leading to 16*15*14*13 = 43,680 different combinations
• Obviously this isn't a good way to solve the problem
• First let's use the fact that no two queens can be in the same column to help us
– That means we get to place a queen in each column
• So we can place the first queen into the first column, the second into the second, etc.
• This cuts down on the amount of work
– Now there are 4 spots for the first queen, 4 spots for the second, etc.
• 4*4*4*4 = 256 different combinations
• However, we can still do better, because as we place each queen we can look at the
previously placed queens to make sure the new queen is not in the same row or
diagonal as any of them
• Then we could use a greedy-like strategy to select the next valid position for each column.
This is very much like solving a maze:
– As you walk through the maze you have to make a series of choices
– If one of your choices leads to a dead end, you need to back up to the last choice
you made and take a different route
• That is, you need to change one of your earlier selections
– Eventually you will find your way out of the maze
– The state-space tree is a tree of all the states that the problem can be in
• We start with an empty board state at the root and try to work our way down to a
leaf node
– Leaf nodes are completed boards
Eight Queen Problem: Implementation
• Define an 8 by 8 array of 1s and 0s to represent the chessboard
• The array is initialized to 1s; when a queen is placed at position (c, r), board[r][c] is set
to zero
• Note that the search space is very large: 8^8 = 16,777,216 possibilities.
• Is there a way to reduce the search space? Yes: search pruning.
• We know that for queens:
each row will have exactly one queen
each column will have exactly one queen
each diagonal will have at most one queen
• This will help us to model the chessboard not as a 2-D array, but as a set of rows,
columns and diagonals.
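Following the modeling just described (one queen per column; occupied rows and the two diagonal directions tracked as sets), a minimal backtracking sketch (illustrative, not the notes' code):

def solve_queens(n, col=0, rows=None, diag1=None, diag2=None, placement=None):
    """Place one queen per column; prune rows/diagonals already in use."""
    if rows is None:
        rows, diag1, diag2, placement = set(), set(), set(), []
    if col == n:
        return list(placement)               # a complete legal board
    for row in range(n):
        if row in rows or (row + col) in diag1 or (row - col) in diag2:
            continue                         # square is attacked: prune
        rows.add(row); diag1.add(row + col); diag2.add(row - col)
        placement.append(row)
        result = solve_queens(n, col + 1, rows, diag1, diag2, placement)
        if result:
            return result
        placement.pop()                      # backtrack
        rows.discard(row); diag1.discard(row + col); diag2.discard(row - col)
    return None

print(solve_queens(8))    # e.g. [0, 4, 7, 5, 2, 6, 1, 3], one row per column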
Background
• NP-complete problem:
– Most difficult problems in NP (nondeterministic polynomial time)
• A decision problem D is NP-complete if it is complete for NP, meaning that:
– it is in NP
– it is NP-hard (every other problem in NP is reducible to it.)
• As they grow large, we are not able to solve them in a reasonable time (polynomial time)
Alternative Definition
• NP problems such as Hamiltonian Cycle:
– Not known to be solvable in polynomial time
– Given a solution, it is easy to verify in polynomial time
[Figure: a graph on vertices a, b, c, d, e, f for the Hamiltonian-cycle example, followed by a pruned state-space tree for the subset-sum instance w = {3, 5, 6, 7}, S = 15: each level branches on "with"/"without" the next weight, nodes show the running sum, X marks pruned nodes (e.g., 0+13 < 15, 14+7 > 15), and the path with 3, with 5, without 6, with 7 reaches the solution 15]
7.3 SUBSET-SUM PROBLEM
• Problem: Given n positive integers w1, ... wn and a positive integer S. Find all subsets
of w1, ... wn that sum to S.
• Example:
n=3, S=6, and w1=2, w2=4, w3=6
• Solutions:
{2,4} and {6}
• We will assume a binary state space tree.
• The nodes at depth 1 are for including (yes, no) item 1, the nodes at depth 2 are for
item 2, etc.
• The left branch includes wi, and the right branch excludes wi.
• The nodes contain the sum of the weights included so far
• A node is nonpromising if it cannot lead to a feasible solution; otherwise it is promising
• Main idea: backtracking consists of doing a DFS of the state space tree, checking
whether each node is promising, and if the node is nonpromising, backtracking to the
node's parent
• The state space tree consisting of expanded nodes only is called the pruned state space
tree
• The following slide shows the pruned state space tree for the sum of subsets example
• There are only 15 nodes in the pruned state space tree
• The full state space tree has 31 nodes
Backtracking algorithm
void checknode (node v) {
    node u;
    if (promising(v)) {
        if (aSolutionAt(v))
            write the solution
        else                       // expand the node
            for (each child u of v)
                checknode(u);
    }
}
Checknode
• Checknode uses the functions:
– promising(v) which checks that the partial solution represented by v can lead to
the required solution
– aSolutionAt(v) which checks whether the partial solution represented by node v
solves the problem.
Sum of subsets – when is a node "promising"?
• Consider a node at depth i
• weightSoFar = weight of the node, i.e., the sum of the numbers included in the partial
solution the node represents
• totalPossibleLeft = weight of the remaining items i+1 to n (for a node at depth i)
• A node at depth i is nonpromising
if (weightSoFar + totalPossibleLeft < S)
or (weightSoFar + w[i+1] > S)
• To be able to use this "promising function" the wi must be sorted in non-decreasing order
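Putting the promising test together with the DFS just described, a minimal sketch (illustrative; w is assumed sorted in non-decreasing order, as required above):

def subset_sums(w, S):
    """Backtracking for subset-sum; returns all subsets of w summing to S."""
    w = sorted(w)
    solutions = []

    def dfs(i, weight_so_far, remaining, chosen):
        if weight_so_far == S:
            solutions.append(list(chosen))
            return
        # nonpromising: cannot reach S, or even the smallest next item overshoots
        if weight_so_far + remaining < S:
            return
        if i == len(w) or weight_so_far + w[i] > S:
            return
        chosen.append(w[i])                  # left branch: include w[i]
        dfs(i + 1, weight_so_far + w[i], remaining - w[i], chosen)
        chosen.pop()                         # right branch: exclude w[i]
        dfs(i + 1, weight_so_far, remaining - w[i], chosen)

    dfs(0, 0, sum(w), [])
    return solutions

print(subset_sums([2, 4, 6], 6))    # [[2, 4], [6]], matching the example above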
7.4 Branch and Bound Searching Strategies
Bounding
• A bound on a node is a guarantee that any solution obtained from expanding the node
will be:
– Greater than some number (lower bound)
– Or less than some number (upper bound)
• If we are looking for a minimal optimum, as we are in weighted graph coloring, then we
need a lower bound
– For example, if the best solution we have found so far has a cost of 12 and the
lower bound on a node is 15, then there is no point in expanding the node
• The node cannot lead to anything better than 15
• We can compute a lower bound for weighted graph coloring in the following way:
– The actual cost of getting to the node
– Plus a bound on the future cost
• Min weight color * number of nodes still to color
– That is, the future cost cannot be any better than this
• Recall that we could either perform a depth-first or a breadth-first search
– Without bounding, it didn't matter which one we used because we had to expand
the entire tree to find the optimal solution
– Does it matter with bounding?
• Hint: think about when you can prune via bounding
• We prune (via bounding) when:
(currentBestSolutionCost <= nodeBound)
• This tells us that we get more pruning if:
– The currentBestSolution is low
– And the nodeBound is high
• So we want to find a low solution quickly and we want the highest possible lower bound
– One has to factor in the extra computation cost of computing higher lower bounds
vs. the expected pruning savings
The Assignment Problem Example
• Ballston Electronics manufactures small electrical devices.
• Products are manufactured on five different assembly lines (1,2,3,4,5).
• When manufacturing is finished, products are transported from the assembly lines to one
of the five different inspection areas (A,B,C,D,E).
• Transporting products from five assembly lines to five inspection areas requires different
times (in minutes)
Under the current arrangement, the assignment of inspection areas to the assembly lines is 1 to A, 2
to B, 3 to C, 4 to D, and 5 to E. This arrangement requires 10+7+12+17+19 = 65 man-minutes.
• Management would like to determine whether some other assignment of production
lines to inspection areas may result in less cost.
• This is a typical assignment problem: n = 5, and each assembly line is assigned to
exactly one inspection area.
• It would be easy to solve such a problem when n is 5, but when n is large the number of
possible alternative solutions is n!, and this becomes a hard problem.
• The assignment problem can be formulated either as a linear programming model or
as a transportation model.
• However, an algorithm known as the Hungarian method has proven to be a quick and
efficient way to solve such problems.
Lower bound: any solution to this problem will have total cost
at least 2 + 3 + 1 + 4 (the sum of the smallest elements in each row; or 5 + 2 + 1 + 4 using column minima)
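This bound is mechanical to compute. The cost matrix below is a placeholder consistent with the quoted sums, since the notes' actual matrix is in a figure that is not reproduced here:

# Hypothetical 4x4 cost matrix consistent with the bounds quoted above.
C = [[9, 2, 7, 8],
     [6, 4, 3, 7],
     [5, 8, 1, 8],
     [7, 6, 9, 4]]

row_bound = sum(min(row) for row in C)        # 2 + 3 + 1 + 4 = 10
col_bound = sum(min(col) for col in zip(*C))  # 5 + 2 + 1 + 4 = 12
print(row_bound, col_bound)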
Example: First two levels of the state-space tree
Example: Complete state-space tree
• Now we need to add bounding to this problem
– It is a minimization problem so we need to find a lower bound
• We can use:
– The current cost of getting to the node plus
– An underestimate of the future cost of going through the rest of the cities
• The obvious choice is to find the minimum weight edge in the graph and
multiply that edge weight by the number of remaining nodes to travel
through
• As an example assume we have the given adjacency matrix
• If we start at node A and have just traveled to node B, then we need to compute the
bound for node B
– Cost 14 to get from A to B
– The minimum weight in the matrix is 2; times the 4 more legs needed to get back to node A gives 8
– For a grand total of 14 + 8 = 22
     A   B   C   D   E
A    0  14   4  10  20
B   14   0   7   8   7
C    4   5   0   7  16
D   11   7   9   0   2
E   18   7  17   4   0
Recall that if we can make the lower bound higher then we will get more pruning
• Note that in order to complete the tour we need to leave node B, C, D, and E
– The min edge we can take leaving B is min(14, 7, 8, 7) = 7
– Similarly, C=4, D=2, E=4
This implies that at best the future underestimate can be 7+4+2+4 = 17
• 17 + current cost of 14 = 31
– This is a much tighter bound than 8 + 14 = 22
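The bound computation for node B can be reproduced directly from the matrix above (a sketch using the same numbers):

# Adjacency matrix from the text; vertices A..E in order.
W = [[0, 14, 4, 10, 20],
     [14, 0, 7, 8, 7],
     [4, 5, 0, 7, 16],
     [11, 7, 9, 0, 2],
     [18, 7, 17, 4, 0]]

def leaving_min(v):
    """Cheapest edge leaving vertex v (ignoring the zero diagonal entry)."""
    return min(w for u, w in enumerate(W[v]) if u != v)

# Tour so far: A -> B, cost 14; vertices B, C, D, E must still be left.
cost_so_far = W[0][1]
future = sum(leaving_min(v) for v in [1, 2, 3, 4])   # 7 + 4 + 2 + 4 = 17
print(cost_so_far + future)                          # bound for node B = 31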
How to find the upper bound?
• Ans: by quickly finding a feasible solution in a greedy manner: starting from the smallest
available i, scan towards the largest i's until M is exceeded. The upper bound can then be
calculated.
The 0/1 knapsack problem
• E.g. n = 6, M = 34
i    1   2   3   4   5   6
Pi   6  10   4   5   6   4
Wi  10  19   8  10  12   8
(Pi/Wi ≥ Pi+1/Wi+1)
• A feasible solution: X1 = 1, X2 = 1, X3 = 0, X4 = 0, X5 = 0, X6 = 0
-(P1+P2) = -16 (upper bound)
Any solution higher than -16 cannot be an optimal solution.
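In code, the greedy scan reads as follows (a sketch; items are assumed pre-sorted by Pi/Wi, as stated above):

P = [6, 10, 4, 5, 6, 4]      # profits, with P[i]/W[i] non-increasing
W = [10, 19, 8, 10, 12, 8]   # weights
M = 34                       # capacity

def greedy_upper_bound(P, W, M):
    """Take items in ratio order until the next one no longer fits."""
    profit, weight = 0, 0
    for p, w in zip(P, W):
        if weight + w > M:
            break
        profit += p
        weight += w
    return -profit           # negated, matching the minimization form above

print(greedy_upper_bound(P, W, M))   # -16 (items 1 and 2: weight 29 <= 34)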
Approximation Approach
Apply a fast (i.e., a polynomial-time) approximation algorithm to get a solution that is not
necessarily optimal but hopefully close to it
Accuracy measures:
accuracy ratio of an approximate solution sa
r(sa) = f(sa) / f(s*) for minimization problems
r(sa) = f(s*) / f(sa) for maximization problems
where f(sa) and f(s*) are the values of the objective function f for the approximate solution sa and
the actual optimal solution s*
performance ratio of the algorithm A: the lowest upper bound of r(sa) over all instances
Multifragment-Heuristic Algorithm
Stage 1: Sort the edges in nondecreasing order of weights. Initialize the set of tour edges to be
constructed to the empty set
Stage 2: Add the next edge on the sorted list to the tour, skipping those whose addition would have
created a vertex of degree 3 or a cycle of length less than n. Repeat this step until a tour of
length n is obtained
Note: RA = ∞, but this algorithm tends to produce better tours than the nearest-neighbor
algorithm
Twice-Around-the-Tree Algorithm
Stage 1: Construct a minimum spanning tree of the graph (e.g., by Prim's or Kruskal's algorithm)
Stage 2: Starting at an arbitrary vertex, create a path that goes twice around the tree and returns
to the same vertex
Stage 3: Create a tour from the circuit constructed in Stage 2 by making shortcuts to avoid
visiting intermediate vertices more than once
Note: RA = ∞ for general instances, but this algorithm tends to produce better tours than the
nearest-neighbor algorithm
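A compact sketch of the twice-around-the-tree idea (Prim's MST, then a preorder walk as the shortcut tour); the distance matrix used at the end is a hypothetical symmetric instance, not from the notes:

def twice_around_the_tree(dist):
    """Approximate TSP tour: MST by Prim, then preorder walk with shortcuts."""
    n = len(dist)
    in_tree, parent = {0}, {}
    best = {v: (dist[0][v], 0) for v in range(1, n)}
    while len(in_tree) < n:                 # Prim: grow MST from vertex 0
        v = min(best, key=lambda u: best[u][0])
        parent[v] = best[v][1]
        in_tree.add(v)
        del best[v]
        for u in best:
            if dist[v][u] < best[u][0]:
                best[u] = (dist[v][u], v)
    children = {v: [] for v in range(n)}
    for v, p in parent.items():
        children[p].append(v)
    tour, stack = [], [0]
    while stack:                            # preorder walk = shortcut circuit
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour + [0]

d = [[0, 3, 1, 6], [3, 0, 5, 4], [1, 5, 0, 2], [6, 4, 2, 0]]
print(twice_around_the_tree(d))   # e.g. [0, 2, 3, 1, 0]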
Christofides Algorithm
Stage 1: Construct a minimum spanning tree of the graph
Stage 2: Add edges of a minimum-weight matching of all the odd vertices in the minimum
spanning tree
Stage 3: Find an Eulerian circuit of the multigraph obtained in Stage 2
Stage 4: Create a tour from the circuit constructed in Stage 3 by making shortcuts to avoid visiting
intermediate vertices more than once
RA = ∞ for general instances, but it tends to produce better tours than the
twice-around-the-minimum-tree algorithm
Euclidean Instances
Theorem If P ≠ NP, there exists no approximation algorithm for TSP with a finite performance
ratio.
Definition An instance of TSP is called Euclidean, if its distances satisfy two conditions:
1. symmetry d[i, j] = d[j, i] for any pair of cities i and j
2. triangle inequality d[i, j] ≤ d[i, k] + d[k, j] for any cities i, j, k
Accuracy (of the greedy algorithm for the knapsack problem)
• RA is unbounded (e.g., n = 2, C = m, w1 = 1, v1 = 2, w2 = m, v2 = m)
• yields exact solutions for the continuous version
Accuracy (first-fit bin packing)
• The number of extra bins never exceeds the optimal by more than 70% (i.e., RA ≤ 1.7)
• Empirical average-case behavior is much better. (In one experiment with 128,000 bins,
the relative error was found to be no more than 2%.)
Accuracy (first-fit decreasing bin packing)
• The number of extra bins never exceeds the optimal by more than 50% (i.e., RA ≤ 1.5)
• Empirical average-case behavior is much better, too
UNIT-8
PRAM ALGORITHMS
8.1 Introduction,
8.2 Computational Model,
8.3 Parallel Algorithms for Prefix Computation,
8.4 List Ranking, and Graph Problems,
8.1 INTRODUCTION
In this section we present a few basic facts about parallel processing in general. One very basic fact that
applies to parallel computation, regardless of how it is implemented, is the following:
Suppose the fastest sequential algorithm for doing a computation with parameter n has execution
time T(n). Then the fastest parallel algorithm with m processors (each comparable to that of
the sequential computer) has execution time ≥ T(n)/m. The idea here is: if you could find a faster
parallel algorithm, you could execute it sequentially by having a sequential computer simulate
parallelism and get a faster sequential algorithm. This would contradict the fact that the given
sequential algorithm is the fastest possible. We are making the assumption that the cost of
simulating parallel algorithms by sequential ones is negligible. This claim is called the "Principle
of Unitary Speedup". As usual, the parameter n represents the relative size of the instance of the
problem being considered. For instance, if the problem was that of sorting, n might be the
number of items to be sorted and T(n) would be O(n lg n) for a sorting algorithm based upon
comparisons.
As simple as this claim is, it is a bit controversial. It makes the tacit assumption that the
algorithm in question is deterministic. In other words, the algorithm is like the usual idea of a
computer program: it performs calculations and makes decisions based on the results of these
calculations.
There is an interesting area of the theory of algorithms in which this statement is not necessarily true,
namely the theory of randomized algorithms. Here, a solution to a problem may involve
making random "guesses" at some stage of the calculation. In this case, the parallel algorithm
using m processors can run faster than m times the speed of the sequential algorithm ("super-unitary
speedup"). This phenomenon occurs in certain problems in which random search is used, and
most guesses at a solution quickly lead to a valid solution, but there are a few guesses that
execute for a long time without producing any concrete results.
The expected execution-time of a single (sequential) attempt to find a solution is the
average of all of these times, or 10.99 time-units.
There exist several different models of program control. Flynn listed several basic
schemes:
SIMD: Single Instruction, Multiple Data. In this model the processors are controlled by a
program whose instructions are applied to all of them simultaneously (with certain
qualifications). Assume that each of the processors has a unique number that is "known" to the
processor, in the sense that instructions to the parallel computer can refer to processor numbers.
An example of this type of machine is:
[Footnote 1: Recall that the expected running time of an algorithm like the one in the example is
the average of the actual running times, weighted by the probabilities that these running times occur.]
MIMD: Multiple Instruction, Multiple Data. In this model processors can each have independent
programs that are read from the common RAM. This model is widely used in several settings.
The data-movement and communications problems that occur in all parallel computation are
more significant here, because the instructions to the processors as well as the data must be
passed between the common memory and the processors. Due to these data-movement problems,
commercial MIMD computers tend to have a relatively small number of processors (≈ 20). In
general, it is easier to program a MIMD machine if one is only interested in a very limited form
of parallelism, namely the formation of processes. Conventional operating systems like UNIX
form separate processes to carry out many functions, and these processes really execute in
parallel on commercially available MIMD machines. It follows that, with one of these MIMD
machines, one can reap some of the benefits of parallelism without explicitly doing any parallel
programming. For this reason, most of the parallel computers in commercial use today tend to be
MIMD machines, run as general-purpose computers.
On the surface, it would appear that MIMD machines are strictly more powerful than SIMD machines
with the same number of processors. Interestingly enough, this is not the case; it turns out that SIMD
machines are more suited to performing computations with a very regular structure. MIMD machines
are not as suited to solving such problems because their processors must be precisely synchronized to
implement certain algorithms. Pure MIMD machines have no hardware features to guarantee
synchronization of processors. In general, it is not enough to simply load multiple copies of a
program into all of the processors and to start all of these copies at the same time. In fact many
such computers have hardware features that tend to destroy synchronization, once it
has been achieved. For instance, the manner in which memory is accessed in the Sequent
Symmetry series generally causes processes to run at different rates even if they are
synchronized at some time. Many Sequents even have processors that run at different clock
rates.
8.2.1 MODELS OF PARALLEL COMPUTATION
Three other terms that fill out this list are:
SISD Single Instruction, Single Data. This is nothing but conventional sequential computing.
MISD: Multiple Instruction, Single Data. This case is often compared to computation that uses systolic
arrays. These are arrays of processors that are developed to solve specific problems, usually on a
single VLSI chip. A clock coordinates the data-movement operations of all of the processors, and
output from some processors is pipelined into other processors. The term "systolic" comes from an
analogy with an animal's circulatory system; the data in the systolic array play the part of the blood in
the circulatory system. In a manner of speaking, one can think of the different processors in a
systolic array as constituting "multiple processors" that work on one set of (pipelined) data.
SIMD-MIMD Hybrids: This is a new category of parallel computer that is becoming very
significant. These machines are also called SAMD machines (Synchronous-Asynchronous
Multiple Data). The first announced commercial SAMD computer is the new Connection
Machine, the CM-5.
This is essentially a MIMD computer with hardware features to allow:
• precise synchronization of processes to be easily achieved;
• synchronization of processors to be maintained with little or no overhead, once it has been
achieved (assuming that the processors are all executing the same instructions in
corresponding program steps).
It differs from pure MIMD machines in that the hardware maintains a uniform "heartbeat"
throughout the machine, so that when the same program is run on all processors, and all copies of
this program are started at the same time, it is possible for the execution of all copies to be kept in
lock-step with essentially no overhead. Such computers allow efficient execution of MIMD and
SIMD programs.
8.3 PARALLEL ALGORITHMS FOR PREFIX COMPUTATION
Theorem: An algorithm that runs in T time on the p-processor priority CRCW PRAM can be
simulated by an EREW PRAM to run in O(T log p) time. A concurrent read or write of a
p-processor CRCW PRAM can be implemented on a p-processor EREW PRAM to execute in
O(log p) time.
Let Q1, ..., Qp be the CRCW processors, such that Qi has to read (write) M[ji].
Let P1, ..., Pp be the EREW processors, and let M1, ..., Mp denote shared memory locations for
special use. Pi stores <ji, i> in Mi.
Sort the pairs in lexicographically non-decreasing order in O(log p) time using the EREW merge-sort
algorithm.
Pick a representative from each block of pairs that have the same first component, in O(1) time.
Representative Pi reads (writes) from M[k] with <k, _> in Mi, and copies the data to each M in the
block in O(log p) time using the EREW segmented parallel prefix algorithm.
Pi reads data from Mi.
Exercise: For an n x n matrix M with non-negative integer coefficients, define M̄ and give an
algorithm for computing M̄. Prove that M̄ can be computed from the n x n matrix M in O(log n)
time using CRCW PRAM processors, for any fixed ε > 0.
if i < n/2^h then
    C'[i,j,l] := C'[i,j,2l-1] + C'[i,j,2l]
if l = 1 then
    C[i,j] := C'[i,j,1]
End
Divide-and-conquer:
1. Split S into S1 and S2 and recursively compute the UCH of S1 and S2
2. Combine UCH(S1) and UCH(S2) by computing the upper common tangent in O(1)
time to form UCH(S). Repeat to compute the LCH.
Parallel time (assuming p = O(n) processors):
T(n) = T(n/2) + O(1), which gives
T(n) = O(log n)
Some Important Questions
Unit 1
1. Give an algorithm for selection sort. If C(n) denotes the number of times the basic
operation is executed (n denotes input size), obtain an expression for C(n).
The algorithm's basic operation is the key comparison A[j] < A[min]. The number of
times it is executed is C(n) = Σ(i=0..n-2) Σ(j=i+1..n-1) 1 = n(n-1)/2.
Thus, selection sort is a Θ(n²) algorithm on all inputs. The number of key swaps is
only Θ(n) or, more precisely, n-1 (one for each repetition of the i loop). This property
distinguishes selection sort positively from many other sorting algorithms.
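Since the full pseudocode is not reproduced above, here is a runnable sketch that also counts the key comparisons:

def selection_sort(A):
    """Repeatedly select the smallest remaining element."""
    n = len(A)
    comparisons = 0
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):
            comparisons += 1                    # basic operation: A[j] < A[min]
            if A[j] < A[smallest]:
                smallest = j
        A[i], A[smallest] = A[smallest], A[i]   # one swap per pass: n-1 total
    return A, comparisons                       # comparisons = n(n-1)/2

print(selection_sort([89, 45, 68, 90, 29, 34, 17]))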
2. With the help of a flowchart, explain the various stages of algorithm and design
process
4. Finiteness: if we trace out the instructions of an algorithm, then for all cases, the
algorithm terminates after a finite number of steps.
5. Effectiveness: every instruction must be basic enough to be carried out; each
operation must be definite as in criterion 3, and it also must be feasible.
ALGORITHM BinRec(n)
//Computes recursively the number of binary digits in n's binary representation
if n == 1
    return 1
else
    return BinRec(⌊n/2⌋) + 1
Analysis:
1. Input size: given number = n
2. Basic operation: addition. Let A(n) denote the number of additions; then
A(n) = A(⌊n/2⌋) + 1 for n > 1, with A(1) = 0. Solving for n = 2^k:
A(2^k) = A(2^(k-1)) + 1 for k > 0; A(2^0) = 0
       = [A(2^(k-2)) + 1] + 1 = A(2^(k-2)) + 2
       = [A(2^(k-3)) + 1] + 2 = A(2^(k-3)) + 3
       ...
       = A(2^(k-i)) + i
When i = k, we have
A(2^k) = A(2^0) + k = k        (since A(2^0) = 0)
A(n) = log2 n
A(n) ∈ Θ(log n)
4. Explain the worst-case, best-case and average-case efficiencies of an algorithm
with the help of an example.
Worst-case, Best-case, Average case efficiencies
Algorithm efficiency depends on the input size n. And for some algorithms
efficiency depends on type of input. We have best, worst & average case efficiencies.
1. Decide on a parameter n indicating input size
2. Identify the algorithm's basic operation
3. Check whether the number of times the basic operation is executed depends only
on the input size n. If it also depends on the type of input, investigate worst, average,
and best case efficiency separately.
4. Set up recurrence relation, with an appropriate initial condition, for the number
of times the algorithm’s basic operation is executed.
ALGORITHM Factorial(n)
//Computes n! recursively
if n == 0
    return 1
else
    return Factorial(n - 1) * n
Analysis:
1. Input size: given number = n
2. Basic operation: multiplication. Let M(n) be the number of multiplications; the "+1"
below is the multiplication of Factorial(n - 1) by n:
M(n) = M(n - 1) + 1 for n > 0, M(0) = 0
= [M(n - 2) + 1] + 1 = M(n - 2) + 2
= [M(n - 3) + 1] + 2 = M(n - 3) + 3
...
= M(n - i) + i
When i = n, we have
M(n) = M(n - n) + n = M(0) + n = n        (since M(0) = 0)
M(n) ∈ Θ(n)
Euclid's algorithm: gcd(m, n) = gcd(n, m mod n)
Euclid's algorithm (structured version)
Step 1: if n = 0, return the value of m and stop; else proceed to Step 2
Euclid's algorithm (consecutive integer checking version)
Step 3: divide n by t; if the remainder is 0, return the value of t as the answer and stop,
otherwise proceed to Step 4
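A runnable transcription of Euclid's algorithm as stated above:

def gcd(m, n):
    """Euclid's algorithm: gcd(m, n) = gcd(n, m mod n); gcd(m, 0) = m."""
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(60, 24))   # 12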
7. Explain all asymptotic notations used in algorithm analysis
Asymptotic Notations
Asymptotic notation is a way of comparing functions that ignores constant factors
and small input sizes. Three notations used to compare orders of growth of an
algorithm’s basic operation count are: O, Ω, Θ notations
Definition:
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above
by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant
c and some nonnegative integer n0 such that
t(n) ≤ c g(n) for all n ≥ n0.
Definition:
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded
below by some constant multiple of g(n) for all large n, i.e., if there exist some positive
constant c and some nonnegative integer n0 such that
t(n) ≥ c g(n) for all n ≥ n0.
Definition:
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both
above and below by some constant multiples of g(n) for all large n, i.e., if there exist
some positive constants c1 and c2 and some nonnegative integer n0 such that
c2 g(n) ≤ t(n) ≤ c1 g(n) for all n ≥ n0.
Unit 2
1. What is brute-force method? Write a brute-force string matching algorithm.
Analyze its complexity.
Brute force is a straightforward approach to problem solving, usually directly based
on the problem's statement and definitions of the concepts involved. Though rarely a source of
clever or efficient algorithms, the brute-force approach should not be overlooked as an
important algorithm design strategy. Unlike some of the other strategies, brute force is
applicable to a very wide variety of problems. For some important problems (e.g., sorting,
searching, string matching), the brute-force approach yields reasonable algorithms of at
least some practical value, with no limitation on instance size. Even if too inefficient in
general, a brute-force algorithm can still be useful for solving small-size instances of a
problem. A brute-force algorithm can also serve an important theoretical or educational
purpose.
Given a string of n characters called the text and a string of m characters (m ≤ n) called the
pattern, find a substring of the text that matches the pattern. To put it more precisely, we
want to find i, the index of the leftmost character of the first matching
substring in the text, such that ti = p0, ..., ti+j = pj, ..., ti+m-1 = pm-1:
t0 . . . ti . . . ti+j . . . ti+m-1 . . . tn-1   text T
          p0 . . . pj . . . pm-1                 pattern P
1. Pattern: 001011
Text: 10010101101001100101111010
2. Pattern: happy
The algorithm shifts the pattern almost always after a single character comparison. In the
worst case, however, the algorithm may have to make all m comparisons before shifting the pattern,
and this can happen for each of the n - m + 1 tries. Thus, in the worst case, the algorithm is
in Θ(nm).
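A runnable sketch of the brute-force matcher just analyzed, applied to the first example pattern and text above:

def brute_force_match(text, pattern):
    """Return the index of the leftmost match, or -1 if there is none."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):            # n - m + 1 possible alignments
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:                        # all m characters matched
            return i
    return -1

print(brute_force_match("10010101101001100101111010", "001011"))   # 15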
2. Write the quicksort algorithm. Analyze its efficiency. Apply the algorithm to sort
the list: 5, 3, 1, 9, 8, 2, 4, 7.
ALGORITHM Quicksort (A[ l …r ])
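The body of the pseudocode is not reproduced here; a minimal runnable sketch of quicksort with a Hoare-style partition (pivot = first element), applied to the question's list:

def quicksort(A, l=0, r=None):
    """Sort A[l..r] in place by partitioning around A[l] (the pivot)."""
    if r is None:
        r = len(A) - 1
    if l < r:
        s = partition(A, l, r)       # pivot ends up in its final position s
        quicksort(A, l, s - 1)
        quicksort(A, s + 1, r)
    return A

def partition(A, l, r):
    pivot = A[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and A[i] < pivot:   # scan right for an element >= pivot
            i += 1
        j -= 1
        while A[j] > pivot:              # scan left for an element <= pivot
            j -= 1
        if i >= j:
            break
        A[i], A[j] = A[j], A[i]
    A[l], A[j] = A[j], A[l]              # put the pivot in its final position
    return j

print(quicksort([5, 3, 1, 9, 8, 2, 4, 7]))   # [1, 2, 3, 4, 5, 7, 8, 9]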
3. Write the algorithm for binary search and find the average case efficiency.
For binary search, C(n) = C(n/2) + 1, so in the master theorem:
a = 1, b = 2
f(n) = n^0; d = 0
Case 2 holds (a = b^d):
C(n) = Θ(n^d log n)
     = Θ(n^0 log n)
     = Θ(log n)
4. Discuss the merge sort algorithm with recursive tree and its efficiency. Apply the
same algorithm to sort the list {4,6,1,3,9,5,2,7}.
ALGORITHM Mergesort(A[0...n-1])
//Sorts array A by recursive mergesort
//Input: array A
//Output: sorted array A in ascending order
if n > 1
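The remainder of the pseudocode is not reproduced here; a minimal runnable sketch of mergesort, applied to the question's list:

def mergesort(A):
    """Divide, sort the two halves recursively, and merge."""
    if len(A) <= 1:
        return A
    mid = len(A) // 2
    return merge(mergesort(A[:mid]), mergesort(A[mid:]))

def merge(B, C):
    merged, i, j = [], 0, 0
    while i < len(B) and j < len(C):
        if B[i] <= C[j]:
            merged.append(B[i]); i += 1
        else:
            merged.append(C[j]); j += 1
    return merged + B[i:] + C[j:]     # append whichever half has leftovers

print(mergesort([4, 6, 1, 3, 9, 5, 2, 7]))   # [1, 2, 3, 4, 5, 6, 7, 9]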
5. Using the bubble sort algorithm, arrange the letters of the word "QUESTION" in
alphabetical order.
Bubble Sort
Compare adjacent elements of the list and exchange them if they are out of
order. Then we repeat the process. By doing it repeatedly, we end up 'bubbling up' the
largest element to the last position on the list.
ALGORITHM
for i ← 0 to n - 2 do
    for j ← 0 to n - 2 - i do
        if A[j + 1] < A[j]
            swap A[j] and A[j + 1]
The first 2 passes of bubble sort on the list 89, 45, 68, 90, 29, 34, 17. A new line is shown
after a swap of two elements is done. The elements to the right of the vertical bar are in
their final positions and are not considered in subsequent iterations of the algorithm
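Applying the pseudocode above to the letters of "QUESTION" (a runnable sketch):

def bubble_sort(A):
    """Repeatedly bubble the largest remaining element to the end."""
    n = len(A)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if A[j + 1] < A[j]:
                A[j], A[j + 1] = A[j + 1], A[j]
    return A

print(bubble_sort(list("QUESTION")))
# ['E', 'I', 'N', 'O', 'Q', 'S', 'T', 'U']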
6. Give the general divide and conquer recurrence and explain the same. Give the
master’s theorem .
General divide & conquer recurrence:
An instance of size n can be divided into b instances of size n/b, with "a" of them needing
to be solved. [a ≥ 1, b > 1]
Assume size n is a power of b. The recurrence for the running time T(n) is as follows:
T(n) = aT(n/b) + f(n)
where:
f(n) – a function that accounts for the time spent on dividing the problem into
smaller ones and on combining their solutions
Therefore, the order of growth of T(n) depends on the values of the constants a and b and
the order of growth of the function f(n).
Master theorem
Theorem: If f(n) ∈ Θ(n^d) with d ≥ 0 in the recurrence T(n) = aT(n/b) + f(n), then
T(n) ∈ Θ(n^d)              if a < b^d
T(n) ∈ Θ(n^d log n)        if a = b^d
T(n) ∈ Θ(n^(log_b a))      if a > b^d
Example:
Let T(n) = 2T(n/2) + 1; solve using the master theorem.
Solution:
Here: a = 2, b = 2, f(n) = Θ(1), d = 0
Therefore:
a > b^d, i.e., 2 > 2^0, so T(n) ∈ Θ(n^(log_2 2)) = Θ(n)
7. Explain the concept of divide and conquer methodology indicating three major
variations.(Jan 2015).
Divide & conquer is a general algorithm design strategy with a general plan as
follows:
1. DIVIDE:
A problem’s instance is divided into several smaller instances of
the same problem, ideally of about the same size.
2. RECUR:
Solve the sub-problem recursively.
3. CONQUER:
If necessary, the solutions obtained for the smaller instances are
combined to get a solution to the original instance.
The base case for the recursion is sub-problem of constant size.
Advantages of the Divide & Conquer technique:
• For solving conceptually difficult problems like the Tower of Hanoi,
divide & conquer is a powerful tool
• Results in efficient algorithms
• Divide & conquer algorithms are adapted for execution in
multi-processor machines
• Results in algorithms that use memory cache efficiently.
Unit-3
1. Justify the statement “prim’s algorithm always yields minimum cost spanning
tree”. Give the prim’s algorithm and discuss about its time complexity.
2. Give Dijkstra's algorithm. What is its complexity? Discuss with a simple
example. (Jan 2015 / Jan 2014 / June 13)
ALGORITHM Dijkstra(G, s)
//Input: Weighted connected graph G and source vertex s
//Output: The length Dv of a shortest path from s to v and its penultimate vertex Pv,
//for every vertex v in V
Initialize(Q)                  //initialize vertex priority queue to empty
for every vertex v in V do
    Dv ← ∞; Pv ← null          // Pv, the parent of v
    Insert(Q, v, Dv)           //initialize vertex priority in the priority queue
Ds ← 0
Decrease(Q, s, Ds)             //update priority of s with Ds, making Ds the minimum
VT ← ∅
for i ← 0 to |V| - 1 do
    u* ← DeleteMin(Q)          //expanding the tree, choosing the locally best vertex
    VT ← VT ∪ {u*}
    for every vertex u in V - VT that is adjacent to u* do
        if Du* + w(u*, u) < Du
            Du ← Du* + w(u*, u); Pu ← u*
            Decrease(Q, u, Du)
Conclusion:
• Doesn't work with negative weights
• Applicable to both undirected and directed graphs
• Using an unordered array to store the priority queue: efficiency = Θ(n²)
• Using a min-heap to store the priority queue: efficiency = O(m log n)
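A runnable counterpart of the pseudocode using a binary heap for the priority queue (illustrative; this gives the O(m log n) behavior noted above):

import heapq

def dijkstra(adj, s):
    """adj: {vertex: [(neighbor, weight), ...]}; returns distances and parents."""
    D = {v: float('inf') for v in adj}
    P = {v: None for v in adj}
    D[s] = 0
    heap = [(0, s)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue                 # stale queue entry; skip it
        done.add(u)
        for v, w in adj[u]:
            if d + w < D[v]:
                D[v], P[v] = d + w, u
                heapq.heappush(heap, (D[v], v))
    return D, P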
3. Use the kruskal’s method to solve the min cost spanning tree to the above graph.
The method:
STEP 1: Sort the edges by increasing weight
STEP 2: Start with a forest having |V| number of trees.
STEP 3: The number of trees is reduced by one at every inclusion of an edge. At each stage:
• Among the edges which are not yet included, select the one with minimum
weight AND which does not form a cycle.
• The edge will reduce the number of trees by one by combining two trees
of the forest.
The algorithm stops when |V| - 1 edges are included in the MST, i.e., when the
number of trees in the forest is reduced to one.
4. Solve the following instance of the single-source shortest path problem with
vertex 'a' as the source.
Solution:
Length Dv of shortest path from source (s) to other vertices v and Penultimate
vertex Pv
for every vertex v in V:
Da = 0 , Pa = null
Db = ∞ , Pb = null
Dc = ∞ , Pc = null
Dd = ∞ , Pd = null
De = ∞ , Pe = null
Df = ∞ , Pf = null
5. Solve the following instance of the single-source shortest path problem with
vertex 'a' as the source.
The above weighted graph has 5 vertices from A to E. The value between two vertices is
known as the edge cost between the two vertices. For example, the edge cost between A and C
is 1. Using the above graph, Dijkstra's algorithm is used to determine the shortest path from
the source A to the remaining vertices in the graph.
· Initial step
sDist[A] = 0; the value to the source itself
sDist[B] = ∞, sDist[C] = ∞, sDist[D] = ∞, sDist[E] = ∞; the nodes not processed yet
· Step 1
Adj[A] = {B, C}; computing the value of the adjacent vertices of the graph
sDist[B] = 4;
sDist[C]=2;
· Step 2
Computation from vertex C
Adj[C] = {B, D};
sDist[B] > sDist[C] + EdgeCost[C, B]
4 > 1 + 2 (True)
Therefore, sDist[B] = 3;
sDist[D] = 2;
· Step 4
Adj[E] = ∅; there are no outgoing edges from E.
With no more vertices to process, the algorithm terminates. Hence the path obtained by
the algorithm is
Figure: the path obtained using Dijkstra’s Algorithm
6. Using Kruskal's algorithm, obtain the minimum cost spanning tree for the graph given
below.
Kruskal's Algorithm
Input: A weighted, connected and undirected graph G = (V, E).
Output: A minimum spanning tree T for G.
Method:
1. T = ∅;
2. While T contains less than n-1 edges do   // n = |V|
   Choose an edge (u, v) from E of the smallest weight;
   If (the adding of (u, v) to T does not form a cycle in T) then
       Add (u, v) to T;
   Delete edge (u, v) from E;
3. Output T.
(c) By the disjoint union and find operations:
A tree in the forest is used to represent a SET. Two operations on
disjoint sets are as follows:
union(S1, S2): union S1 with S2 to form a new set S1.
find(u): return the name of the set containing element u.
Either of the above operations needs (nearly) O(1) time.
To check whether or not a cycle exists if edge (u, v) is added into
the current forest, we perform the following operations:
If (u, v) ∈ E and u, v are in the same set, then the addition of (u, v)
will form a cycle (perform find(u) and find(v) to get the sets
containing u and v, respectively).
If (u, v) ∈ E and u ∈ S1, v ∈ S2, where S1 and S2 are distinct sets, then perform union(S1, S2).
For example, for a spanning forest in the following, to check whether a cycle exists
if we add edge (3, 4), we must perform the following operations:
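A sketch of these disjoint-set operations (with the usual path-halving refinement, so the O(1) claim above is approximate):

# Disjoint-set forest; 'parent' maps each element to its parent.
parent = {}

def find(u):
    """Return the representative of the set containing u."""
    parent.setdefault(u, u)
    while parent[u] != u:
        parent[u] = parent[parent[u]]    # path halving speeds up later finds
        u = parent[u]
    return u

def union(u, v):
    """Merge the sets containing u and v."""
    parent[find(u)] = find(v)

# Cycle check for Kruskal: adding edge (u, v) creates a cycle
# iff find(u) == find(v) before the union.
union(1, 2); union(3, 4)
print(find(1) == find(2), find(1) == find(3))   # True False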
7. Using Prim's algorithm, obtain the minimum cost spanning tree for the graph given
below. (July 2014 / June 2013)
8. Write an algorithm for Kruskal's method and find its time complexity. (Jan 2015 / Jan 2013)
Algorithm:
ALGORITHM Kruskal(G)
//Kruskal's algorithm for constructing a MST
//Input: A weighted connected graph G = {V, E}
//Output: ET, the set of edges composing a MST of G
Sort E in ascending order of the edge weights
// initialize the set of tree edges and its size
ET ← ∅
edge_counter ← 0
//initialize the number of processed edges
k ← 0
while edge_counter < |V| - 1
    k ← k + 1
    if ET ∪ {e_k} is acyclic        // e_k: the kth edge in sorted order
        ET ← ET ∪ {e_k}
        edge_counter ← edge_counter + 1
return ET
Efficiency:
The efficiency of Kruskal's algorithm is based on the time needed for sorting the
edge weights of a given graph.
• With an efficient sorting algorithm: efficiency = Θ(|E| log |E|)
Unit 4
1. What are memory functions? Explain how they are used to solve the knapsack
problem. Solve the instance of the knapsack problem below. Capacity W= 5
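The instance table for this question is in a figure that is not reproduced here. As a hedged sketch of the memory-function (top-down, cached) approach, using a hypothetical instance with capacity W = 5:

def knapsack_memo(weights, values, W):
    """Top-down dynamic programming: compute F(i, j) only when needed."""
    F = {}                                   # cache of solved subproblems

    def mf(i, j):
        if i == 0 or j == 0:
            return 0
        if (i, j) not in F:
            skip = mf(i - 1, j)              # do not take item i
            if weights[i - 1] <= j:          # take item i, if it fits
                take = values[i - 1] + mf(i - 1, j - weights[i - 1])
                F[(i, j)] = max(skip, take)
            else:
                F[(i, j)] = skip
        return F[(i, j)]

    return mf(len(weights), W)

# Hypothetical instance with W = 5 (the notes' own table is in a figure).
print(knapsack_memo([2, 1, 3, 2], [12, 10, 20, 15], 5))   # 37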
2. State all-pairs shortest path algorithm. Solve the given problem using the below
figure
3. Using Warshall’s algorithm, obtain the transitive closure of the matrix given
below
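The matrix for this question is likewise in a figure; for illustration, a sketch of Warshall's algorithm on a small hypothetical boolean adjacency matrix:

def warshall(A):
    """Transitive closure: R[i][j] becomes 1 iff j is reachable from i."""
    n = len(A)
    R = [row[:] for row in A]               # copy; R(0) is the adjacency matrix
    for k in range(n):                       # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

A = [[0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0], [1, 0, 1, 0]]
print(warshall(A))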
4. State all-pairs shortest path algorithm. Solve the given problem using the matrix
given below
5. Using the Dynamic programming .solve the following Knapsack instance.
6. Write the algorithm to find the shortest path using Floyd’s approach
Algorithm:
ALGORITHM Floyd(W[1..n, 1..n])
// Implements Floyd's algorithm
// Input: Weight matrix W
// Output: Distance matrix of the shortest paths' lengths
D ← W
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i, j] ← min { D[i, j], D[i, k] + D[k, j] }
return D
7. Explain dynamic programming. List out the differences between divide and conquer
and dynamic programming.
Dynamic Programming Properties
Unit 5
1. Differentiate between DFS and BFS tree traversals. Explain how the DFS algorithm
can be used to obtain a topological sorting, with an example.
Topological Sorting
Description:
DFS Method:
• Perform a DFS traversal and note the order in which vertices become dead ends
(popped order)
• Reverse the order to yield the topological sorting.
2. Write an algorithm for topological sort of a diagraph using DFS algorithm.
Prove the correctness of the algorithm.
ALGORITHM DFS(G)
//Implements DFS traversal of a given graph
//Input: Graph G = {V, E}
//Output: DFS tree
Mark each vertex in V with 0 as a mark of being "unvisited"
count ← 0
for each vertex v in V do
    if v is marked with 0
        dfs(v)

dfs(v)
count ← count + 1
mark v with count
for each vertex w in V adjacent to v do
    if w is marked with 0
        dfs(w)
3. Show how insertion sort arranges the following members in
increasing order. 89, 45, 68, 90, 29, 34, 17
Algorithm:
ALGORITHM InsertionSort(A[0 ... n-1])
//Sorts a given array by insertion sort
//Input: Array A[0...n-1]
//Output: Sorted array A[0...n-1] in ascending order
for i ← 1 to n-1
    v ← A[i]
    j ← i-1
    while j ≥ 0 AND A[j] > v do
        A[j+1] ← A[j]
        j ← j - 1
    A[j + 1] ← v
Analysis:
• Input size: Array size, n
• Basic operation: key comparison
• Best, worst, and average cases exist.
Best case: when the input is an array already sorted in ascending order.
Worst case: when the input is an array sorted in descending order.
• Let Cworst(n) be the number of key comparisons in the worst case. Then
Cworst(n) = n(n-1)/2 ∈ Θ(n²).
Example:
Sort the following list of elements using insertion sort:
89, 45, 68, 90, 29, 34, 17
89 45 68 90 29 34 17
45 89 68 90 29 34 17
45 68 89 90 29 34 17
45 68 89 90 29 34 17
29 45 68 89 90 34 17
29 34 45 68 89 90 17
17 29 34 45 68 89 90
4. What are the three major variations of decrease and conquer technique explain
each with an example.(june 2013).
Decrease & conquer is a general algorithm design strategy based on
exploiting the relationship between a solution to a given instance of a
problem and a solution to a smaller instance of the same problem.
The exploitation can be either top-down (recursive) or bottom-up (non-recursive).
The major variations of decrease and conquer are
1. Decrease by a constant :(usually by 1):
a. insertion sort
b. graph traversal algorithms (DFS and BFS)
c. topological sorting
d. algorithms for generating permutations, subsets
2. Decrease by a constant factor (usually by half): e.g., binary search
5. What do you mean by topological sort? Give its applications. Explain source
removal method to find topological sorting.
Source removal method:
• Purely based on decrease & conquer
• Repeatedly identify in the remaining digraph a source, which is a vertex with no
incoming edges
• Delete it along with all the edges outgoing from it.
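A sketch of the source-removal method (an in-degree count plus a queue of current sources; illustrative, not the notes' code):

from collections import deque

def topological_sort(adj):
    """Source removal: repeatedly take a vertex with no incoming edges."""
    indeg = {v: 0 for v in adj}
    for v in adj:
        for u in adj[v]:
            indeg[u] += 1
    queue = deque(v for v in adj if indeg[v] == 0)   # current sources
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for u in adj[v]:                  # "delete" v's outgoing edges
            indeg[u] -= 1
            if indeg[u] == 0:
                queue.append(u)
    return order if len(order) == len(adj) else None  # None: graph has a cycle

print(topological_sort({'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}))
# ['a', 'b', 'c', 'd']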
UNIT 6
1. Draw the decision tree for 3-element insertion sort.
Decision Trees
A decision tree is a convenient model of algorithms involving comparisons, in which:
(i) internal nodes represent comparisons, and (ii) leaves represent outcomes.
Decision tree for 3-element insertion sort:
2. Solve the instance of the knapsack problem using the branch-and-bound algorithm.
Item     1  2  3  4
Weight   4  7  5  3
How to find the upper bound?
• Ans: by quickly finding a feasible solution in a greedy manner: starting
from the smallest available i, scan towards the largest i's until M is
exceeded. The upper bound can then be calculated.
The 0/1 knapsack problem
• E.g. n = 6, M = 34
i    1   2   3   4   5   6
Pi   6  10   4   5   6   4
Wi  10  19   8  10  12   8
(Pi/Wi ≥ Pi+1/Wi+1)
• A feasible solution: X1 = 1, X2 = 1, X3 = 0, X4 = 0, X5 = 0, X6 = 0
-(P1+P2) = -16 (upper bound)
Any solution higher than -16 cannot be an optimal solution.
Let -Σ(i=1..n) PiXi be an optimal solution for the 0/1 knapsack problem and
-Σ(i=1..n) PiXi' be an optimal solution for the fractional knapsack problem.
Let Y = -Σ PiXi and Y' = -Σ PiXi'. Then Y' ≤ Y.
3. Define the following: i) tractable problem ii) class P iii) class NP iv)
polynomial reduction v) NP-complete problems.
i) Tractable problem: a problem that can be solved in polynomial time.
ii) Class P
P: the class of decision problems that are solvable in O(p(n)) time, where p(n) is a
polynomial of the problem's input size n
Examples:
• searching
• element uniqueness
• graph connectivity
• graph acyclicity
• primality testing (finally proved in 2002)
iii)Class NP
NP (nondeterministic polynomial): class of decision problems whose proposed
solutions can be
verified in polynomial time = solvable by a nondeterministic polynomial
algorithm
A nondeterministic polynomial algorithm is an abstract two-stage procedure that:
• generates a random string purported to solve the problem
• checks whether this solution is correct in polynomial time
By definition, it solves the problem if it's capable of generating and verifying a
solution on one of its tries
Why this definition?
• led to development of the rich theory called “computational complexity”
P = NP ?
• One of the most important unsolved problems in computer science is whether or not
P = NP.
– If P = NP, then a huge number of problems currently believed to be very
difficult would turn out to have efficient algorithms.
– If P ≠ NP, then those problems definitely do not have polynomial-time
solutions.
• Most computer scientists suspect that P ≠ NP. These suspicions are
based partly on the idea of NP-completeness.
iv) Polynomial Reduction; Polynomial (P) Problems
• Are solvable in polynomial time
• Are solvable in O(nk), where k is some constant.
• Most of the algorithms we have covered are in P
Nondeterministic Polynomial (NP) Problems
• This class of problems has solutions that are verifiable in polynomial time.
– Thus any problem in P is also in NP: since we can solve it in
polynomial time, we can also verify it in polynomial time
V) NP-Complete Problems
A decision problem D is NP-complete if it’s as hard as any problem in NP, i.e.,
• D is in NP
• every problem in NP is polynomial-time reducible to D
[Figure: the class of NP problems, showing a known NP-complete problem reduced to a candidate for NP-completeness]
Examples: TSP, knapsack, partition, graph-coloring and hundreds of other
problems of combinatorial nature
General Definitions: P, NP, NP-hard, NP-easy, and NP-complete
• Problems
- Decision problems (yes/no)
- Optimization problems (solution with best score)
• P
- Decision problems that can be solved in polynomial time
- can be solved "efficiently"
• NP
- Decision problems whose "YES" answer can be verified in polynomial
time, if we already have the proof (or witness)
• co-NP
- Decision problems whose "NO" answer can be verified in polynomial time,
if we already have the proof (or witness)
• e.g. the satisfiability problem (SAT)
- Given a Boolean formula
(x1 ∨ x2 ∨ x3 ∨ x4) ∧ (x5 ∨ x6 ∨ x7) ∧ x8 ∧ x9
• Requirement for Reduction
- Polynomial time
- YES to A also implies YES to SAT, while NO to A also implies NO to SAT
NP-Complete Problems
• Is an NP-Problem
• Is at least as difficult as an NP problem (is reducible to it)
• More formally, a decision problem C is NP-Complete if,
– C is in NP
– Any known NP-hard (or complete) problem ≤p C
– Thus a proof must show these two being satisfied
Decision Problems
Represent problem as a decision with a boolean output
– Easier to solve when comparing to other problems
– Hence all problems are converted to decision problems.
P = {all decision problems that can be solved in polynomial time}
NP = {all decision problems where a proposed solution can be verified in
polynomial time}
NP-complete: the subset of NP which are the "hardest problems"
Decision Trees:
UNIT-7
1. Explain how backtracking is used for solving the 4-queens problem.
Show the state-space tree.
N-Queens Problem:
• The object is to place queens on a chess board in such a way that no
queen can capture another one in a single move
Recall that a queen can move horizontally, vertically, or diagonally an infinite distance.
This implies that no two queens can be on the same row, column, or diagonal.
We usually want to know how many different placements there are.
4-Queens
• Let's take a look at the simple problem of placing 4 queens on a 4x4
board
• The brute-force solution is to place the first queen, then the second, third,
and fourth
– After all are placed we determine if they are placed legally
• There are 16 spots for the first queen, 15 for the second, etc.
– Leading to 16*15*14*13 = 43,680 different combinations
• Obviously this isn't a good way to solve the problem
• First let's use the fact that no two queens can be in the same column to help us
– That means we get to place a queen in each column
• So we can place the first queen into the first column, the second into the
second, etc.
• This cuts down on the amount of work
– Now there are 4 spots for the first queen, 4 spots for the second,
etc.
• So now what do we do?
• Well, this is very much like solving a maze
– As you walk through the maze you have to make a series of choices
– If one of your choices leads to a dead end, you need to back up to the
last choice you made and take a different route.
2. Differentiate between back tracking and branch- and –bound algorithms
Backtracking : A Scenario
• Branch-and-bound is usually much more efficient than
exhaustive search in the average case.
• 2 mechanisms:
A mechanism to generate branches when searching the solution space
A mechanism to generate a bound so that many branches can be terminated
• It is efficient in the average case because many branches can be
terminated very early.
• Although it is usually very efficient, a very large tree may be generated in
the worst case.
• Many NP-hard problems can be solved by B&B efficiently in the average
case; however, the worst-case time complexity is still exponential.
Bounding
A bound on a node is a guarantee that any solution obtained from expanding
the node will be:
– Greater than some number (lower bound)
– Or less than some number (upper bound)
If we are looking for a minimal optimum, as we are in weighted graph
coloring, then we need a lower bound
For example, if the best solution we have found so far has a cost of 12 and
the lower bound on a node is 15, then there is no point in expanding the node
The node cannot lead to anything better than 15
• We can compute a lower bound for weighted graph coloring in the following
way:
– The actual cost of getting to the node
– Plus a bound on the future cost
• Min weight color * number of nodes still to color
That is, the future cost cannot be any better than this
• Recall that we could either perform a depth-first or a breadth-first search
– Without bounding, it didn't matter which one we used because we
had to expand the entire tree to find the optimal solution
– Does it matter with bounding?
• Hint: think about when you can prune via bounding
• We prune (via bounding) when:
(currentBestSolutionCost <= nodeBound)
• This tells us that we get more pruning if:
– The currentBestSolution is low
– And the nodeBound is high
• So we want to find a low solution quickly and we want the highest
possible lower bound
– One has to factor in the extra computation cost of computing
higher lower bounds vs. the expected pruning savings
4. Explain approximation algorithms for NP-hard problems in general. Also discuss
the approximation algorithms for the knapsack problem.
Approximation Approach
Apply a fast (i.e., a polynomial-time) approximation algorithm to get a
solution that is not necessarily optimal but hopefully close to it.
Accuracy measures:
accuracy ratio of an approximate solution sa
r(sa) = f(sa) / f(s*) for minimization problems
r(sa) = f(s*) / f(sa) for maximization problems
where f(sa) and f(s*) are the values of the objective function f for the
approximate solution sa and the actual optimal solution s*
performance ratio of the algorithm A: the lowest upper bound of r(sa) over all instances
Lower bound: Any solution to this problem will have total cost
at least: 2 + 3 + 1 + 4 (or 5 + 2 + 1 + 4)
Example: Complete state-space tree
UNIT 8
1. What is the prefix computation problem? Give the algorithm for prefix
computation which uses i) n processors, ii) n/log n processors. Obtain the time complexities
of these algorithms.
PRAM Recursive Prefix Sum Algorithm
Input: Array of (x1, x2, …, xn) elements, n = 2^k
Output: Prefix sums s_i, 1 ≤ i ≤ n
begin
    if n = 1 then s_1 = x_1; exit
    for 1 ≤ i ≤ n/2 pardo
        y_i := x_{2i-1} + x_{2i}
    Recursively compute the prefix sums of y and store them in z
    for 1 ≤ i ≤ n pardo
        if i is even then s_i := z_{i/2}
        if i > 1 is odd then s_i := z_{(i-1)/2} + x_i
        if i = 1 then s_1 := x_1
end
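A sequential simulation of this recursive algorithm, with each "pardo" loop written as an ordinary loop (illustrative only; a real PRAM would execute those loops in parallel):

def prefix_sums(x):
    """Recursive prefix sums; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return [x[0]]
    # pardo step 1: pairwise sums y_i = x_{2i-1} + x_{2i}
    y = [x[2 * i] + x[2 * i + 1] for i in range(n // 2)]
    z = prefix_sums(y)                 # recursive prefix sums of y
    # pardo step 2: expand z back into the prefix sums of x
    s = [0] * n
    for i in range(1, n + 1):          # 1-based index, as in the pseudocode
        if i == 1:
            s[0] = x[0]
        elif i % 2 == 0:
            s[i - 1] = z[i // 2 - 1]
        else:
            s[i - 1] = z[(i - 1) // 2 - 1] + x[i - 1]
    return s

print(prefix_sums([1, 2, 3, 4, 5, 6, 7, 8]))
# [1, 3, 6, 10, 15, 21, 28, 36]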
Theorem: An algorithm that runs in T time on the p-processor priority CRCW PRAM can
be simulated by an EREW PRAM to run in O(T log p) time. A concurrent read or write of a
p-processor CRCW PRAM can be implemented on a p-processor EREW PRAM to execute in
O(log p) time.
Let Q1, ..., Qp be the CRCW processors, such that Qi has to read (write) M[ji].
Let P1, ..., Pp be the EREW processors, and let M1, ..., Mp denote shared memory locations for
special use. Pi stores <ji, i> in Mi.
Sort the pairs in lexicographically non-decreasing order in O(log p) time using the EREW merge-sort
algorithm.
Pick a representative from each block of pairs that have the same first component, in O(1) time.
Representative Pi reads (writes) from M[k] with <k, _> in Mi, and copies the data to each M in the
block in O(log p) time using the EREW segmented parallel prefix algorithm.
Pi reads data from Mi.
if i < n/2^h then
    C'[i,j,l] := C'[i,j,2l-1] + C'[i,j,2l]
if l = 1 then
    C[i,j] := C'[i,j,1]
End