
Module-2

Brute Force Approaches (contd.)


Exhaustive search:
 Exhaustive search is simply a brute-force approach to combinatorial problems. It
suggests generating each and every element of the problem domain, selecting those of
them that satisfy all the constraints, and then finding a desired element.
 We illustrate exhaustive search by applying it to two important problems: the traveling
salesman problem and the knapsack problem.

Traveling salesman problem:


The traveling salesman problem (TSP) is one of the combinatorial problems. The problem asks
to find the shortest tour through a given set of n cities that visits each city exactly once before
returning to the city where it started.
The problem can be conveniently modeled by a weighted graph, with the graph’s vertices
representing the cities and the edge weights specifying the distances.
Then the problem can be stated as the problem of finding the shortest Hamiltonian circuit of the
graph. (A Hamiltonian circuit is defined as a cycle that passes through all the vertices of the
graph exactly once).
A Hamiltonian circuit can also be defined as a sequence of n + 1 adjacent vertices
v_i0, v_i1, . . . , v_i(n−1), v_i0, where the first vertex of the sequence is the same as the
last one and all the other n − 1 vertices are distinct. All circuits start and end at one
particular vertex. The figure presents a small instance of the problem and its solution by this method.

Time efficiency
• We can get all the tours by generating all the permutations of the n − 1 intermediate cities
from a particular starting city, i.e., (n − 1)! permutations.
• Consider two intermediate vertices, say b and c, and keep only the permutations in which b
precedes c. (This trick implicitly defines a tour's direction.)
An inspection of the figure reveals three pairs of tours that differ only by their direction. Hence, we
could cut the number of vertex permutations by half, because the total lengths of a cycle traversed
in the two directions are the same. The total number of permutations needed is still (n − 1)!/2,
which makes the exhaustive-search approach impractical for all but very small values of n.
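
A minimal Python sketch of this exhaustive search follows; the 4-city distance matrix below is an illustrative assumption, not an instance from the text:

    from itertools import permutations

    def tsp_brute_force(dist):
        """Exhaustive search: try every permutation of the intermediate
        cities, keeping tours that start and end at city 0."""
        n = len(dist)
        best_tour, best_len = None, float("inf")
        for perm in permutations(range(1, n)):          # (n - 1)! candidate tours
            tour = (0,) + perm + (0,)
            length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        return best_tour, best_len

    # Example symmetric distance matrix for 4 cities (values are illustrative).
    d = [[0, 2, 5, 7],
         [2, 0, 8, 3],
         [5, 8, 0, 1],
         [7, 3, 1, 0]]
    print(tsp_brute_force(d))   # ((0, 1, 3, 2, 0), 11)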

Knapsack problem:
Given n items of known weights w1, w2, . . ., wn and values v1, v2, . . . , vn and a knapsack of
capacity W, find the most valuable subset of the items that fit into the knapsack.
Real-life examples:
• A thief who wants to steal the most valuable loot that fits into his knapsack;
• A transport plane that has to deliver the most valuable set of items to a remote location
without exceeding the plane's capacity.
The exhaustive-search approach to this problem leads to generating all the subsets of the set of n
items given, computing the total weight of each subset in order to identify feasible subsets (i.e.,
the ones with the total weight not exceeding the knapsack capacity), and finding a subset of the
largest value among them.
Example 1: see Figure (a); its solution is shown in Figure (b).
Time efficiency: Since the number of subsets of an n-element set is 2^n, exhaustive search
leads to an Ω(2^n) algorithm, no matter how efficiently individual subsets are generated.
Note: Exhaustive search of both the traveling salesman and knapsack problems leads to
extremely inefficient algorithms on every input. In fact, these two problems are the best-known
examples of NP-hard problems. No polynomial-time algorithm is known for any NP-hard
problem, and most computer scientists believe that such algorithms do not exist. Sophisticated
approaches such as backtracking and branch-and-bound enable us to solve some (but not all)
instances of these problems in less than exponential time. Alternatively, we can use one of many
approximation algorithms.

2) Given n items:
• weights: w1 w2 … wn
• values: v1 v2 … vn
• a knapsack of capacity W
Find the most valuable subset of the items that fits into the knapsack.
Example: Knapsack capacity W = 16

item  weight  value
1     2       $20
2     5       $30
3     10      $50
4     5       $10
Subset Total weight Total value
{1} 2 $20
{2} 5 $30
{3} 10 $50
{4} 5 $10
{1,2} 7 $50
{1,3} 12 $70
{1,4} 7 $30
{2,3} 15 $80
{2,4} 10 $40
{3,4} 15 $60
{1,2,3} 17 not feasible
{1,2,4} 12 $60
{1,3,4} 17 not feasible
{2,3,4} 20 not feasible
{1,2,3,4} 22 not feasible
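
From the table, the most valuable feasible subset is {2, 3}, with total weight 15 and total value $80. A minimal Python sketch of the exhaustive search over all 2^n subsets (the function name and 0-indexed subset encoding are illustrative choices):

    from itertools import combinations

    def knapsack_brute_force(weights, values, capacity):
        """Exhaustive search: generate all 2^n subsets, keep the feasible
        ones, and return the most valuable among them."""
        n = len(weights)
        best_subset, best_value = (), 0
        for r in range(n + 1):
            for subset in combinations(range(n), r):
                w = sum(weights[i] for i in subset)
                v = sum(values[i] for i in subset)
                if w <= capacity and v > best_value:
                    best_subset, best_value = subset, v
        return best_subset, best_value

    # The instance from the table above (items indexed from 0).
    print(knapsack_brute_force([2, 5, 10, 5], [20, 30, 50, 10], 16))
    # -> ((1, 2), 80), i.e. items {2, 3} with total value $80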

DECREASE AND CONQUER


Decrease & conquer is a general algorithm design strategy based on exploiting the relationship
between a solution to a given instance of a problem and a solution to a smaller instance of the
same problem. The exploitation can be either top-down (recursive) or bottom-up (non-
recursive).
The major variations of decrease and conquer are
 decrease by a constant
Ex : Insertion sort, Graph traversal algorithms (DFS and BFS), Topological sorting, Algorithms for
generating permutations, subsets
 decrease by a constant factor
Ex: Binary search
 variable size decrease
Ex: Euclid's algorithm
Decrease by a constant (usually by 1):
In the decrease-by-a-constant variation, the size of an instance is reduced by the same
constant on each iteration of the algorithm. Typically, this constant is equal to one (as
shown in the figure), although other constant size reductions do happen occasionally.

Consider, as an example, the exponentiation problem of computing a^n for positive integer
exponents. The relationship between a solution to an instance of size n and an instance of
size n − 1 is given by the obvious formula a^n = a^(n−1) · a.
So the function f(n) = a^n can be computed either top down, by using its recursive
definition, or bottom up, by multiplying 1 by a n times.
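
A minimal Python sketch of both computations (the function names are illustrative):

    def power_topdown(a, n):
        """Top-down: a^n = a^(n-1) * a, decreasing the exponent by one each call."""
        if n == 0:
            return 1
        return power_topdown(a, n - 1) * a

    def power_bottomup(a, n):
        """Bottom-up: multiply an accumulator by a, n times."""
        result = 1
        for _ in range(n):
            result *= a
        return result

    print(power_topdown(2, 10), power_bottomup(2, 10))  # 1024 1024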

Decrease by a constant factor (usually by 2):

The decrease-by-a-constant-factor technique suggests reducing a problem instance by the
same constant factor on each iteration of the algorithm. In most applications, this
constant factor is equal to two. The decrease-by-half idea is illustrated in the following
figure.
For an example, let us revisit the exponentiation problem. If the instance of size n is to
compute a^n, the instance of half its size is to compute a^(n/2), with the obvious
relationship between the two: a^n = (a^(n/2))^2.
But since we consider here instances with integer exponents only, this formula does not
work for odd n. If n is odd, we have to compute a^(n−1) by using the rule for even-valued
exponents and then multiply the result by a. To summarize, we have the following
formula:

a^n = (a^(n/2))^2           if n is even and positive,
a^n = (a^((n−1)/2))^2 · a   if n is odd,
a^n = 1                     if n = 0.
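
A minimal Python sketch of this decrease-by-half scheme, which uses only O(log n) multiplications:

    def fast_power(a, n):
        """Decrease-by-a-constant-factor exponentiation."""
        if n == 0:
            return 1
        half = fast_power(a, n // 2)
        if n % 2 == 0:               # a^n = (a^(n/2))^2
            return half * half
        return half * half * a       # a^n = (a^((n-1)/2))^2 * a

    print(fast_power(3, 13))  # 1594323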

Variable-size decrease:

Finally, in the variable-size-decrease variety of decrease-and-conquer, the size-
reduction pattern varies from one iteration of an algorithm to another. Euclid's
algorithm for computing the greatest common divisor provides a good example of such
a situation. This algorithm is based on the formula gcd(m, n) = gcd(n, m mod n).
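
A minimal Python sketch of Euclid's algorithm in its iterative form; note that the size of the second argument decreases by a variable amount on each iteration:

    def gcd(m, n):
        """Euclid's algorithm: gcd(m, n) = gcd(n, m mod n)."""
        while n != 0:
            m, n = n, m % n
        return m

    print(gcd(60, 24))  # 12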

Insertion sort:
Insertion sort is a simple sorting algorithm that works the way we sort playing cards in
our hands.
 Insertion sort works by taking elements from the list one by one and inserting each into
its correct position in a sorted sub-list.
 In each iteration, insertion sort places one unsorted element at its suitable place.
 This is an in-place comparison-based sorting algorithm. Here, a sub-list is maintained
which is always sorted; for example, the lower part of the array is kept sorted. An
element which is to be inserted into this sorted sub-list has to find its appropriate place
and then be inserted there. Hence the name, insertion sort.
 The array is scanned sequentially and unsorted items are moved and inserted into the
sorted sub-list (in the same array). This algorithm is not suitable for large data sets, as its
average-case and worst-case complexity are O(n^2), where n is the number of items.

Example:
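
A minimal runnable Python sketch of insertion sort (the sample input is illustrative):

    def insertion_sort(a):
        """Decrease-by-one sorting: grow a sorted prefix a[0..i-1] by
        inserting a[i] into its correct position."""
        for i in range(1, len(a)):
            v = a[i]
            j = i - 1
            while j >= 0 and a[j] > v:   # shift larger elements right
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = v
        return a

    print(insertion_sort([89, 45, 68, 90, 29, 34, 17]))
    # [17, 29, 34, 45, 68, 89, 90]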

Topological Sort

Motivation for topological sorting


Consider a set of five required courses {C1, C2, C3, C4, C5} a part-time student has to
take in some degree program. The courses can be taken in any order as long as the
following course prerequisites are met: C1 and C2 have no prerequisites, C3 requires C1
and C2, C4 requires C3, and C5 requires C3 and C4. The student can take only one
course per term. In which order should the student take the courses?
The situation can be modeled by a digraph in which vertices represent courses and
directed edges indicate prerequisite requirements.

In terms of this digraph, the question is whether we can list its vertices in such an order
that for every edge in the graph, the vertex where the edge starts is listed before the
vertex where the edge ends. In other words, can you find such an ordering of this
digraph’s vertices? This problem is called topological sorting.

Topological Sort
 It is a linear ordering of graph vertices such that for every directed edge uv from vertex u
to vertex v, u comes before v in the ordering.
 Applicable on DAG (Directed Acyclic Graph)
 Complexity is linear O(V+E) i.e., no of vertices plus no. of edges

Topological Sorting based on DFS


Method
1. Perform a DFS traversal and note the order in which vertices become dead ends (i.e., are
popped off the traversal stack).
2. Reversing this order yields a solution to the topological sorting problem, provided, of
course, no back edge has been encountered during the traversal. If a back edge has been
encountered, the digraph is not a dag, and topological sorting of its vertices is
impossible.
Illustration
a) Digraph for which the topological sorting problem needs to be solved.
b) DFS traversal stack with the subscript numbers indicating the popping-off order.
c) Solution to the problem. Here we have drawn the edges of the digraph, and they all
point from left to right as the problem's statement requires. It is a convenient way to
check visually the correctness of a solution to an instance of the topological sorting
problem.
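
A minimal Python sketch of the DFS-based method, applied to the course-prerequisite digraph from the motivation above (the adjacency-list encoding is an assumption; back-edge detection is omitted for brevity):

    def topo_sort_dfs(graph):
        """Record each vertex when it becomes a dead end, then reverse
        that order. graph maps each vertex to a list of its successors."""
        visited, order = set(), []

        def dfs(v):
            visited.add(v)
            for w in graph.get(v, []):
                if w not in visited:
                    dfs(w)
            order.append(v)              # v is now a dead end

        for v in graph:
            if v not in visited:
                dfs(v)
        return order[::-1]

    g = {"C1": ["C3"], "C2": ["C3"], "C3": ["C4", "C5"], "C4": ["C5"], "C5": []}
    print(topo_sort_dfs(g))  # ['C2', 'C1', 'C3', 'C4', 'C5']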
Topological Sorting using decrease-and-conquer technique:
Method:
The algorithm is based on a direct implementation of the decrease-(by one)-and-conquer
technique:
1. Repeatedly, identify in a remaining digraph a source, which is a vertex with no incoming
edges, and delete it along with all the edges outgoing from it. (If there are several sources, break
the tie arbitrarily. If there are none, stop because the problem cannot be solved.)
2. The order in which the vertices are deleted yields a solution to the topological sorting
problem.
Illustration - Illustration of the source-removal algorithm for the topological sorting problem is
given here. On each iteration, a vertex with no incoming edges is deleted from the digraph.
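
A minimal Python sketch of the source-removal method (the in-degree bookkeeping is an implementation choice):

    def topo_sort_source_removal(graph):
        """Repeatedly delete a source (a vertex with in-degree 0)
        together with all of its outgoing edges."""
        indegree = {v: 0 for v in graph}
        for v in graph:
            for w in graph[v]:
                indegree[w] += 1
        order = []
        sources = [v for v in graph if indegree[v] == 0]
        while sources:
            v = sources.pop()            # tie among sources broken arbitrarily
            order.append(v)
            for w in graph[v]:
                indegree[w] -= 1
                if indegree[w] == 0:
                    sources.append(w)
        if len(order) != len(graph):     # leftover vertices mean a cycle
            raise ValueError("digraph is not a dag; topological sort impossible")
        return order

    print(topo_sort_source_removal(
        {"C1": ["C3"], "C2": ["C3"], "C3": ["C4", "C5"], "C4": ["C5"], "C5": []}))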

Note: The solution obtained by the source-removal algorithm is different from the one obtained
by the DFS-based algorithm. Both of them are correct, of course; the topological sorting
problem may have several alternative solutions.
Applications of Topological Sorting
 Instruction scheduling in program compilation
 Cell evaluation ordering in spreadsheet formulas
 Resolving symbol dependencies in linkers
DIVIDE AND CONQUER METHODOLOGY
A divide-and-conquer algorithm works by recursively breaking down a
problem into two or more sub-problems of the same (or related) type (divide), until
these become simple enough to be solved directly (conquer).
Divide-and-conquer algorithms work according to the following general plan:
1. A problem is divided into several subproblems of the same type, ideally of
about equal size.
2. The subproblems are solved (typically recursively, though sometimes a
different algorithm is employed, especially when subproblems become
small enough).
3. If necessary, the solutions to the subproblems are combined to get a solution to
the original problem.

The divide-and-conquer technique is shown in the figure, which depicts the case of
dividing a problem into two smaller subproblems; the subproblems are then solved
separately. Finally, a solution to the original problem is obtained by combining the
solutions of the subproblems.

Divide-and-conquer technique.
The divide-and-conquer methodology can be readily applied to the following problems:
1. Merge sort
2. Quick sort
3. Binary search

MERGE SORT:
Merge sort is based on the divide-and-conquer technique. It sorts a given array
A[0..n−1] by dividing it into two halves A[0..⌊n/2⌋−1] and A[⌊n/2⌋..n−1], sorting each of
them recursively, and then merging the two smaller sorted arrays into a single sorted
one.
ALGORITHM Mergesort(A[0..n − 1])
//Sorts array A[0..n − 1] by recursive mergesort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
if n > 1
    copy A[0..⌊n/2⌋ − 1] to B[0..⌊n/2⌋ − 1]
    copy A[⌊n/2⌋..n − 1] to C[0..⌈n/2⌉ − 1]
    Mergesort(B[0..⌊n/2⌋ − 1])
    Mergesort(C[0..⌈n/2⌉ − 1])
    Merge(B, C, A)

The merging of two sorted arrays can be done as follows. Two pointers (array
indices) are initialized to point to the first elements of the arrays being merged. The
elements pointed to are compared, and the smaller of them is added to a new array being
constructed; after that, the index of the smaller element is incremented to point to its
immediate successor in the array it was copied from. This operation is repeated until
one of the two given arrays is exhausted, and then the remaining elements of the other
array are copied to the end of the new array.
ALGORITHM Merge(B[0..p − 1], C[0..q − 1], A[0..p + q − 1])
//Merges two sorted arrays into one sorted array
//Input: Arrays B[0..p − 1] and C[0..q − 1] both sorted
//Output: Sorted array A[0..p + q − 1] of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do
    if B[i] ≤ C[j]
        A[k] ← B[i]; i ← i + 1
    else
        A[k] ← C[j]; j ← j + 1
    k ← k + 1
if i = p
    copy C[j..q − 1] to A[k..p + q − 1]
else
    copy B[i..p − 1] to A[k..p + q − 1]
The operation of the algorithm on the list 8, 3, 2, 9, 7, 1, 5, 4 is illustrated in Figure
FIGURE Example of mergesort operation.
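
A runnable Python transcription of the two routines above; as a simplification of the pseudocode, it returns a new list rather than sorting in place:

    def mergesort(a):
        """Split in half, sort each half recursively, then merge."""
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        return merge(mergesort(a[:mid]), mergesort(a[mid:]))

    def merge(b, c):
        """Merge two sorted lists into one sorted list."""
        result, i, j = [], 0, 0
        while i < len(b) and j < len(c):
            if b[i] <= c[j]:
                result.append(b[i]); i += 1
            else:
                result.append(c[j]); j += 1
        result.extend(b[i:])   # copy the unexhausted remainder
        result.extend(c[j:])   # (one of these two slices is empty)
        return result

    print(mergesort([8, 3, 2, 9, 7, 1, 5, 4]))   # [1, 2, 3, 4, 5, 7, 8, 9]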

The recurrence relation for the number of key comparisons C(n) is

C(n) = 2C(n/2) + Cmerge(n) for n > 1, C(1) = 0.

In the worst case, Cmerge(n) = n − 1, and we have the recurrence

Cworst(n) = 2Cworst(n/2) + n − 1 for n > 1, Cworst(1) = 0.

By the Master Theorem, Cworst(n) ∈ Θ(n log n).

The exact solution to the worst-case recurrence for n = 2^k is

Cworst(n) = n log2 n − n + 1.

For large n, the number of comparisons made by this algorithm in the average case
turns out to be about 0.25n less and hence is also in Θ(n log n).
QUICK SORT
Quicksort is the other important sorting algorithm based on the divide-and-conquer
approach. Unlike merge sort, quicksort divides its input elements according to their value. A
partition is an arrangement of the array's elements so that all the elements to the left of some
element A[s] are less than or equal to A[s], and all the elements to the right of A[s] are greater
than or equal to it:

A[0] . . . A[s − 1]   A[s]   A[s + 1] . . . A[n − 1]
(all are ≤ A[s])             (all are ≥ A[s])

After a partition is achieved, A[s] is in its final position, and we can sort the two subarrays
to the left and to the right of A[s] independently. No work is required to combine the
solutions to the subproblems.
Quicksort is invoked as Quicksort(A[0..n − 1]); as the partition algorithm, the Hoare
partition is used.
FIGURE Tree of recursive calls to Quicksort with input values l and r of subarray bounds and
split position s of a partition obtained.
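
A minimal runnable Python sketch of quicksort with Hoare's partition; the index conventions below are one common variant, an assumption rather than the text's own (unreproduced) pseudocode:

    def quicksort(a, l=0, r=None):
        """Partition around a pivot value, then sort the two
        subarrays on either side of the split independently."""
        if r is None:
            r = len(a) - 1
        if l < r:
            s = hoare_partition(a, l, r)
            quicksort(a, l, s)        # with Hoare's scheme, recurse on [l..s]
            quicksort(a, s + 1, r)    # and [s+1..r]
        return a

    def hoare_partition(a, l, r):
        """Scan from both ends, swapping out-of-place pairs."""
        pivot = a[l]
        i, j = l - 1, r + 1
        while True:
            i += 1
            while a[i] < pivot:
                i += 1
            j -= 1
            while a[j] > pivot:
                j -= 1
            if i >= j:
                return j
            a[i], a[j] = a[j], a[i]

    print(quicksort([5, 3, 1, 9, 8, 2, 4, 7]))  # [1, 2, 3, 4, 5, 7, 8, 9]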

The number of key comparisons in the best case satisfies the recurrence

Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0.

By the Master Theorem, Cbest(n) ∈ Θ(n log2 n); solving it exactly for n = 2^k yields
Cbest(n) = n log2 n.
In the worst case, the splits are maximally skewed (e.g., for an already sorted array), and the
total number of key comparisons made is

Cworst(n) = (n + 1) + n + . . . + 3 = (n + 1)(n + 2)/2 − 3 ∈ Θ(n^2).

Binary tree traversals:


A binary tree T is defined as a finite set of nodes that is either empty or consists of a root and two
disjoint binary trees TL and TR called, respectively, the left and right subtree of the root. We usually
think of a binary tree as a special case of an ordered tree.
Since the definition itself divides a binary tree into two smaller structures of the same type, the left
subtree and the right subtree, many problems about binary trees can be solved by applying the
divide-and-conquer technique.
Let us consider a recursive algorithm for computing the height of a binary tree. Recall that the height
is defined as the length of the longest path from the root to a leaf. Hence, it can be computed as the
maximum of the heights of the root’s left and right subtrees plus 1.
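
A minimal Python sketch of this height computation, representing a binary tree as nested (value, left, right) tuples with None for the empty tree (an illustrative encoding):

    def height(t):
        """Height = max of the subtree heights plus 1; the empty
        tree has height -1 by convention."""
        if t is None:
            return -1
        _, left, right = t
        return max(height(left), height(right)) + 1

    # Root "a" with children "b" (which has a right child "d") and "c".
    t = ("a", ("b", None, ("d", None, None)), ("c", None, None))
    print(height(t))  # 2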

 The most important divide-and-conquer algorithms for binary trees are the three classic
traversals: preorder, inorder, and postorder.
 All three traversals visit nodes of a binary tree recursively, i.e., by visiting the tree’s root and
its left and right subtrees.
 They differ only by the timing of the root’s visit:
 In the preorder traversal, the root is visited before the left and right subtrees are visited (in
that order).
 In the inorder traversal, the root is visited after visiting its left subtree but before visiting the
right subtree.
 In the postorder traversal, the root is visited after visiting the left and right subtrees (in that
order).
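
A minimal Python sketch of the three traversals, using the same (value, left, right) tuple encoding as the height sketch above:

    def preorder(t):
        if t is None:
            return []
        v, left, right = t
        return [v] + preorder(left) + preorder(right)    # root, left, right

    def inorder(t):
        if t is None:
            return []
        v, left, right = t
        return inorder(left) + [v] + inorder(right)      # left, root, right

    def postorder(t):
        if t is None:
            return []
        v, left, right = t
        return postorder(left) + postorder(right) + [v]  # left, right, root

    t = ("a", ("b", None, None), ("c", None, None))
    print(preorder(t), inorder(t), postorder(t))
    # ['a', 'b', 'c'] ['b', 'a', 'c'] ['b', 'c', 'a']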
MULTIPLICATION OF LARGE INTEGERS

Some applications like modern cryptography require manipulation of integers that


are over 100 decimal digits long. Since such integers are too long to fit in a single word of a
modern computer, they require special treatment.

In the conventional pen-and-pencil algorithm for multiplying two n-digit integers, each
of the n digits of the first number is multiplied by each of the n digits of the second number,
for a total of n^2 digit multiplications.

The divide-and-conquer method does this multiplication with fewer than n^2 digit
multiplications.

Example: 23 ∗ 14 = (2 · 10^1 + 3 · 10^0) ∗ (1 · 10^1 + 4 · 10^0)

= (2 ∗ 1)10^2 + (2 ∗ 4 + 3 ∗ 1)10^1 + (3 ∗ 4)10^0

= 2 · 10^2 + 11 · 10^1 + 12 · 10^0

= 3 · 10^2 + 2 · 10^1 + 2 · 10^0

= 322

The middle term can be computed as 2 ∗ 4 + 3 ∗ 1 = (2 + 3) ∗ (1 + 4) − (2 ∗ 1) − (3 ∗ 4).

Here (2 ∗ 1) and (3 ∗ 4) have already been computed, so only one additional multiplication
is needed.

For any pair of two-digit numbers a = a1a0 and b = b1b0, their product c can be computed by
the formula

c = a ∗ b = c2 · 10^2 + c1 · 10^1 + c0,

where

c2 = a1 ∗ b1 is the product of their first digits,

c0 = a0 ∗ b0 is the product of their second digits,

c1 = (a1 + a0) ∗ (b1 + b0) − (c2 + c0) is the product of the sum of
the a's digits and the sum of the b's digits minus the sum
of c2 and c0.
Now we apply this trick to multiplying two n-digit integers a and b where n is a
positive even number. Let us divide both numbers in the middle to take advantage of the
divide-and-conquer technique.

We denote the first half of a's digits by a1 and the second half by a0; for b, the
notations are b1 and b0, respectively. In these notations, a = a1a0 implies that
a = a1 · 10^(n/2) + a0, and b = b1b0 implies that b = b1 · 10^(n/2) + b0. Therefore,
taking advantage of the same trick we used for two-digit numbers, we get

c = a ∗ b = (a1 · 10^(n/2) + a0) ∗ (b1 · 10^(n/2) + b0)

= (a1 ∗ b1)10^n + (a1 ∗ b0 + a0 ∗ b1)10^(n/2) + (a0 ∗ b0)

= c2 · 10^n + c1 · 10^(n/2) + c0,

where

c2 = a1 ∗ b1 is the product of their first halves,

c0 = a0 ∗ b0 is the product of their second halves,

c1 = (a1 + a0) ∗ (b1 + b0) − (c2 + c0).

If n/2 is even, we can apply the same method for computing the products c2, c0, and
c1. Thus, if n is a power of 2, we have a recursive algorithm for computing the product of
two n-digit integers. In its pure form, the recursion is stopped when n becomes 1. It can
also be stopped when we deem n small enough to multiply the numbers of that size directly.

Since the multiplication of n-digit numbers requires three multiplications of n/2-digit
numbers, the recurrence for the number of multiplications M(n) is

M(n) = 3M(n/2) for n > 1, M(1) = 1.

Solving it by backward substitutions for n = 2^k yields

M(2^k) = 3M(2^(k−1)) = 3[3M(2^(k−2))] = 3^2 M(2^(k−2)) = . . .

= 3^i M(2^(k−i)) = . . . = 3^k M(2^(k−k)) = 3^k.

Since k = log2 n,

M(n) = 3^(log2 n) = n^(log2 3) ≈ n^1.585.

Example: For instance, let a = 2345 and b = 6137, i.e., n = 4. Then

c = a ∗ b = (23 · 10^2 + 45) ∗ (61 · 10^2 + 37)

= (a1 · 10^(n/2) + a0) ∗ (b1 · 10^(n/2) + b0)

= (a1 ∗ b1)10^n + (a1 ∗ b0 + a0 ∗ b1)10^(n/2) + (a0 ∗ b0)

= (23 ∗ 61)10^4 + (23 ∗ 37 + 45 ∗ 61)10^2 + (45 ∗ 37)

= 1403 · 10^4 + 3596 · 10^2 + 1665

= 14391265
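
A minimal Python sketch of this divide-and-conquer multiplication (often called Karatsuba's algorithm); splitting the numbers by decimal digits via divmod is an implementation choice:

    def karatsuba(a, b):
        """Multiply with three recursive half-size multiplications:
        c2 = a1*b1, c0 = a0*b0, c1 = (a1+a0)*(b1+b0) - (c2+c0)."""
        if a < 10 or b < 10:             # small enough: multiply directly
            return a * b
        n = max(len(str(a)), len(str(b)))
        half = n // 2
        a1, a0 = divmod(a, 10 ** half)   # split each number in the middle
        b1, b0 = divmod(b, 10 ** half)
        c2 = karatsuba(a1, b1)
        c0 = karatsuba(a0, b0)
        c1 = karatsuba(a1 + a0, b1 + b0) - (c2 + c0)
        return c2 * 10 ** (2 * half) + c1 * 10 ** half + c0

    print(karatsuba(2345, 6137))  # 14391265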

STRASSEN'S MATRIX MULTIPLICATION

Strassen's matrix multiplication finds the product C of two 2 × 2 matrices A and B
with just seven multiplications, as opposed to the eight required by the brute-force
algorithm, where:
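
The seven products and the resulting entries of C (the standard Strassen formulas, restoring the display referenced here) are:

m1 = (a00 + a11) ∗ (b00 + b11)
m2 = (a10 + a11) ∗ b00
m3 = a00 ∗ (b01 − b11)
m4 = a11 ∗ (b10 − b00)
m5 = (a00 + a01) ∗ b11
m6 = (a10 − a00) ∗ (b00 + b01)
m7 = (a01 − a11) ∗ (b10 + b11)

c00 = m1 + m4 − m5 + m7
c01 = m3 + m5
c10 = m2 + m4
c11 = m1 + m3 − m2 + m6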

Thus, to multiply two 2 × 2 matrices, Strassen's algorithm makes 7 multiplications
and 18 additions/subtractions, whereas the brute-force algorithm requires 8
multiplications and 4 additions. These numbers alone should not lead us to multiply
2 × 2 matrices by Strassen's algorithm; its importance stems from its asymptotic
superiority as the matrix order n goes to infinity.

Let A and B be two n × n matrices where n is a power of 2. (If n is not a power of
2, the matrices can be padded with rows and columns of zeros.) We can divide A, B, and
their product C into four n/2 × n/2 submatrices each as follows:

The value C00 can be computed either as A00 ∗ B00 + A01 ∗ B10 or as M1 + M4 − M5 + M7,
where M1, M4, M5, and M7 are found by Strassen's formulas, with the numbers replaced by the
corresponding submatrices. The seven products of n/2 × n/2 matrices are computed recursively by
Strassen's matrix multiplication algorithm.

The asymptotic efficiency of Strassen's matrix multiplication algorithm

If M(n) is the number of multiplications made by Strassen's algorithm in multiplying
two n × n matrices, where n is a power of 2, the recurrence relation is

M(n) = 7M(n/2) for n > 1, M(1) = 1.

Since n = 2^k,

M(2^k) = 7M(2^(k−1)) = 7[7M(2^(k−2))] = 7^2 M(2^(k−2)) = . . . = 7^i M(2^(k−i)) = . . .

= 7^k M(2^(k−k)) = 7^k M(2^0) = 7^k M(1) = 7^k (since M(1) = 1).

M(2^k) = 7^k.

Since k = log2 n,

M(n) = 7^(log2 n) = n^(log2 7) ≈ n^2.807,

which is smaller than the n^3 multiplications required by the brute-force algorithm.

Since this saving in the number of multiplications was achieved at the expense
of making extra additions, we must check the number of additions A(n) made by
Strassen's algorithm. To multiply two matrices of order n > 1, the algorithm needs to
multiply seven matrices of order n/2 and make 18 additions/subtractions of matrices
of size n/2; when n = 1, no additions are made since two numbers are simply multiplied.
These observations yield the following recurrence relation:

A(n) = 7A(n/2) + 18(n/2)^2 for n > 1, A(1) = 0.

By the closed-form solution to this recurrence and the Master Theorem, A(n) ∈ Θ(n^(log2 7)),
which is a better efficiency class than the Θ(n^3) of the brute-force method.
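
A compact Python sketch of the recursive algorithm for matrices whose order is a power of 2, using the formulas above; the list-of-lists representation and helper names are illustrative:

    def strassen(A, B):
        """Strassen multiplication for n x n matrices, n a power of 2:
        seven half-size products instead of eight."""
        n = len(A)
        if n == 1:
            return [[A[0][0] * B[0][0]]]
        h = n // 2
        a00, a01, a10, a11 = split(A, h)   # quadrants of A
        b00, b01, b10, b11 = split(B, h)   # quadrants of B
        m1 = strassen(madd(a00, a11), madd(b00, b11))
        m2 = strassen(madd(a10, a11), b00)
        m3 = strassen(a00, msub(b01, b11))
        m4 = strassen(a11, msub(b10, b00))
        m5 = strassen(madd(a00, a01), b11)
        m6 = strassen(msub(a10, a00), madd(b00, b01))
        m7 = strassen(msub(a01, a11), madd(b10, b11))
        c00 = madd(msub(madd(m1, m4), m5), m7)
        c01 = madd(m3, m5)
        c10 = madd(m2, m4)
        c11 = madd(msub(madd(m1, m3), m2), m6)
        # reassemble the four quadrants into one matrix
        top = [c00[i] + c01[i] for i in range(h)]
        bot = [c10[i] + c11[i] for i in range(h)]
        return top + bot

    def split(M, h):
        return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
                [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

    def madd(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def msub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]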
