Ada Mod 2
Time efficiency of exhaustive search for the traveling salesman problem
• We can get all the tours by generating all the permutations of n − 1 intermediate cities from a particular city, i.e., (n − 1)! tours.
• Consider two intermediate vertices, say b and c, and then consider only permutations in which b precedes c. (This trick implicitly defines a tour’s direction.)
An inspection of the figure reveals three pairs of tours that differ only by their direction. Hence, we can cut the number of vertex permutations in half because the cycle lengths in both directions are the same. The total number of permutations needed is still (n − 1)!/2, which makes the exhaustive-search approach impractical for large n; it is useful only for very small values of n.
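To make this concrete, here is a minimal Python sketch of the exhaustive-search approach (illustrative only; the 4-city distance matrix is made-up). It fixes city 0 as the start and tries all (n − 1)! permutations of the remaining cities:

from itertools import permutations

def tsp_exhaustive(dist):
    """Return (best_length, best_tour) by trying all (n-1)! tours
    that start and end at city 0."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):   # (n - 1)! permutations
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Made-up 4-city instance (symmetric distance matrix)
dist = [[0, 2, 5, 7],
        [2, 0, 8, 3],
        [5, 8, 0, 1],
        [7, 3, 1, 0]]
print(tsp_exhaustive(dist))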
Knapsack problem:
Given n items of known weights w1, w2, . . ., wn and values v1, v2, . . . , vn and a knapsack of
capacity W, find the most valuable subset of the items that fit into the knapsack.
Real-world examples:
• A Thief who wants to steal the most valuable loot that fits into his knapsack,
• A transport plane that has to deliver the most valuable set of items to a remote location
without exceeding the plane’s capacity.
The exhaustive-search approach to this problem leads to generating all the subsets of the set of n
items given, computing the total weight of each subset in order to identify feasible subsets (i.e.,
the ones with the total weight not exceeding the knapsack capacity), and finding a subset of the
largest value among them.
Example 1: an instance of the knapsack problem is shown in Figure (a).
Time efficiency: The solution to the instance of Figure (a) is given in Figure (b). Since the number of subsets of an n-element set is 2^n, the exhaustive search leads to an Ω(2^n) algorithm, no matter how efficiently individual subsets are generated.
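As a concrete illustration, here is a minimal Python sketch of this exhaustive search (the item data below are a made-up instance): it enumerates all 2^n subsets, discards the infeasible ones, and keeps the most valuable feasible subset.

from itertools import combinations

def knapsack_exhaustive(weights, values, W):
    """Try every subset of the n items; keep the most valuable one
    whose total weight does not exceed the capacity W."""
    n = len(weights)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):   # all 2^n subsets overall
            w = sum(weights[i] for i in subset)
            v = sum(values[i] for i in subset)
            if w <= W and v > best_value:
                best_value, best_subset = v, subset
    return best_value, best_subset

# Made-up instance: 4 items, knapsack capacity 10
print(knapsack_exhaustive([7, 3, 4, 5], [42, 12, 40, 25], 10))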
Note: Exhaustive search of both the traveling salesman and knapsack problems leads to
extremely inefficient algorithms on every input. In fact, these two problems are the best known
examples of NP-hard problems. No polynomial-time algorithm is known for any NP-hard
problem. Moreover, most computer scientists believe that such algorithms do not exist. Sophisticated approaches like backtracking and branch-and-bound enable us to solve some, but not all, instances of these problems in less than exponential time. Alternatively, we can use one of many approximation algorithms.
Insertion sort:
Insertion sort is a simple sorting algorithm that works the way we sort playing cards in
our hands.
Insertion sort works by taking elements from the list one by one and inserting each into its correct position in the sorted part of the list.
Insertion sort is a sorting algorithm that places an unsorted element at its suitable place
in each iteration.
This is an in-place comparison-based sorting algorithm. Here, a sub-list is maintained
which is always sorted. For example, the lower part of an array is maintained to be
sorted. An element which is to be inserted in this sorted sub-list, has to find its
appropriate place and then it has to be inserted there. Hence the name, insertion sort.
The array is searched sequentially and unsorted items are moved and inserted into the sorted sub-list (in the same array). This algorithm is not suitable for large data sets, as its average and worst-case complexities are O(n^2), where n is the number of items.
Example:
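As an illustration, here is a minimal Python sketch of insertion sort (the input list is a made-up example):

def insertion_sort(a):
    """Sort list a in place: grow a sorted sublist a[0..i-1] and
    insert a[i] into its proper place at every iteration."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift sorted elements greater than key one position right
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([89, 45, 68, 90, 29, 34, 17]))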
Topological Sort
Given a digraph, the question is whether we can list its vertices in such an order that for every edge in the graph, the vertex where the edge starts is listed before the vertex where the edge ends. This problem is called topological sorting.
• It is a linear ordering of the graph’s vertices such that for every directed edge uv from vertex u to vertex v, u comes before v in the ordering.
• It is applicable only to DAGs (Directed Acyclic Graphs).
• Its complexity is linear, O(|V| + |E|), i.e., the number of vertices plus the number of edges.
Note: The solution obtained by the source-removal algorithm is different from the one obtained
by the DFS-based algorithm. Both of them are correct, of course; the topological sorting
problem may have several alternative solutions.
Applications of Topological Sorting
• Instruction scheduling in program compilation
• Cell evaluation ordering in spreadsheet formulas
• Resolving symbol dependencies in linkers
DIVIDE AND CONQUER METHODOLOGY
A divide-and-conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same (or related) type (divide), until these become simple enough to be solved directly (conquer).
Divide-and-conquer algorithms work according to the following general plan:
1. A problem is divided into several sub problems of the same type, ideally of
about equal size.
2. The sub problems are solved (typically recursively, though sometimes a different algorithm is employed, especially when sub problems become small enough).
3. If necessary, the solutions to the sub problems are combined to get a solution to the original problem.
FIGURE Divide-and-conquer technique.
The divide-and-conquer methodology can be easily applied to the following problems:
1. Merge sort
2. Quick sort
3. Binary search
MERGE SORT:
Merge sort is based on the divide-and-conquer technique. It sorts a given array A[0..n − 1] by dividing it into two halves A[0..⌊n/2⌋ − 1] and A[⌊n/2⌋..n − 1], sorting each of them recursively, and then merging the two smaller sorted arrays into a single sorted one.
ALGORITHM Mergesort(A[0..n − 1])
//Sorts array A[0..n − 1] by recursive mergesort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
if n > 1
    copy A[0..⌊n/2⌋ − 1] to B[0..⌊n/2⌋ − 1]
    copy A[⌊n/2⌋..n − 1] to C[0..⌈n/2⌉ − 1]
    Mergesort(B[0..⌊n/2⌋ − 1])
    Mergesort(C[0..⌈n/2⌉ − 1])
    Merge(B, C, A) //see below
The merging of two sorted arrays can be done as follows. Two pointers (array
indices) are initialized to point to the first elements of the arrays being merged. The
elements pointed to are compared, and the smaller of them is added to a new array being
constructed; after that, the index of the smaller element is incremented to point to its
immediate successor in the array it was copied from. This operation is repeated until
one of the two given arrays is exhausted, and then the remaining elements of the other array are copied to the end of the new array.
ALGORITHM Merge(B[0..p − 1], C[0..q − 1], A[0..p + q − 1])
//Merges two sorted arrays into one sorted array
//Input: Arrays B[0..p − 1] and C[0..q − 1] both sorted
//Output: Sorted array A[0..p + q − 1] of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do
    if B[i] ≤ C[j]
        A[k] ← B[i]; i ← i + 1
    else
        A[k] ← C[j]; j ← j + 1
    k ← k + 1
if i = p
    copy C[j..q − 1] to A[k..p + q − 1]
else
    copy B[i..p − 1] to A[k..p + q − 1]
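The two routines translate directly into Python; the following is an illustrative transcription, not part of the original pseudocode:

def mergesort(a):
    """Sort list a in nondecreasing order by recursive mergesort."""
    if len(a) > 1:
        mid = len(a) // 2
        b, c = a[:mid], a[mid:]      # divide into two halves
        mergesort(b)
        mergesort(c)
        merge(b, c, a)               # combine the sorted halves

def merge(b, c, a):
    """Merge sorted lists b and c into a (len(a) == len(b) + len(c))."""
    i = j = k = 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            a[k] = b[i]; i += 1
        else:
            a[k] = c[j]; j += 1
        k += 1
    # Copy the remaining tail of whichever list is not yet exhausted
    a[k:] = b[i:] if j == len(c) else c[j:]

lst = [8, 3, 2, 9, 7, 1, 5, 4]
mergesort(lst)
print(lst)   # [1, 2, 3, 4, 5, 7, 8, 9]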
The operation of the algorithm on the list 8, 3, 2, 9, 7, 1, 5, 4 is illustrated in Figure
FIGURE Example of mergesort operation.
For large n, the number of comparisons made by this algorithm in the average case turns out to be about 0.25n less than in the worst case (Cworst(n) = n log2 n − n + 1 for n = 2^k) and hence is also in Θ(n log n).
QUICK SORT
Quicksort is the other important sorting algorithm that is based on the divide-and-
conquer approach. Quick sort divides input elements according to their value. A partition is an
arrangement of the array’s elements so that all the elements to the left of some element A[s]
are less than or equal to A[s], and all the elements to the right of A[s] are greater than or equal
to it:
A[0] . . . A[s − 1]   A[s]   A[s + 1] . . . A[n − 1]
(all are ≤ A[s])              (all are ≥ A[s])
Sort the two subarrays to the left and to the right of A[s] independently. No work
required to combine the solutions to the sub problems.
Here is the pseudocode of quicksort; the initial call is Quicksort(A[0..n − 1]), and the Hoare partition is used as the partition algorithm:
ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: Subarray of array A[0..n − 1], defined by its left and right indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
if l < r
    s ← HoarePartition(A[l..r]) //s is a split position
    Quicksort(A[l..s − 1])
    Quicksort(A[s + 1..r])
FIGURE Tree of recursive calls to Quicksort with input values l and r of subarray bounds and
split position s of a partition obtained.
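For illustration, here is a minimal Python sketch of quicksort with a Hoare-style partition (bounds checks are added so it runs without sentinels; the input list is made-up):

def hoare_partition(a, l, r):
    """Partition a[l..r] around the pivot p = a[l]; return the split
    position s such that a[l..s-1] <= a[s] <= a[s+1..r]."""
    p = a[l]
    i, j = l + 1, r
    while True:
        while i <= r and a[i] < p:   # scan right for an element >= p
            i += 1
        while a[j] > p:              # scan left for an element <= p
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
        i += 1
        j -= 1
    a[l], a[j] = a[j], a[l]          # place the pivot at its final spot
    return j

def quicksort(a, l=0, r=None):
    """Sort a[l..r] in place; the initial call sorts the whole list."""
    if r is None:
        r = len(a) - 1
    if l < r:
        s = hoare_partition(a, l, r)  # split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)

lst = [5, 3, 1, 9, 8, 2, 4, 7]        # made-up input
quicksort(lst)
print(lst)                            # [1, 2, 3, 4, 5, 7, 8, 9]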
The number of key comparisons in the best case satisfies the recurrence
Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0.
By the Master Theorem, Cbest(n) ∈ Θ(n log2 n); solving it exactly for n = 2^k yields Cbest(n) = n log2 n. In the worst case (e.g., an already sorted array), the total number of key comparisons made is
Cworst(n) = (n + 1) + n + . . . + 3 = (n + 1)(n + 2)/2 − 3 ∈ Θ(n^2).
BINARY TREE TRAVERSALS
The most important divide-and-conquer algorithms for binary trees are the three classic traversals: preorder, inorder, and postorder.
All three traversals visit nodes of a binary tree recursively, i.e., by visiting the tree’s root and
its left and right subtrees.
They differ only by the timing of the root’s visit:
In the preorder traversal, the root is visited before the left and right subtrees are visited (in
that order).
In the inorder traversal, the root is visited after visiting its left subtree but before visiting the
right subtree.
In the postorder traversal, the root is visited after visiting the left and right subtrees (in that order).
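A minimal Python sketch of the three traversals on a small made-up tree (illustrative only):

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def preorder(t):
    """Visit root before the left and right subtrees."""
    return [t.key] + preorder(t.left) + preorder(t.right) if t else []

def inorder(t):
    """Visit root after the left subtree, before the right one."""
    return inorder(t.left) + [t.key] + inorder(t.right) if t else []

def postorder(t):
    """Visit root after the left and right subtrees."""
    return postorder(t.left) + postorder(t.right) + [t.key] if t else []

# Made-up tree:   a
#                / \
#               b   c
#              / \
#             d   e
root = Node("a", Node("b", Node("d"), Node("e")), Node("c"))
print(preorder(root))    # ['a', 'b', 'd', 'e', 'c']
print(inorder(root))     # ['d', 'b', 'e', 'a', 'c']
print(postorder(root))   # ['d', 'e', 'b', 'c', 'a']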
MULTIPLICATION OF LARGE INTEGERS
In the conventional pen-and-pencil algorithm for multiplying two n-digit integers, each of the n digits of the first number is multiplied by each of the n digits of the second number, for a total of n^2 digit multiplications.
The divide-and-conquer method does the above multiplication in fewer than n^2 digit multiplications. For example, for two-digit numbers:
23 ∗ 14 = (2 · 10 + 3) ∗ (1 · 10 + 4) = (2 ∗ 1)10^2 + (2 ∗ 4 + 3 ∗ 1)10 + (3 ∗ 4) = 322
For any pair of two-digit numbers a = a1a0 and b = b1b0, their product c can be computed by the formula
c = a ∗ b = c2 · 10^2 + c1 · 10^1 + c0,
where
c2 = a1 ∗ b1 is the product of their first digits,
c0 = a0 ∗ b0 is the product of their second digits,
c1 = (a1 + a0) ∗ (b1 + b0) − (c2 + c0) is the product of the sum of the a’s digits and the sum of the b’s digits minus the sum of c2 and c0.
Now we apply this trick to multiplying two n-digit integers a and b where n is a positive even number. Let us divide both numbers in the middle to take advantage of the divide-and-conquer technique. We denote the first half of the a’s digits by a1 and the second half by a0; for b, the notations are b1 and b0, respectively. In these notations, a = a1a0 implies that a = a1 · 10^(n/2) + a0, and b = b1b0 implies that b = b1 · 10^(n/2) + b0. Therefore, taking advantage of the same trick we used for two-digit numbers, we get
c = a ∗ b = (a1 · 10^(n/2) + a0) ∗ (b1 · 10^(n/2) + b0)
  = (a1 ∗ b1)10^n + (a1 ∗ b0 + a0 ∗ b1)10^(n/2) + (a0 ∗ b0)
  = c2 · 10^n + c1 · 10^(n/2) + c0,
where
c2 = a1 ∗ b1 is the product of the first halves,
c0 = a0 ∗ b0 is the product of the second halves,
c1 = (a1 + a0) ∗ (b1 + b0) − (c2 + c0).
If n/2 is even, we can apply the same method for computing the products c2, c0, and
c1. Thus, if n is a power of 2, we have a recursive algorithm for computing the product of
two n-digit integers. In its pure form, the recursion is stopped when n becomes 1. It can
also be stopped when we deem n small enough to multiply the numbers of that size directly.
If M(n) is the number of digit multiplications, it satisfies the recurrence M(n) = 3M(n/2) for n > 1, M(1) = 1. Solving it by backward substitutions for n = 2^k yields M(2^k) = 3^k. Since k = log2 n,
M(n) = 3^(log2 n) = n^(log2 3) ≈ n^1.585.
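A minimal Python sketch of this method (commonly known as Karatsuba multiplication; the numeric test values are made-up): it works on nonnegative integers, splits each factor in the middle, and stops the recursion at single digits.

def multiply(a, b):
    """Multiply nonnegative integers a and b with three recursive
    half-size multiplications instead of four."""
    if a < 10 or b < 10:                 # base case: single-digit factor
        return a * b
    n = max(len(str(a)), len(str(b)))
    half = n // 2
    a1, a0 = divmod(a, 10 ** half)       # split a = a1*10^half + a0
    b1, b0 = divmod(b, 10 ** half)       # split b = b1*10^half + b0
    c2 = multiply(a1, b1)
    c0 = multiply(a0, b0)
    c1 = multiply(a1 + a0, b1 + b0) - (c2 + c0)
    return c2 * 10 ** (2 * half) + c1 * 10 ** half + c0

print(multiply(2135, 4014))   # 8569890
print(2135 * 4014)            # cross-check with built-in multiplication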
STRASSEN'S MATRIX MULTIPLICATION
Strassen’s matrix multiplication finds the product C of two 2 × 2 matrices A and B with just seven multiplications, as opposed to the eight required by the brute-force algorithm, where
M1 = (A00 + A11) ∗ (B00 + B11)
M2 = (A10 + A11) ∗ B00
M3 = A00 ∗ (B01 − B11)
M4 = A11 ∗ (B10 − B00)
M5 = (A00 + A01) ∗ B11
M6 = (A10 − A00) ∗ (B00 + B01)
M7 = (A01 − A11) ∗ (B10 + B11)
and
C00 = M1 + M4 − M5 + M7    C01 = M3 + M5
C10 = M2 + M4              C11 = M1 + M3 − M2 + M6
The value C00 can be computed either as A00 * B00 + A01 * B10 or as M1 + M4 − M5+ M7
where M1, M4, M5, and M7 are found by Strassen’s formulas, with the numbers replaced by the
corresponding submatrices. The seven products of n/2 × n/2 matrices are computed recursively by
Strassen’s matrix multiplication algorithm.
If M(n) is the number of multiplications made by Strassen’s algorithm, then M(n) = 7M(n/2) for n > 1, M(1) = 1; hence M(2^k) = 7^k, and since k = log2 n, M(n) = 7^(log2 n) = n^(log2 7) ≈ n^2.807.
Since this savings in the number of multiplications was achieved at the expense
of making extra additions, we must check the number of additions A(n) made by
Strassen’s algorithm. To multiply two matrices of order n>1, the algorithm needs to
multiply seven matrices of order n/2 and make 18 additions/subtractions of matrices
of size n/2; when n = 1, no additions are made since two numbers are simply multiplied. These observations yield the following recurrence relation:
A(n) = 7A(n/2) + 18(n/2)^2 for n > 1, A(1) = 0.
By the closed-form solution to this recurrence and the Master Theorem, A(n) ∈ Θ(n^(log2 7)), which is a better efficiency class than the Θ(n^3) of the brute-force method.
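To tie the formulas together, here is a minimal Python sketch of the 2 × 2 base case, which can be checked against the brute-force result (the matrices are made-up):

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications
    using Strassen's formulas."""
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 + m3 - m2 + m6]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(strassen_2x2(A, B))   # [[19, 22], [43, 50]], same as brute force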