Module2 2024

The document discusses various algorithm design strategies, focusing on exhaustive search, divide-and-conquer, and decrease-and-conquer techniques. It provides examples such as the Traveling Salesman Problem, Knapsack Problem, and Mergesort, highlighting their efficiencies and methodologies. Additionally, it covers advanced topics like Strassen's Matrix Multiplication and the implications of algorithm complexity in practical applications.

Uploaded by

Sujatha JSSATE

MODULE 2

Exhaustive Search
Exhaustive search is a brute-force approach to problems that involve searching for an
element with a special property, usually among combinatorial objects such as
permutations, combinations, or subsets of a set.

Method:
– generate a list of all potential solutions to the problem in a systematic manner
– evaluate potential solutions one by one, disqualifying infeasible ones and, for an
optimization problem, keeping track of the best one found so far
– when the search ends, announce the solution(s) found


Example 1: Traveling Salesman Problem

• Given n cities with known distances between each pair, find the shortest tour that
passes through all the cities exactly once before returning to the starting city
• Alternatively: Find shortest Hamiltonian circuit in a weighted connected graph
• Example: a complete graph on four cities a, b, c, d with distances
  a–b = 2, a–c = 8, a–d = 5, b–c = 3, b–d = 4, c–d = 7
TSP by Exhaustive Search

Tour Cost
a→b→c→d→a 2+3+7+5 = 17
a→b→d→c→a 2+4+7+8 = 21
a→c→b→d→a 8+3+4+5 = 20
a→c→d→b→a 8+7+4+2 = 21
a→d→b→c→a 5+4+3+8 = 20
a→d→c→b→a 5+7+3+2 = 17

More tours? Fewer tours? Since a tour and its reverse (e.g., a→b→c→d→a and
a→d→c→b→a) have the same cost, only (n-1)!/2 distinct tours need to be checked;
this improvement does not change the asymptotic order of growth.

Efficiency: Θ((n-1)!)
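The exhaustive-search plan above can be sketched in Python by trying every permutation of the remaining cities. The distance matrix below encodes the four-city example (indices 0..3 stand for cities a..d); the function name is my own.

```python
from itertools import permutations

# Distance matrix for the 4-city example (0=a, 1=b, 2=c, 3=d).
DIST = [
    [0, 2, 8, 5],
    [2, 0, 3, 4],
    [8, 3, 0, 7],
    [5, 4, 7, 0],
]

def tsp_exhaustive(dist):
    """Try every tour that starts and ends at city 0; keep the cheapest."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

print(tsp_exhaustive(DIST))  # (17, (0, 1, 2, 3, 0)), i.e. a→b→c→d→a
```

The loop over `permutations(range(1, n))` is exactly the source of the Θ((n-1)!) running time.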
Example 2: Knapsack Problem
Given n items:
– weights: w1 w2 … wn
– values: v1 v2 … vn
– a knapsack of capacity W
Find most valuable subset of the items that fit into the knapsack

Example: Knapsack capacity W=16

item  weight  value
 1      2     $20
 2      5     $30
 3     10     $50
 4      5     $10
Knapsack Problem by Exhaustive Search
Subset Total weight Total value
{1} 2 $20
{2} 5 $30
{3} 10 $50
{4} 5 $10
{1,2} 7 $50
{1,3} 12 $70
{1,4} 7 $30
{2,3} 15 $80
{2,4} 10 $40
{3,4} 15 $60
{1,2,3} 17 not feasible
{1,2,4} 12 $60
{1,3,4} 17 not feasible
{2,3,4} 20 not feasible
{1,2,3,4} 22 not feasible
Efficiency: Ω(2^n), since all 2^n subsets must be generated
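The subset table above can be generated programmatically. A minimal Python sketch of the exhaustive approach, using the example's data (the function name is my own):

```python
from itertools import combinations

weights = [2, 5, 10, 5]      # items 1..4 from the example
values  = [20, 30, 50, 10]
W = 16

def knapsack_exhaustive(weights, values, W):
    """Examine all 2^n subsets; keep the most valuable feasible one."""
    n = len(weights)
    best_value, best_subset = 0, ()
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            w = sum(weights[i] for i in subset)
            if w <= W:                       # disqualify infeasible subsets
                v = sum(values[i] for i in subset)
                if v > best_value:
                    best_value, best_subset = v, subset
    return best_value, best_subset

print(knapsack_exhaustive(weights, values, W))  # (80, (1, 2)), i.e. items {2, 3}
```

The nested loops enumerate every subset once, which is why the running time is in Ω(2^n).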
Example 3: The Assignment Problem
There are n people who need to be assigned to n jobs, one person per job. The cost of
assigning person i to job j is C[i,j]. Find an assignment that minimizes the total cost.

Job 1 Job 2 Job 3 Job 4


Person 1 9 2 7 8
Person 2 6 4 3 7
Person 3 5 8 1 8
Person 4 7 6 9 4

Algorithmic Plan: Generate all legitimate assignments, compute their costs, and
select the cheapest one.
How many assignments are there? n! (here 4! = 24).
Pose the problem as one about a cost matrix:
Assignment Problem by Exhaustive Search
9 2 7 8
6 4 3 7
5 8 1 8
7 6 9 4

Assignment (col.#s) Total Cost


1, 2, 3, 4 9+4+1+4=18
1, 2, 4, 3 9+4+8+9=30
1, 3, 2, 4 9+3+8+4=24
1, 3, 4, 2 9+3+8+6=26
1, 4, 2, 3 9+7+8+9=33
1, 4, 3, 2 9+7+1+6=23
etc.

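Since each assignment corresponds to a permutation of the job columns, the plan above can be sketched by iterating over all n! permutations (the function name is my own):

```python
from itertools import permutations

# Cost matrix from the example: C[i][j] = cost of person i doing job j.
C = [
    [9, 2, 7, 8],
    [6, 4, 3, 7],
    [5, 8, 1, 8],
    [7, 6, 9, 4],
]

def assignment_exhaustive(C):
    """Try all n! person-to-job assignments; keep the cheapest."""
    n = len(C)
    best_cost, best_assign = float("inf"), None
    for perm in permutations(range(n)):      # person i gets job perm[i]
        cost = sum(C[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best_cost, best_assign = cost, perm
    return best_cost, best_assign

print(assignment_exhaustive(C))  # (13, (1, 0, 2, 3)): jobs 2, 1, 3, 4
```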
Final Comments on Exhaustive Search

• Exhaustive-search algorithms run in a realistic amount of time only on very small
instances

• In some cases, there are much better alternatives!


– Euler circuits
– shortest paths
– minimum spanning tree
– assignment problem

• In many cases, exhaustive search or a variation of it is the only known way to get an
exact solution
Divide-and-Conquer

The best-known algorithm design strategy:


1. A problem is divided into several subproblems of the same type, ideally of about equal size.
2. The subproblems are solved (typically recursively, though sometimes a different algorithm is
employed, especially when subproblems become small enough).
3. If necessary, the solutions to the subproblems are combined to get a solution to the original
problem.
Divide-and-Conquer Technique

[Diagram: a problem of size n (an instance) is divided into subproblem 1 and
subproblem 2, each of size n/2; solving them yields a solution to each subproblem,
and combining these gives a solution to the original problem.]

It generally leads to a recursive algorithm!
Divide-and-Conquer Examples

• Sorting: mergesort and quicksort

• Binary tree traversals

• Binary search

• Multiplication of large integers

• Matrix multiplication: Strassen’s algorithm

• Closest-pair of points algorithms


• In the most typical case of divide-and-conquer a problem’s
instance of size n is divided into 2 instances of size n/2.
• More generally, an instance of size n can be divided into b
instances of size n/b, with a of them needing to be solved.
(Here, a and b are constants; a ≥ 1 and b > 1.)
• Assuming that size n is a power of b to simplify our analysis, we get the following
general divide-and-conquer recurrence for the running time:

T(n) = aT(n/b) + f(n)

where f(n) is a function that accounts for the time spent on dividing an instance of
size n into instances of size n/b and combining their solutions.

Master Theorem: If f(n) ∈ Θ(n^d), d ≥ 0, then
– if a < b^d, T(n) ∈ Θ(n^d)
– if a = b^d, T(n) ∈ Θ(n^d log n)
– if a > b^d, T(n) ∈ Θ(n^(log_b a))

Note: The same results hold with O instead of Θ.

Examples:
T(n) = 4T(n/2) + n   ⇒ T(n) ∈ Θ(n^2)        (a = 4 > b^d = 2^1)
T(n) = 4T(n/2) + n^2 ⇒ T(n) ∈ Θ(n^2 log n)  (a = 4 = 2^2)
T(n) = 4T(n/2) + n^3 ⇒ T(n) ∈ Θ(n^3)        (a = 4 < 2^3)
Mergesort
• Split array A[0..n-1] into about equal halves and make copies of each
half in arrays B and C
• Sort arrays B and C recursively
• Merge sorted arrays B and C into array A as follows:
– Repeat the following until no elements remain in one of the
arrays:
•compare the first elements in the remaining unprocessed
portions of the arrays
•copy the smaller of the two into A, while incrementing the index
indicating the unprocessed portion of that array
– Once all elements in one of the arrays are processed, copy the
remaining unprocessed elements from the other array into A.
Pseudocode of Mergesort
Pseudocode of Merge
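The pseudocode for Mergesort and Merge appeared as figures in the original slides and did not survive extraction. A minimal Python sketch following the three-step description above (function and variable names are my own):

```python
def mergesort(A):
    """Top-down mergesort: split, sort each half recursively, merge."""
    if len(A) <= 1:
        return
    mid = len(A) // 2
    B, C = A[:mid], A[mid:]      # copy the two halves
    mergesort(B)
    mergesort(C)
    merge(B, C, A)

def merge(B, C, A):
    """Merge sorted arrays B and C back into A."""
    i = j = k = 0
    while i < len(B) and j < len(C):
        # Copy the smaller of the two front elements into A.
        if B[i] <= C[j]:
            A[k] = B[i]; i += 1
        else:
            A[k] = C[j]; j += 1
        k += 1
    # Copy the remaining unprocessed elements of the non-empty array.
    A[k:] = B[i:] if i < len(B) else C[j:]

a = [8, 3, 2, 9, 7, 1, 5, 4]
mergesort(a)
print(a)  # [1, 2, 3, 4, 5, 7, 8, 9]
```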
Mergesort Example
Analysis of Mergesort

• All cases have same efficiency: Θ(n log n)

T(n) = 2T(n/2) + Θ(n), T(1) = 0

• Space requirement: Θ(n) (not in-place)


• An in-place algorithm is one that does not need extra space: it produces its output
in the same memory that holds the input, transforming the input 'in place'. A small,
constant amount of extra space for variables is allowed.

• Can be implemented without recursion (bottom-up)


Quicksort

• Select a pivot (partitioning element) – here, the first element


• Rearrange the list so that all the elements in the first s positions are smaller than or
equal to the pivot and all the elements in the remaining n-s positions are larger than or
equal to the pivot (see next slide for an algorithm)

p | all A[i] ≤ p | all A[i] ≥ p

• Exchange the pivot with the last element in the first (i.e., ≤) subarray — the pivot is now
in its final position
• Sort the two subarrays recursively
Partitioning Algorithm

Quicksort Example
0 1 2 3 4 5 6 7
--------------------------------------------------------------------------------------------------------------------------------------
i j
5 3 1 9 8 2 4 7

i j
5 3 1 9 8 2 4 7

i j
5 3 1 4 8 2 9 7

i j
5 3 1 4 8 2 9 7

i j
5 3 1 4 2 8 9 7

j i
5 3 1 4 2 8 9 7

2 3 1 4 5 8 9 7

• The number of key comparisons made before a partition is achieved is n + 1 if
the scanning indices cross over and n if they coincide.
• If all the splits happen in the middle of the corresponding subarrays, we have the
best case. The number of key comparisons in the best case satisfies the recurrence
C_best(n) = 2 C_best(n/2) + n for n > 1, C_best(1) = 0, whose solution is in Θ(n log n).
• In the worst case, all the splits are skewed to the extreme: one of the two
subarrays is empty, and the size of the other is just 1 less than the size of
the subarray being partitioned.
• This unfortunate situation happens, in particular, for increasing arrays, i.e., for
inputs on which the problem is already solved.
• The total number of key comparisons made is then:
C_worst(n) = (n + 1) + n + ... + 3 = (n + 1)(n + 2)/2 - 3 ∈ Θ(n^2)
Analysis of Quicksort

• Best case (split in the middle): Θ(n log n)
• Worst case (already sorted array!): Θ(n^2), from T(n) = T(n-1) + Θ(n)
• Average case (random arrays): Θ(n log n)
Multiplication of Large Integers

• Some applications, notably modern cryptography, require manipulation of integers that are over 100 decimal digits
long. Since such integers are too long to fit in a single word of a modern computer, they require special treatment.
• This practical need supports investigations of algorithms for efficient manipulation of large integers.
• Obviously, if we use the conventional pen-and-pencil algorithm for multiplying two n-digit integers, each of the n
digits of the first number is multiplied by each of the n digits of the second number, for a total of n^2 digit
multiplications.
• (If one of the numbers has fewer digits than the other, we can pad the shorter number with leading zeros to
equalize their lengths.)
• Though it might appear impossible to design an algorithm with fewer than n^2 digit multiplications,
this turns out not to be the case.
• The miracle of divide-and-conquer comes to the rescue to accomplish this feat.
• To demonstrate the basic idea of the algorithm, let us start with a case of two-digit
integers, say, 23 and 14. These numbers can be represented as 23 = 2 · 10 + 3 and
14 = 1 · 10 + 4. Then
23 * 14 = (2 * 1) 10^2 + (2 * 4 + 3 * 1) 10 + (3 * 4),
and the middle term can be computed as (2 + 3) * (1 + 4) - (2 * 1) - (3 * 4), reusing the
two products already needed for the outer terms, so only three single-digit
multiplications are required instead of four.
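Applied recursively to halves of n-digit numbers, this trick is Karatsuba's divide-and-conquer multiplication. A minimal sketch for nonnegative integers (the function name is my own):

```python
def karatsuba(x, y):
    """Divide-and-conquer multiplication using 3 recursive products, not 4."""
    if x < 10 or y < 10:                 # single-digit base case
        return x * y
    n = max(len(str(x)), len(str(y)))
    half = n // 2
    p = 10 ** half
    a, b = divmod(x, p)                  # x = a * 10^half + b
    c, d = divmod(y, p)                  # y = c * 10^half + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a+b)(c+d) - ac - bd = ad + bc, saving one multiplication
    mid = karatsuba(a + b, c + d) - ac - bd
    return ac * 10 ** (2 * half) + mid * p + bd

print(karatsuba(23, 14))  # 322
```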
Conventional Matrix Multiplication

• Brute-force (definition-based) algorithm for 2x2 matrices:

c00 c01   a00 a01   b00 b01
        =         *
c10 c11   a10 a11   b10 b11

          a00*b00 + a01*b10   a00*b01 + a01*b11
        =
          a10*b00 + a11*b10   a10*b01 + a11*b11

• Efficiency class in general: Θ(n^3)
• Conventional 2x2 multiplication: 8 multiplications, 4 additions
• Strassen's 2x2 multiplication: 7 multiplications, 18 additions
Strassen’s Matrix Multiplication

Strassen observed that the product of two matrices can be computed in


general as follows:

C00 C01 A00 A01 B00 B01


= *
C10 C11 A10 A11 B10 B11

  M1 + M4 - M5 + M7    M3 + M5
=
  M2 + M4              M1 + M3 - M2 + M6
Formulas for Strassen’s Algorithm
M1 = (A00 + A11) * (B00 + B11)

M2 = (A10 + A11) * B00

M3 = A00 * (B01 - B11)

M4 = A11 * (B10 - B00)

M5 = (A00 + A01) * B11

M6 = (A10 - A00) * (B00 + B01)

M7 = (A01 - A11) * (B10 + B11)


Analysis of Strassen’s Algorithm
If n is not a power of 2, matrices can be padded with zeros.

Number of multiplications:
M(n) = 7M(n/2) for n > 1, M(1) = 1
Solution: M(n) = 7^(log_2 n) = n^(log_2 7) ≈ n^2.807, vs. n^3 for the brute-force algorithm.

Algorithms with better asymptotic efficiency are known but they are even
more complex and not used in practice.
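The seven formulas above translate directly into a recursive implementation. A pure-Python sketch for n x n matrices with n a power of 2 (helper names are my own; no attempt at efficiency):

```python
def madd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def msub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def split(X):
    """Split X into its four n/2 x n/2 quadrants."""
    k = len(X) // 2
    return ([r[:k] for r in X[:k]], [r[k:] for r in X[:k]],
            [r[:k] for r in X[k:]], [r[k:] for r in X[k:]])

def strassen(A, B):
    """Strassen's algorithm: 7 recursive products per level."""
    if len(A) == 1:
        return [[A[0][0] * B[0][0]]]
    A00, A01, A10, A11 = split(A)
    B00, B01, B10, B11 = split(B)
    M1 = strassen(madd(A00, A11), madd(B00, B11))
    M2 = strassen(madd(A10, A11), B00)
    M3 = strassen(A00, msub(B01, B11))
    M4 = strassen(A11, msub(B10, B00))
    M5 = strassen(madd(A00, A01), B11)
    M6 = strassen(msub(A10, A00), madd(B00, B01))
    M7 = strassen(msub(A01, A11), madd(B10, B11))
    C00 = madd(msub(madd(M1, M4), M5), M7)
    C01 = madd(M3, M5)
    C10 = madd(M2, M4)
    C11 = madd(msub(madd(M1, M3), M2), M6)
    # Reassemble the four quadrants into one matrix.
    return [r0 + r1 for r0, r1 in zip(C00, C01)] + \
           [r0 + r1 for r0, r1 in zip(C10, C11)]

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```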
Decrease and Conquer

• The decrease-and-conquer technique is based on exploiting the relationship


between a solution to a given instance of a problem and a solution to its smaller
instance.
• Once such a relationship is established, it can be exploited either top down or
bottom up.
There are three major variations of decrease-and-conquer:
• decrease by a constant
• decrease by a constant factor
• variable size decrease
• In the decrease-by-a-constant variation, the size of an instance is reduced by the
same constant on each iteration of the algorithm.
• Typically, this constant is equal to one, although other constant-size reductions do
happen occasionally.
• Consider, as an example, the exponentiation problem of computing a^n where a ≠ 0
and n is a nonnegative integer.
• The relationship between a solution to an instance of size n and an instance of size
n − 1 is given by the obvious formula a^n = a^(n-1) · a.
• So the function f(n) = a^n can be computed either "top down" by using its recursive
definition or "bottom up" by multiplying 1 by a n times.
• The decrease-by-a-constant-factor technique suggests reducing a problem instance
by the same constant factor on each iteration of the algorithm.
• In most applications, this constant factor is equal to two.
• For an example, let us revisit the exponentiation problem. If the instance of size n is
to compute a^n, the instance of half its size is to compute a^(n/2), with the obvious
relationship between the two: a^n = (a^(n/2))^2. But since we consider here instances
with integer exponents only, this does not work for odd n.
• If n is odd, we have to compute a^(n-1) by using the rule for even-valued exponents and
then multiply the result by a.
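The even/odd rule above can be sketched directly; this is the decrease-by-a-constant-factor version of exponentiation, using O(log n) multiplications (the function name is my own):

```python
def power(a, n):
    """Compute a^n by decrease-by-a-constant-factor (factor 2)."""
    if n == 0:
        return 1
    if n % 2 == 0:
        half = power(a, n // 2)   # even n: a^n = (a^(n/2))^2
        return half * half
    return power(a, n - 1) * a    # odd n: a^n = a^(n-1) * a

print(power(2, 10))  # 1024
```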
• In the variable-size-decrease variety of decrease-and-conquer, the
size-reduction pattern varies from one iteration of an algorithm to
another.
• Euclid's algorithm for computing the greatest common divisor provides
a good example of such a situation. Recall that this algorithm is based
on the formula
gcd(m, n) = gcd(n, m mod n).
Example: gcd(248, 60) = gcd(60, 8) = gcd(8, 4) = gcd(4, 0) = 4
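Euclid's formula gives a two-line implementation; note how the size reduction (m mod n) varies from step to step, as in the example above:

```python
def gcd(m, n):
    """Euclid's algorithm: gcd(m, n) = gcd(n, m mod n), until n = 0."""
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(248, 60))  # 4
```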
Insertion Sort

• An application of the decrease-by-one technique to sorting an array A[0..n − 1].


• Following the technique’s idea, we assume that the smaller problem of sorting the
array A[0..n − 2] has already been solved to give us a sorted array of size n − 1:
A[0] ≤ ... ≤ A[n − 2].
• How can we take advantage of this solution to the smaller problem to get a solution
to the original problem by taking into account the element A[n − 1]?
• Obviously, all we need is to find an appropriate position for A[n − 1] among the
sorted elements and insert it there.
• This is usually done by scanning the sorted subarray from right to left until the first
element smaller than or equal to A[n − 1] is encountered to insert A[n − 1] right after
that element.
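The right-to-left scan described above can be sketched as follows (variable names follow the usual textbook presentation):

```python
def insertion_sort(A):
    """Decrease-by-one sorting: insert A[i] into the sorted prefix A[0..i-1]."""
    for i in range(1, len(A)):
        v = A[i]
        j = i - 1
        # Scan the sorted part right to left, shifting larger elements up,
        # until an element <= v is found.
        while j >= 0 and A[j] > v:
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = v              # insert v right after that element

a = [89, 45, 68, 90, 29, 34, 17]
insertion_sort(a)
print(a)  # [17, 29, 34, 45, 68, 89, 90]
```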
Source removal method to solve Topological sorting

• The algorithm is based on a direct implementation of the decrease-(by


one)-and-conquer technique: repeatedly, identify in a remaining digraph a source,
which is a vertex with no incoming edges, and delete it along with all the edges
outgoing from it.
• If there are several sources, break the tie arbitrarily. If there are none, stop because
the problem cannot be solved—see Problem 6a in this section’s exercises.
• The order in which the vertices are deleted yields a solution to the topological
sorting problem.
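The source-removal algorithm can be sketched with indegree counting, so that deleting a source is O(1) per outgoing edge; the example DAG below is my own:

```python
from collections import deque

def topo_source_removal(n, edges):
    """Repeatedly delete a source (vertex with no incoming edges)."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    sources = deque(v for v in range(n) if indeg[v] == 0)
    order = []
    while sources:
        u = sources.popleft()
        order.append(u)
        for w in adj[u]:          # "delete" u's outgoing edges
            indeg[w] -= 1
            if indeg[w] == 0:     # w has become a source
                sources.append(w)
    if len(order) < n:            # no source remained: the digraph has a cycle
        raise ValueError("digraph has a cycle; no topological order exists")
    return order

print(topo_source_removal(5, [(0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]))
# [0, 1, 2, 3, 4]
```

The deletion order is the topological order; ties between several sources are broken here by queue order.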
DFS Based algorithm for Topological sorting

• The DFS based algorithm is a simple application of depth-first search: perform a DFS
traversal and note the order in which vertices become dead-ends (i.e., popped off
the traversal stack).
• Reversing this order yields a solution to the topological sorting problem, provided, of
course, no back edge has been encountered during the traversal.
• If a back edge has been encountered, the digraph is not a DAG, and topological
sorting of its vertices is impossible
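The DFS-based method can be sketched with vertex coloring to detect back edges; the example digraph is my own:

```python
def topo_dfs(n, edges):
    """DFS-based topological sort: reverse the order in which vertices
    become dead ends (are popped off the traversal stack)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = [WHITE] * n
    finished = []

    def dfs(u):
        color[u] = GRAY
        for w in adj[u]:
            if color[w] == GRAY:      # back edge: the digraph is not a DAG
                raise ValueError("back edge found; topological sort impossible")
            if color[w] == WHITE:
                dfs(w)
        color[u] = BLACK
        finished.append(u)            # u is now a dead end

    for u in range(n):
        if color[u] == WHITE:
            dfs(u)
    return finished[::-1]             # reverse the pop-off order

print(topo_dfs(5, [(0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]))
# [1, 0, 2, 3, 4] (one valid topological order)
```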
