
CA4CRT10 – Design and Analysis of Algorithm (Core)

Unit I
Introduction
The word algorithm comes from the name of a Persian author, Abu Ja'far Mohammed ibn Musa al-Khowarizmi (c. 825 A.D.), who wrote a textbook on mathematics. This word has taken on a special significance in computer science, where "algorithm" has come to refer to a method that can be used by a computer for the solution of a problem. This is what distinguishes an algorithm from words such as process, technique, or method.

Definition of Algorithm
Definition 1.1 [Algorithm]: An algorithm is a finite set of instructions that, if followed,
accomplishes a particular task. In addition, all algorithms must satisfy the following criteria:

1. Input. Zero or more quantities are externally supplied.


2. Output. At least one quantity is produced.
3. Definiteness. Each instruction is clear and unambiguous.
4. Finiteness. If we trace out the instructions of an algorithm, then for all cases, the algorithm
terminates after a finite number of steps.
5. Effectiveness. Every instruction must be very basic so that it can be carried out, in principle,
by a person using only pencil and paper. It is not enough that each operation be definite as in
criterion 3; it also must be feasible.

An algorithm is composed of a finite set of steps, each of which may require one or more
operations. The possibility of a computer carrying out these operations necessitates that certain
constraints be placed on the type of operations an algorithm can include.

Criteria 1 and 2 require that an algorithm produce one or more outputs and have zero or more inputs
that are externally supplied. According to criterion 3, each operation must be definite, meaning that
it must be perfectly clear what should be done. Directions such as "add 6 or 7 to x" or "compute
5/0" are not permitted because it is not clear which of the two possibilities should be done or what
the result is.

The fourth criterion requires that algorithms terminate after a finite number of
operations. A related consideration is that the time for termination should be reasonably short. For
example, an algorithm could be devised that decides whether any given position in the game of
chess is a winning position. The algorithm works by examining all possible moves and counter
moves that could be made from the starting position. The difficulty with this algorithm is that even
using the most modern computers, it may take billions of years to make the decision. We must be
very concerned with analyzing the efficiency of each of our algorithms.

Criterion 5 requires that each operation be effective; each step must be such that it can, at least in
principle, be done by a person using pencil and paper in a finite amount of time. Performing
arithmetic on integers is an example of an effective operation, but arithmetic with real numbers is
not, since some values may be expressible only by infinitely long decimal expansion. Adding two
such numbers would violate the effectiveness property.

Algorithms that are definite and effective are also called computational procedures. One important
example of a computational procedure is the operating system of a digital computer. This procedure
is designed to control the execution of jobs in such a way that when no jobs are available, it does
not terminate but continues in a waiting state until a new job is entered. Though computational
procedures include important examples such as this one, we restrict our study to computational
procedures that always terminate.

To help us achieve the criterion of definiteness, algorithms are written in a programming language.
Such languages are designed so that each legitimate sentence has a unique meaning. A program is
the expression of an algorithm in a programming language. Sometimes words such as procedure,
function, and subroutine are used synonymously for program.

Algorithm Design Techniques

 Divide and Conquer Method

In the divide and conquer approach, the problem is divided into several small sub-problems. Then
the sub-problems are solved recursively and combined to get the solution of the original problem.
The divide and conquer approach involves the following steps at each level −
 Divide − The original problem is divided into sub-problems.

 Conquer − The sub-problems are solved recursively.

 Combine − The solutions of the sub-problems are combined together to get the solution of
the original problem.

The divide and conquer approach is applied in the following algorithms −

 Binary search

 Quick sort

 Merge sort

 Integer multiplication

 Matrix inversion

 Matrix multiplication

 Greedy Method
In a greedy algorithm, the best available choice is made at each moment. A greedy
algorithm is very easy to apply to complex problems: at each step it decides which option
provides the best solution for the next step. The algorithm is called greedy because, once it
adopts the optimal solution to the smaller instance, it does not reconsider the problem as a
whole. Once a choice is made, the greedy algorithm never revisits it.
A greedy algorithm often works recursively, creating a group of objects from the smallest possible
component parts. Recursion is a procedure for solving a problem in which the solution to a specific
problem depends on the solution of a smaller instance of that problem.

 Dynamic Programming

Dynamic programming is an optimization technique that divides the problem into smaller sub-problems
and, after solving each sub-problem, combines their solutions to obtain the ultimate
solution. Unlike the divide-and-conquer method, dynamic programming reuses the
solutions to sub-problems many times.
Computing the Fibonacci series while storing the solutions to its sub-problems is a classic example of dynamic programming.
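As a minimal C++ sketch of this idea (added here for illustration; the table size and the use of 0 as a "not yet computed" sentinel are assumptions), the Fibonacci computation can be memoized so that each sub-problem is solved only once and its solution reused:

#include <iostream>
using namespace std;

long long memo[100]; // memo[i] caches fib(i); 0 means "not yet computed" (assumes n < 100)

long long fib(int n)
{
    if (n <= 1) return n;              // base cases: fib(0) = 0, fib(1) = 1
    if (memo[n] != 0) return memo[n];  // reuse a previously computed solution
    memo[n] = fib(n-1) + fib(n-2);     // solve the sub-problems once and remember
    return memo[n];
}

int main()
{
    cout << fib(40) << endl; // prints 102334155
    return 0;
}

Without the memo table, the same recursion would recompute each sub-problem exponentially many times.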

 Backtracking Algorithm

Backtracking is an optimization technique for solving combinatorial problems. It is applied to both
programmatic and real-life problems. The eight queens problem, Sudoku puzzles, and finding a way
through a maze are popular examples where the backtracking algorithm is used.
In backtracking, we start with a partial solution that satisfies all the required conditions. Then
we move to the next level, and if that level does not produce a satisfactory solution, we return one
level back and start with a new option.

 Branch and Bound

A branch and bound algorithm is an optimization technique for obtaining an optimal solution to a
problem. It searches the entire solution space for the best solution to a given problem. Bounds
on the function to be optimized are combined with the value of the best solution found so far,
which allows the algorithm to rule out parts of the solution space completely.
The purpose of a branch and bound search is to maintain the lowest-cost path to a target. Once a
solution is found, it can keep improving that solution. Branch and bound search is commonly
implemented as depth-bounded search or depth-first search.

 Linear Programming

Linear programming describes a wide class of optimization problems in which both the optimization
criterion and the constraints are linear functions. It is a technique for obtaining the best outcome,
such as maximum profit, shortest path, or lowest cost.
In linear programming we have a set of variables, and we have to assign real values to them so as to
satisfy a set of linear constraints and to maximize or minimize a given linear objective function.

Algorithm Analysis
The study of algorithms includes many important and active areas of research. There are four
distinct areas of study and they are specified below:
1. How to devise algorithms - Creating an algorithm is an art which may never be fully
automated. A major goal is to study various design techniques that have proven to be useful in that
they have often yielded good algorithms. By mastering these design strategies, it becomes easier
to devise new and useful algorithms. Dynamic programming is one such technique. Some of the
techniques are especially useful in fields other than computer science, such as operations research
and electrical engineering. All of the approaches we consider have applications in a variety of areas
including computer science. Some other important design techniques are linear, nonlinear, and
integer programming.

2. How to validate algorithms – Once an algorithm is devised, it is necessary to show that it


computes the correct answer for all possible legal inputs. We refer to this process as algorithm
validation. The purpose of the validation is to assure us that this algorithm will work correctly
independently of the issues concerning the programming language it will eventually be written in.
Once the validity of the method has been shown, a program can be written and a second phase
begins. This phase is referred to as program proving or sometimes as program verification.

A proof of correctness requires that the solution be stated in two forms. One form is usually as a
program which is annotated by a set of assertions about the input and output variables of the
program. These assertions are often expressed in the predicate calculus. The second form is called a
specification, and this may also be expressed in the predicate calculus. A proof consists of showing
that these two forms are equivalent in that for every given legal input, they describe the same
output. A complete proof of program correctness requires that each statement of the programming
language be precisely defined and all basic operations be proved correct. All these details may
cause a proof to be very much longer than the program.

3. How to analyze algorithms - This field of study is called analysis of algorithms. As an


algorithm is executed, it uses the computer's central processing unit (CPU) to perform operations
and its memory (both immediate and auxiliary) to hold the program and data. Analysis of
algorithms or performance analysis refers to the task of determining how much computing time and
storage an algorithm requires. This is a challenging area which sometimes requires great
mathematical skill. An important result of this study is that it allows us to make quantitative judgments
about the value of one algorithm over another. Another result is that it allows us to predict whether the
software will meet any efficiency constraints that exist. Questions such as how well an
algorithm performs in the best case, in the worst case, or on the average are typical.

4. How to test a program – Testing a program consists of two phases: debugging and profiling
(or performance measurement). Debugging is the process of executing programs on sample data sets
to determine whether faulty results occur and, if so, to correct them. However, as E. Dijkstra has
pointed out, "debugging can only point to the presence of errors, but not to their absence". In cases
in which we cannot verify the correctness of output on sample data, the following strategy can be
employed:

Let more than one programmer develop programs for the same problem, and compare the outputs
produced by these programs. If the outputs match, then there is a good chance that they are correct.
A proof of correctness is much more valuable than a thousand tests (if that proof is correct), since it
guarantees that the program will work correctly for all possible inputs. Profiling or performance
measurement is the process of executing a correct program on data sets and measuring the time and
space it takes to compute the results. These timing figures are useful in that they may confirm a
previously done analysis and point out logical places to perform useful optimization.
Performance Analysis
There are many criteria upon which we can judge an algorithm. For instance:

1. Does it do what we want it to do?


2. Does it work correctly according to the original specification of the task?
3. Is there documentation that describes how to use it and how it works?
4. Are procedures created in such a way that they perform logical sub-functions?
5. Is the code readable?

These criteria are all vitally important when it comes to writing software, most especially for large
systems. There are other criteria for judging algorithms that have a more direct relationship to
performance. These have to do with their computing time and storage requirements.

Definition 1.2 [Space/Time complexity]: The space complexity of an algorithm is the amount of
memory it needs to run to completion. The time complexity of an algorithm is the amount of
computer time it needs to run to completion.

Performance evaluation can be loosely divided into two major phases: (1) a priori estimates and (2)
a posteriori testing. We refer to these as performance analysis and performance measurement,
respectively.

 Space Complexity

Algorithm abc computes a + b + b*c + (a + b - c)/(a + b) + 4.0. Algorithm Sum computes the sum
a[1] + a[2] + ... + a[n] (that is, the sum of a[i] for i = 1 to n) iteratively, where the a[i]'s are
real numbers; and RSum is a recursive algorithm that computes the same sum.

float abc (float a, float b, float c)
{
    return a + b + b*c + (a + b - c)/(a + b) + 4.0;
}

float Sum(float a[], int n)


{
float s = 0.0;
for(int i=1; i<=n; i++)
s += a[i];
return s;
}

Recursive function

float RSum (float a[], int n)
{
    if (n <= 0) return 0.0;
    else return RSum(a, n-1) + a[n];
}

The space needed by each of these algorithms is seen to be the sum of the following components:
1. A fixed part that is independent of the characteristics (e.g., number, size) of the inputs and
outputs. This part typically includes the instruction space (i.e., space for the code), space for
simple variables and fixed-size component variables (also called aggregates), space for constants,
and so on.

2. A variable part that consists of the space needed by component variables whose size is
dependent on the particular problem instance being solved, the space needed by referenced
variables (to the extent that this depends on instance characteristics), and the recursion stack
space (insofar as this space depends on the instance characteristics).

The space requirement S(P) of any algorithm P may therefore be written as S(P) = c + SP(instance
characteristics), where c is a constant.

When analyzing the space complexity of an algorithm, we concentrate solely on estimating SP
(instance characteristics). For any given problem we need first to determine which instance
characteristics to use to measure the space requirements. This is very problem specific, and we
resort to examples to illustrate the various possibilities. Generally speaking, our choices are limited
to quantities related to the number and magnitude of the inputs to and outputs from the algorithm.
At times, more complex measures of the interrelationships among the data items are used.
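As a worked illustration (one plausible accounting, assuming one word of memory per simple variable): for abc, the space needed is independent of the instance characteristics, so Sabc = 0 and S(abc) is just a constant. For Sum, the instance characteristic is n; the array a[] needs n words and the variables n, i, and s one word each, so SSum(n) >= n + 3. For RSum, each of the n + 1 nested calls needs stack space for the return address and the parameters a and n, so SRSum(n) >= 3(n + 1).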

 Time Complexity
The time T(P) taken by a program P is the sum of the compile time and the run (or execution) time.
The compile time does not depend on the instance characteristics. Also, we may assume that a
compiled program will be run several times without recompilation. Consequently, we concern
ourselves with just the run time of a program. This run time is denoted by tP (instance
characteristics).

Because many of the factors tP depends on are not known at the time a program is conceived, it is
reasonable to attempt only to estimate tP. If we knew the characteristics of the compiler to be used,
we could proceed to determine the number of additions, subtractions, multiplications, divisions,
compares, loads, stores, and so on, that would be made by the code for P. So, we could obtain an
expression for tP(n) of the form

tP(n) = caADD(n) + csSUB(n) + cmMUL(n)+ cdDIV(n) + ...

where n denotes the instance characteristics, and ca, cs, cm, cd, and so on, respectively, denote the
time needed for an addition, subtraction, multiplication, division, and so on, and ADD, SUB, MUL,
DIV, and so on, are functions whose values are the numbers of additions, subtractions,
multiplications, divisions, and so on, that are performed when the code for P is used on an instance
with characteristic n.
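In practice this count can be sketched by instrumenting the program with a global counter that is incremented once per simple operation. The sketch below (an illustration added here, not part of the original notes) follows one common step-count convention:

int count = 0; // global step counter

float Sum (float a[], int n)
{
    float s = 0.0; count++;       // one step for the assignment
    for (int i = 1; i <= n; i++)
    {
        count++;                  // one step for each loop-control test
        s += a[i]; count++;       // one step for each addition
    }
    count++;                      // one step for the final (failing) loop test
    count++;                      // one step for the return
    return s;
}

Under this convention tSum(n) = 2n + 3: one initial assignment, n + 1 loop tests, n additions, and one return.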

So, the time complexity is the number of operations an algorithm performs to complete its task
(considering that each operation takes the same amount of time). The algorithm that performs the
task in the smallest number of operations is considered the most efficient one in terms of the time
complexity. However, the space and time complexity are also affected by factors such as the
operating system and hardware.
Example:

Compare two different algorithms that solve a particular problem: searching for an element in an
array sorted in ascending order. Two algorithms are used to solve this problem:

1. Linear Search.
2. Binary Search.

Suppose the array contains ten elements, and we want to find the number 10 in the array.

const int array [] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};

const int search_digit = 10;

Linear search algorithm will compare each element of the array to the search_digit. When it finds
the search_digit in the array, it will return true. Now let’s count the number of operations it
performs. Here, the answer is 10 (since it compares every element of the array). So, Linear search
uses ten operations to find the given element. This is the maximum number of operations for this
array; in the case of linear search, this is also known as the worst case of the algorithm.

Binary search algorithm first compares search_digit with the middle element of the array, that is
5. Now since 5 is less than 10, then we will start looking for the search_digit in the array elements
greater than 5, in the same way until we get the desired element 10.

Applying this logic, now count the number of operations binary search took to find the desired
element: approximately four. This was the worst case for binary search. It shows
that there is a logarithmic relation between the number of operations performed and the total size of the
array.

Number of operations = log2(10) ≈ 4.

We can generalize this result for binary search: for an array of size n, the number of operations performed
by binary search is log2(n).

The Big O Notation


For an array of size n, linear search will perform n operations to complete the search. On the other hand, binary
search performs log(n) operations (both for their worst cases). We can represent this as a graph
(x-axis: number of elements, y-axis: number of operations).
It is quite clear from the figure that the rate at which the complexity increases for linear search is
much faster than that for binary search. When we analyse an algorithm, we use a notation to
represent its time complexity, and that notation is Big O notation.

For example: the time complexity of linear search can be represented as O(n), and that of binary
search as O(log n) (where n and log(n) are the numbers of operations). The time complexity, in Big O
notation, of some popular algorithms is listed below:

1. Binary Search: O(log n)


2. Linear Search: O(n)
3. Quick Sort: O(n * log n)
4. Selection Sort: O(n * n)
5. Travelling Salesperson: O(n!)

Best, Worst, and Average-Case Complexity


Using the RAM model of computation, we can count how many steps our algorithm will take on
any given input instance by simply executing it on the given input. However, to really understand
how good or bad an algorithm is, we must know how it works over all instances.

To understand the notions of the best, worst, and average-case complexity, one must think about
running an algorithm on all possible instances of data that can be fed to it. For the problem of
sorting, the set of possible input instances consists of all the possible arrangements of all the
possible numbers of keys. We can represent every input instance as a point on a graph, where the x-
axis is the size of the problem (for sorting, the number of items to sort) and the y-axis is the number
of steps taken by the algorithm on this instance. Here we assume, quite reasonably, that it doesn't
matter what the values of the keys are, just how many of them there are and how they are ordered.
(Best, worst, and average-case complexity)

As shown in the figure, these points naturally align themselves into columns, because only
integers represent possible input sizes. Once we have these points, we can define three
different functions over them:

 The worst-case complexity of the algorithm is the function defined by the maximum
number of steps taken on any instance of size n. It represents the curve passing
through the highest point of each column.
 The best-case complexity of the algorithm is the function defined by the minimum
number of steps taken on any instance of size n. It represents the curve passing
through the lowest point of each column.

 Finally, the average-case complexity of the algorithm is the function defined by the
average number of steps taken on any instance of size n.

* * *
Unit II - DIVIDE-AND-CONQUER
General Method
Divide and conquer is a design strategy that is well known for breaking down efficiency
barriers. When the method applies, it often leads to a large improvement in time complexity.
Divide-and-conquer algorithms work according to the following general plan.

1. Divide: Divide the problem into a number of smaller sub-problems ideally of about the same size.

2. Conquer: The smaller sub-problems are solved, typically recursively. If the sub-problem sizes are
small enough, just solve the sub-problems in a straightforward manner.

3. Combine: If necessary, the solutions obtained for the smaller sub-problems are combined to get the solution
to the original problem.

Given a function to compute on n inputs, the divide-and-conquer strategy suggests splitting the
inputs into k distinct subsets, 1 < k ≤ n, yielding k sub-problems. These sub-problems must be
solved, and then a method must be found to combine sub-solutions into a solution of the whole. If
the sub-problems are still relatively large, then the divide-and-conquer strategy can possibly be
reapplied. Often the sub-problems resulting from a divide-and-conquer design are of the same type
as the original problem. For those cases the reapplication of the divide-and-conquer principle is
naturally expressed by a recursive algorithm.

Example : Detecting Counterfeit Coin

A bag contains 16 coins, and one of these coins may be counterfeit. Counterfeit coins are lighter than
genuine ones. The task is to determine whether the bag contains a counterfeit coin. We have a
machine that compares the weights of two sets of coins and tells which set is lighter or whether
both sets have the same weight.
We can compare the weights of coins 1 and 2. If coin 1 is lighter than coin 2, then coin 1 is
counterfeit and we are done with our task. If coin 2 is lighter than coin 1, then coin 2 is
counterfeit. If both coins have the same weight, we compare coins 3 and 4. Proceeding in this way,
we can determine whether the bag contains a counterfeit coin by making at most eight weight
comparisons. This process also identifies the counterfeit coin.
Another approach is to use the divide-and-conquer methodology. Suppose that our 16-coin
instance is considered a large instance. In step 1, we divide the original instance into two or more
smaller instances. Let us divide the 16-coin instance into two 8-coin instances by arbitrarily
selecting 8 coins for the first instance (say A) and the remaining 8 coins for the second
instance B. In step 2 we need to determine whether A or B has a counterfeit coin. For this step we
use our machine to compare the weights of the coin sets A and B. If both sets have the same
weight, a counterfeit coin is not present in the 16-coin set. If A and B have different weights, a
counterfeit coin is present and it is in the lighter set.

To be more precise, suppose we consider the divide-and-conquer strategy when it splits the input
into two sub problems of the same kind as the original problem. This splitting is typical of many
of the problems we examine here. We can write a control abstraction that mirrors the way an
algorithm based on divide-and-conquer will look. By a control abstraction we mean a procedure
whose flow of control is clear but whose primary operations are specified by other procedures
whose precise meanings are left undefined. DAndC (the algorithm below) is initially invoked as
DAndC(P), where P is the problem to be solved.

Type DAndC(P)
{
    if (Small(P))
        return S(P);
    else
    {
        divide P into smaller instances P1, P2, ..., Pk, k >= 1;
        apply DAndC to each of these sub-problems;
        return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
    }
}

Small(P) is a Boolean-valued function that determines whether the input size is small enough that
the answer can be computed without splitting. If this is so, the function S is invoked. Otherwise
the problem P is divided into smaller sub-problems. These sub-problems P1, P2, ..., Pk are solved by
recursive applications of DAndC. Combine is a function that determines the solution to P using
the solutions to the k sub-problems. If the size of P is n and the sizes of the k sub-problems are n1,
n2, ..., nk respectively, then the computing time of DAndC is described by the recurrence relation

T(n) = g(n), if n is small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n), otherwise

where T(n) is the time for DAndC on any input of size n, g(n) is the time to compute the answer
directly for small inputs, and f(n) is the time for dividing P and combining the solutions to the
sub-problems.

For divide-and-conquer-based algorithms that produce sub problems of the same type as the
original problem, it is very natural to first describe such algorithms using recursion.
The complexity of many divide-and-conquer algorithms is given by recurrences of the form

T(n) = T(1), n = 1
T(n) = a T(n/b) + f(n), n > 1

where a and b are known constants. We assume that T(1) is known and n is a power of b (i.e.,
n = b^k). One of the methods for solving any such recurrence relation is called the substitution
method.
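As a worked example of the substitution method, take a = 2, b = 2, and f(n) = n with T(1) = 1, i.e., T(n) = 2T(n/2) + n. Repeatedly substituting the recurrence into itself gives

T(n) = 2T(n/2) + n
     = 4T(n/4) + 2n
     = 8T(n/8) + 3n
     ...
     = 2^k T(1) + kn   (where n = 2^k)
     = n + n log2 n

so T(n) = O(n log n).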

Binary Search
Binary search is an efficient searching technique that works only with sorted lists. So the list must
be sorted before using the binary search method. Binary search is based on the divide-and-conquer
technique.
The method starts with looking at the middle element of the list. If it matches with the key
element, then search is complete. Otherwise, the key element may be in the first half or second
half of the list. If the key element is less than the middle element, then the search continues with
the first half of the list. If the key element is greater than the middle element, then the search
continues with the second half of the list. This process continues until the key element is found or
the search fails indicating that the key is not there in the list.
Let a[i], 1 ≤ i ≤ n, be a list of elements sorted in non-decreasing order. Consider the problem
of determining whether a given element x is present in the list. If x is present, we are to determine
a value j such that a[j] = x. If x is not in the list, then j is to be set to zero. Let P = (n, a[i], ...,
a[l], x) denote an arbitrary instance of this search problem (n is the number of elements in the list,
a[i], ..., a[l] is the list of elements, and x is the element searched for).

Divide-and-conquer can be used to solve this problem. Let Small(P) be true if n = 1. In this case,
S(P) will take the value i if x = a[i]; otherwise it will take the value 0. If P has more than one
element, it can be divided (or reduced) into a new sub-problem as follows. Pick an index q (in the
range [i, l]) and compare x with a[q]. If q is always chosen such that a[q] is the middle element (that
is, q = ⌊(n+1)/2⌋), then the resulting search algorithm is known as binary search. There are three
possibilities:

(1) x = a[q]: In this case the problem P is immediately solved.

(2) x < a[q]: In this case x has to be searched for only in the sub-list a[i], a[i+1], ..., a[q-1].
Therefore, P reduces to (q - i, a[i], ..., a[q-1], x).
(3) x > a[q]: In this case the sub-list to be searched is a[q+1], ..., a[l]. P reduces to
(l - q, a[q+1], ..., a[l], x).

Example 1:
Consider the list of elements: -4, -1, 0, 5, 10, 18, 32, 33, 98, 147, 154, 198, 250, 500. Trace the
binary search algorithm searching for the element -1.
Example 2

List of elements are,

-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151

placed in a[1:14], and simulate the steps that BinSearch goes through as it searches for different
values of x. Only the variables low, high, and mid need to be traced as we simulate the algorithm.
We try the following values for x: 151, -14, and 9, for two successful searches and one
unsuccessful search. The traces of BinSearch on these three inputs are shown below.
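Simulating the algorithm by hand gives the following traces (one row of low, high, mid per iteration):

x = 151             x = -14                  x = 9
low high mid        low high mid             low high mid
 1   14   7          1   14   7               1   14   7
 8   14  11          1    6   3               1    6   3
12   14  13          1    2   1               4    6   5
14   14  14          2    2   2              found at mid = 5
found at mid = 14    2    1  (low > high)
                    not found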

Advantages of Binary Search

The main advantage of binary search is that it is faster than sequential (linear) search, because it
takes fewer comparisons than the linear search method to determine whether the given key is in the
list.

Disadvantages of Binary Search

The disadvantage of binary search is that it can be applied only to a sorted list of elements. Binary
search may fail if the list is unsorted.

Efficiency of Binary Search

To evaluate binary search, count the number of comparisons in the best case, average case, and
worst case.
Program - Recursive Binary Search
int BinSrch ( Type a[], int i, int l, Type x)
// Given an array a[i:l] of elements in non-decreasing
// order, 1 <= i <= l, determine whether x is present, and
// if so, return j such that x == a[j]; else return 0.
{
    if (l == i)
    { // if Small(P)
        if (x == a[i]) return i;
        else return 0;
    }
    else
    { // Reduce P into a smaller sub-problem.
        int mid = (i + l)/2;
        if (x == a[mid]) return mid;
        else if (x < a[mid]) return BinSrch(a, i, mid-1, x);
        else return BinSrch(a, mid+1, l, x);
    }
}

Program – Iterative Binary Search


int BinSearch ( Type a[], int n, Type x)
// Given an array a[1:n] of elements in non-decreasing
// order, n >= 0, determine whether x is present, and
// if so, return j such that x == a[j]; else return 0.
{
    int low = 1, high = n;
    while (low <= high)
    {
        int mid = (low + high)/2;
        if (x < a[mid]) high = mid - 1;
        else if (x > a[mid]) low = mid + 1;
        else return mid;
    }
    return 0;
}
Finding the Maximum and Minimum
The problem is to find the maximum and minimum items in a set of n elements. In analyzing the
time complexity of this algorithm, concentrate on the number of element comparisons. The
justification for this is that the frequency count for other operations in this algorithm is of the
same order as that for element comparisons. More importantly, when the elements in a[1:n] are
polynomials, vectors, very large numbers, or strings of characters, the cost of an element
comparison is much higher than the cost of the other operations.

Straight forward Minimum and Maximum Algorithm

void StraightMaxMin(Type a[], int n, Type &max, Type &min)
// Set max to the maximum and min to the minimum of a[1:n].
{
    max = min = a[1];
    for (int i = 2; i <= n; i++)
    {
        if (a[i] > max) max = a[i];
        if (a[i] < min) min = a[i];
    }
}

StraightMaxMin requires 2(n - 1) element comparisons in the best, average, and worst cases. An
immediate improvement is possible by realizing that the comparison a[i] < min is necessary only
when a[i] > max is false. Hence we can replace the contents of the for loop by

if (a[i] > max) max = a[i];

else if (a[i] < min) min = a[i];

Now the best case occurs when the elements are in increasing order. The number of element
comparisons is n - 1. The worst case occurs when the elements are in decreasing order. In this case
the number of element comparisons is 2(n - 1). The average number of element comparisons is
less than 2(n - 1). On the average, a[i] is greater than max half the time, and so the average
number of comparisons is 3n/2 - 1.

A divide-and-conquer algorithm for this problem would proceed as follows: Let P = (n,a[i], ...,
a[j]) denote an arbitrary instance of the problem. Here n is the number of elements in the list a[i],
..., a[j] and we are interested in finding the maximum and minimum of this list. Let Small(P) be
true when n ≤ 2. In this case, the maximum and minimum are a[i] if n = 1. If n = 2, the problem
can be solved by making one comparison.

If the list has more than two elements, P has to be divided into smaller instances. For example, we
might divide P into the two instances P1 = (⌊n/2⌋, a[1], ..., a[⌊n/2⌋]) and P2 = (n - ⌊n/2⌋,
a[⌊n/2⌋+1], ..., a[n]). After having divided P into two smaller sub-problems, we can solve them by
recursively invoking the same divide-and-conquer algorithm. If MAX(P) and MIN(P) are the
maximum and minimum of the elements in P, then MAX(P) is the larger of MAX(P1) and
MAX(P2). Also, MIN(P) is the smaller of MIN(P1)and MIN(P2).
The following algorithm is recursively finding the maximum and minimum.

void MaxMin(int i, int j, Type &max, Type &min)
// a[1:n] is a global array. Parameters i and j are integers,
// 1 <= i <= j <= n. The effect is to set max and min to the
// largest and smallest values in a[i:j], respectively.
{
    if (i == j) max = min = a[i]; // Small(P)
    else if (i == j-1) { // Another case of Small(P)
        if (a[i] < a[j]) { max = a[j]; min = a[i]; }
        else { max = a[i]; min = a[j]; }
    }
    else { // If P is not small, divide P into sub-problems.
        // Find where to split the set.
        int mid = (i + j)/2; Type max1, min1;
        // Solve the sub-problems.
        MaxMin(i, mid, max, min);
        MaxMin(mid+1, j, max1, min1);
        // Combine the solutions.
        if (max < max1) max = max1;
        if (min > min1) min = min1;
    }
}

(Trees of recursive calls of MaxMin)
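Counting element comparisons, MaxMin satisfies the recurrence (for n a power of 2, n = 2^k)

T(n) = 2T(n/2) + 2, n > 2, with T(2) = 1 and T(1) = 0.

Solving by substitution: T(n) = 2T(n/2) + 2 = 4T(n/4) + 4 + 2 = ... = 2^(k-1) T(2) + (2^k - 2) = n/2 + n - 2 = 3n/2 - 2. Hence the recursive MaxMin uses 3n/2 - 2 comparisons, compared with the 2(n - 1) of StraightMaxMin.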


Merge Sort
Merge sort is based on the divide-and-conquer technique. The merge sort method is a two-phase process,

1. Dividing

2. Merging

Dividing Phase: During the dividing phase, the given list of elements is repeatedly divided into
two parts. This division continues until each sub-list is too small to divide further.

Merging Phase: Merging is the process of combining two sorted lists so that the resultant list is
also sorted. Suppose A is a sorted list with n1 elements and B is a sorted list with n2
elements. The operation that combines the elements of A and B into a single sorted list C with
n = n1 + n2 elements is called merging.

Merge sort is a sorting algorithm with the nice property that its worst-case complexity is O(n log n).
We assume throughout that the elements are to be sorted in
non-decreasing order. Given a sequence of n elements (also called keys) a[1], ..., a[n], the general
idea is to imagine them split into two sets a[1], ..., a[⌊n/2⌋] and a[⌊n/2⌋+1], ..., a[n]. Each set is
individually sorted, and the resulting sorted sequences are merged to produce a single sorted
sequence of n elements. Thus we have another ideal example of the divide-and-conquer strategy
in which the splitting is into two equal-sized sets and the combining operation is the merging of
two sorted sets into one.

Merge Sort - Algorithm

void MergeSort (int low, int high)
// a[low:high] is a global array to be sorted.
// Small(P) is true if there is only one element
// to sort. In this case the list is already sorted.
{
    if (low < high) { // If there is more than one element
        // Divide P into sub-problems.
        // Find where to split the set.
        int mid = (low + high)/2;
        // Solve the sub-problems.
        MergeSort(low, mid);
        MergeSort(mid+1, high);
        // Combine the solutions.
        Merge(low, mid, high);
    }
}
Merging two Sorted Sub-arrays using Auxiliary Storage

void Merge (int low, int mid, int high)
// a[low:high] is a global array containing two sorted
// subsets in a[low:mid] and in a[mid+1:high]. The goal
// is to merge these two sets into a single set residing in a[low:high].
// b[] is an auxiliary global array.
{
    int h = low, i = low, j = mid + 1, k;
    while ((h <= mid) && (j <= high)) {
        if (a[h] <= a[j]) { b[i] = a[h]; h++; }
        else { b[i] = a[j]; j++; }
        i++;
    }
    if (h > mid)
        for (k = j; k <= high; k++) { b[i] = a[k]; i++; }
    else
        for (k = h; k <= mid; k++) { b[i] = a[k]; i++; }
    for (k = low; k <= high; k++) a[k] = b[k];
}

Example:
(Tree of calls of MergeSort(1, 10))
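If the time for the merging operation is proportional to n, the computing time for merge sort is described by the recurrence

T(n) = a, n = 1 (a a constant)
T(n) = 2T(n/2) + cn, n > 1 (c a constant)

When n is a power of 2 (n = 2^k), successive substitutions give T(n) = 4T(n/4) + 2cn = ... = 2^k T(1) + kcn = an + cn log2 n, so T(n) = O(n log n).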
Quick Sort

The function Partition in the algorithm accomplishes an in-place partitioning of the elements of
a[m:p - 1]. It is assumed that a[p] ≥ a[m] and that a[m] is the partitioning element. If m = 1 and p -
1 = n, then a[n + 1] must be defined and must be greater than or equal to all elements in a[1:n].
The assumption that a[m] is the partition element is merely for convenience; other choices for the
partitioning element than the first item in the set are better in practice. The function Interchange
(a, i, j) exchanges a[i] with a[j].

int Partition (Type a[], int m, int p)
// Within a[m], a[m+1], ..., a[p-1] the elements are
// rearranged in such a manner that if initially t == a[m],
// then after completion a[q] == t for some q between m and p-1,
// a[k] <= t for m <= k < q, and a[k] >= t for q < k < p. q is returned.
{
    Type v = a[m]; int i = m, j = p;
    do {
        do i++; while (a[i] < v);
        do j--; while (a[j] > v);
        if (i < j) Interchange(a, i, j);
    } while (i < j);
    a[m] = a[j]; a[j] = v; return j;
}

inline void Interchange(Type a[], int i, int j)
// Exchange a[i] with a[j].
{
    Type p = a[i];
    a[i] = a[j]; a[j] = p;
}
Example:

After a call to Partition, the array is divided into two sets S1 and S2 such that each element of
S1 is less than or equal to the elements in S2. Hence S1 and S2 can be sorted independently. Each
set is sorted by reusing the function Partition. The following algorithm describes the complete process.

void QuickSort (int p, int q)
// Sorts the elements a[p], ..., a[q] which reside in the global
// array a[1:n] into ascending order; a[n+1] is considered to
// be defined and must be >= all the elements in a[1:n].
{
    if (p < q) { // If there is more than one element
        // divide P into two sub-problems.
        int j = Partition(a, p, q+1);
        // j is the position of the partitioning element.
        // Solve the sub-problems.
        QuickSort(p, j-1);
        QuickSort(j+1, q);
        // There is no need for combining solutions.
    }
}
Performance Measurement

Table 3.5 Average computing times for two sorting algorithms on random inputs

Table 3.6 Worst-case computing times for two sorting algorithms on random inputs

Scanning the tables, we immediately see that QuickSort is faster than MergeSort for all values.
Even though both algorithms require O(n log n) time on the average, QuickSort usually performs
well in practice.

Selection
The Partition algorithm can also be used to obtain an efficient solution for the selection problem.
In this problem, we are given n elements a[1:n] and are required to determine the kth-smallest
element. If the partitioning element v is positioned at a[j], then j-1 elements are less than or equal
to a[j] and n-j elements are greater than or equal to a[j]. Hence if k < j, then the kth-smallest
element is in a[1:j-1]; if k = j, then a[j] is the kth-smallest element; and if k > j, then the kth-smallest
element is the (k-j)th-smallest element in a[j+1:n]. The resulting algorithm is function
Select1 below. This function places the kth-smallest element into position a[k] and partitions the
remaining elements so that a[i] ≤ a[k], 1 ≤ i < k, and a[i] ≥ a[k], k < i ≤ n.

void Select1 (Type a[], int n, int k)
// Selects the kth-smallest element in a[1:n] and places it in the kth position of a[].
// The remaining elements are rearranged such that a[m] <= a[k] for 1 <= m < k, and
// a[m] >= a[k] for k < m <= n.
{
    int low = 1, up = n + 1;
    a[n+1] = INFTY; // a[n+1] is set to infinity.
    do { // Each time the loop is entered, 1 <= low <= k <= up <= n+1.
        int j = Partition(a, low, up);
        // j is such that a[j] is the jth-smallest value in a[].
        if (k == j) return;
        else if (k < j) up = j; // j is the new upper limit.
        else low = j + 1; // j+1 is the new lower limit.
    } while (true);
}

Example

The array has the nine elements 65, 70, 75, 80, 85, 60, 55, 50, and 45, with a[10] = ∞. If k = 5,
then the first call of Partition will be sufficient since 65 is placed into a[5]. Instead, assume that
we are looking for the seventh-smallest element of a, that is, k = 7. The next invocation of
Partition is Partition(a, 6, 10).
Strassen's Matrix Multiplication
The recurrence for the divide-and-conquer analysis of Strassen's method, which multiplies two
n × n matrices using seven recursive multiplications of n/2 × n/2 matrices, is

T(n) = b, n ≤ 2
T(n) = 7T(n/2) + an², n > 2

where a and b are constants. Solving this recurrence gives T(n) = O(n^log2 7) ≈ O(n^2.81).

* * *
Big O notation

Big O notation bounds the number of operations an algorithm will make. It gets its name from the
literal "Big O" in front of the estimated number of operations. Big O notation represents the upper
bound of the running time of an algorithm. Thus, it gives the worst-case complexity of an
algorithm.

O(g(n)) = { f(n): there exist positive constants c and n0

such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }

This expression can be described as a function f(n) belongs to the set O(g(n)) if there exists a
positive constant c such that it lies between 0 and cg(n), for sufficiently large n.

For any value of n, the running time of an algorithm does not exceed the bound given by O(g(n)).

Since it gives the worst-case running time of an algorithm, it is widely used to analyze an
algorithm in the worst-case scenario.
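For example, f(n) = 3n + 2 is O(n), since 0 ≤ 3n + 2 ≤ 4n for all n ≥ 2 (take c = 4 and n0 = 2).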

Here are some common algorithms and their run times in Big O notation:

Big O notation     Example algorithm
O(log n)           Binary search
O(n)               Simple search
O(n * log n)       Quick sort
O(n * n)           Selection sort
O(n!)              Travelling salesperson
Omega Notation (Ω-notation)

Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides
the best case complexity of an algorithm.

Ω(g(n)) = { f(n): there exist positive constants c and n0

such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }

This expression can be described as a function f(n) belongs to the set Ω(g(n)) if there exists a
positive constant c such that it lies above cg(n), for sufficiently large n.

For any value of n, the minimum time required by the algorithm is given by Ω(g(n)).
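For example, f(n) = 3n + 2 is Ω(n), since 0 ≤ 3n ≤ 3n + 2 for all n ≥ 1 (take c = 3 and n0 = 1).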

Theta Notation (Θ-notation)

Theta notation encloses the function from above and below. Since it represents the upper and the
lower bound of the running time of an algorithm, it is used for analyzing the average-case
complexity of an algorithm.
For a function g(n), Θ(g(n)) is given by the relation:

Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0

such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }

This expression can be described as a function f(n) belongs to the set Θ(g(n)) if there exist positive
constants c1 and c2 such that it can be sandwiched between c1g(n) and c2g(n), for sufficiently
large n.

If a function f(n) lies anywhere in between c1g(n) and c2g(n) for all n ≥ n0, then g(n) is said to be
an asymptotically tight bound for f(n).
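For example, f(n) = 3n + 2 is Θ(n), since 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2 (take c1 = 3, c2 = 4, and n0 = 2).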
UNIT III – Greedy Algorithm

The General Method

SolType Greedy (Type a[], int n)
// a[1:n] contains the n inputs.
{
    SolType solution = EMPTY; // Initialize the solution.
    for (int i = 1; i <= n; i++)
    {
        Type x = Select(a);
        if (Feasible(solution, x))
            solution = Union(solution, x);
    }
    return solution;
}

(Greedy method control abstraction for the subset paradigm)

For problems that do not call for the selection of an optimal subset, in the greedy method we
make decisions by considering the inputs in some order. Each decision is made using an
optimization criterion that can be computed using decisions already made. Call this version
of the greedy method the ordering paradigm.
KNAPSACK PROBLEM
void GreedyKnapsack (float m, int n)
// p[1:n] and w[1:n] contain the profits and weights
// respectively of the n objects ordered such that
// p[i]/w[i] >= p[i+1]/w[i+1]. m is the knapsack
// size and x[1:n] is the solution vector.
{
    int i;
    for (i = 1; i <= n; i++) x[i] = 0.0; // Initialize x.
    float U = m;
    for (i = 1; i <= n; i++)
    {
        if (w[i] > U) break;
        x[i] = 1.0;
        U -= w[i];
    }
    if (i <= n) x[i] = U / w[i];
}
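As a worked instance (the numbers here are the usual illustrative ones, assumed for this example): let n = 3, m = 20, (p1, p2, p3) = (25, 24, 15), and (w1, w2, w3) = (18, 15, 10). The profit-to-weight ratios are 25/18 ≈ 1.39, 24/15 = 1.6, and 15/10 = 1.5, so the objects are considered in the order 2, 3, 1. GreedyKnapsack takes all of object 2 (leaving U = 20 - 15 = 5); then w3 = 10 > 5, so the loop breaks and the fraction x3 = 5/10 = 0.5 is taken. The solution is (x1, x2, x3) = (0, 1, 0.5) with total profit 24 + 15 × 0.5 = 31.5, which is optimal for this instance.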

Minimum-Cost Spanning Trees


 Prim's Algorithm

float Prim (int E[][SIZE], float cost[][SIZE], int n, int t[][2])
// E is the set of edges in G. cost[1:n][1:n] is the cost
// adjacency matrix of an n-vertex graph such that cost[i][j] is
// either a positive real number or infinity if no edge (i,j) exists.
// A minimum spanning tree is computed and stored as a set of
// edges in the array t[1:n-1][1:2].
// (t[i][1], t[i][2]) is an edge in the minimum-cost spanning tree.
// The final cost is returned.
{
    int near[SIZE], i, j, k, l;
    let (k, l) be an edge of minimum cost in E;
    float mincost = cost[k][l];
    t[1][1] = k; t[1][2] = l;
    for (i = 1; i <= n; i++) // Initialize near[].
        if (cost[i][l] < cost[i][k]) near[i] = l;
        else near[i] = k;
    near[k] = near[l] = 0;
    for (i = 2; i <= n-1; i++) { // Find n-2 additional edges for t.
        let j be an index such that near[j] != 0 and
            cost[j][near[j]] is minimum;
        t[i][1] = j; t[i][2] = near[j];
        mincost = mincost + cost[j][near[j]];
        near[j] = 0;
        for (k = 1; k <= n; k++) // Update near[].
            if ((near[k] != 0) &&
                (cost[k][near[k]] > cost[k][j]))
                near[k] = j;
    }
    return mincost;
}
(Prim's minimum-cost spanning tree algorithm)
 Kruskal's Algorithm

Example:
Consider the following graph.

Figures below show the stages in Kruskal's algorithm.


The resulting tree has cost 99.
t = EMPTY;
while ((t has fewer than n-1 edges) && (E != EMPTY)) {
    choose an edge (v,w) from E of lowest cost;
    delete (v,w) from E;
    if ((v,w) does not create a cycle in t)
        add (v,w) to t;
    else discard (v,w);
}

(Early form of minimum-cost spanning tree algorithm due to Kruskal)

float Kruskal (int E[][SIZE], float cost[][SIZE], int n, int t[][2])
// E is the set of edges in G. G has n vertices.
// cost[u][v] is the cost of edge (u,v). t is
// the set of edges in the minimum-cost
// spanning tree. The final cost is returned.
{
    int parent[SIZE], i;
    construct a heap out of the edge costs using Heapify;
    for (i = 1; i <= n; i++) parent[i] = -1;
    // Each vertex is in a different set.
    i = 0; float mincost = 0.0;
    while ((i < n-1) && (heap not empty)) {
        delete a minimum-cost edge (u,v) from the heap and
        reheapify using Adjust;
        int j = Find(u); int k = Find(v);
        if (j != k) {
            i++;
            t[i][1] = u; t[i][2] = v;
            mincost += cost[u][v];
            Union(j, k);
        }
    }
    if (i != n-1) cout << "No spanning tree" << endl;
    else return mincost;
}

(Kruskal’s algorithm)
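Kruskal's algorithm above relies on Find and Union, which are not shown in these notes. A minimal sketch of one possible implementation over the parent[] array (without the weighting and path-collapsing rules that give better asymptotic performance):

int Find(int i)
// Follow parent links until a root is reached; roots are marked by parent[i] < 0.
{
    while (parent[i] >= 0) i = parent[i];
    return i;
}

void Union(int i, int j)
// Merge the sets rooted at i and j by making i a child of j.
{
    parent[i] = j;
}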
UNIT IV – Dynamic Programming

The General Method

Multistage Graphs

(Four-stage graph corresponding to a three-project problem)

void FGraph (graph G, int k, int n, int p[])
// The input is a k-stage graph G = (V, E) with n
// vertices indexed in order of stages. E is a set
// of edges and c[i][j] is the cost of edge <i,j>.
// p[1:k] is a minimum-cost path.
{
    float cost[MAXSIZE]; int d[MAXSIZE], r;
    cost[n] = 0.0;
    for (int j = n-1; j >= 1; j--) { // Compute cost[j].
        let r be a vertex such that <j,r> is an edge
        of G and c[j][r] + cost[r] is minimum;
        cost[j] = c[j][r] + cost[r];
        d[j] = r;
    }
    // Find a minimum-cost path.
    p[1] = 1; p[k] = n;
    for (j = 2; j <= k-1; j++) p[j] = d[p[j-1]];
}

(Multistage graph pseudocode corresponding to the forward approach)

All-Pairs Shortest Paths

(Graph with negative cycle)

void AllPaths (float cost[][SIZE], float A[][SIZE], int n)
// cost[1:n][1:n] is the cost adjacency matrix of a graph
// with n vertices; A[i][j] is the cost of a shortest path
// from vertex i to vertex j. cost[i][i] = 0.0 for 1 <= i <= n.
{
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++)
            A[i][j] = cost[i][j]; // Copy cost into A.
    for (int k = 1; k <= n; k++)
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                A[i][j] = min(A[i][j], A[i][k] + A[k][j]);
}

(Function to compute lengths of shortest paths)

(Directed graph and associated matrices)
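As a small hand-computed instance (a three-vertex digraph assumed for illustration), let

cost = | 0  4  11 |
       | 6  0   2 |
       | 3  ∞   0 |

With k = 1, A[3][2] becomes min(∞, 3 + 4) = 7. With k = 2, A[1][3] becomes min(11, 4 + 2) = 6. With k = 3, A[2][1] becomes min(6, 2 + 3) = 5. The final matrix of shortest-path lengths is

A = | 0  4  6 |
    | 5  0  2 |
    | 3  7  0 |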

Single-source Shortest Paths:


General Weights

(Directed graph with a negative-length edge)

(Shortest paths with negative edge lengths)

The following pseudocode computes the length of the shortest path
from vertex v to every other vertex of the graph. This algorithm is referred to as the
Bellman-Ford algorithm.

void BellmanFord (int v, float cost[][SIZE], float dist[], const int n)
// Single-source/all-destinations shortest paths
// with negative edge costs.
{
    for (int i = 1; i <= n; i++) // Initialize dist.
        dist[i] = cost[v][i];
    for (int k = 2; k <= n-1; k++)
        for (each u such that u != v and u has
             at least one incoming edge)
            for (each edge <i,u> in the graph)
                if (dist[u] > dist[i] + cost[i][u])
                    dist[u] = dist[i] + cost[i][u];
}

0/1 Knapsack Problem

Algorithm for 0/1 Knapsack Problem:
struct PW { float p, w; };

void DKnap (float p[], float w[], int x[], int n, float m)
{
    struct PW pair[SIZE]; int b[MAXSIZE], next;
    b[0] = 1; pair[1].p = pair[1].w = 0.0; // S^0
    int t = 1, h = 1; // Start and end of S^0
    b[1] = next = 2; // Next free spot in pair[]
    for (int i = 1; i <= n-1; i++) { // Generate S^i.
        int k = t;
        int u = Largest(pair, w, t, h, i, m);
        for (int j = t; j <= u; j++) { // Generate S1^(i-1) and merge.
            float pp = pair[j].p + p[i]; float ww = pair[j].w + w[i];
            // (pp, ww) is the next element in S1^(i-1).
            while ((k <= h) && (pair[k].w <= ww)) {
                pair[next].p = pair[k].p; pair[next].w = pair[k].w;
                next++; k++;
            }
            if ((k <= h) && (pair[k].w == ww)) {
                if (pp < pair[k].p) pp = pair[k].p;
                k++;
            }
            if (pp > pair[next-1].p) {
                pair[next].p = pp; pair[next].w = ww; next++;
            }
            while ((k <= h) && (pair[k].p <= pair[next-1].p))
                k++;
        }
        // Merge in remaining terms from S^(i-1).
        while (k <= h) {
            pair[next].p = pair[k].p; pair[next].w = pair[k].w;
            next++; k++;
        }
        // Initialize for S^(i+1).
        t = h + 1; h = next - 1; b[i+1] = next;
    }
    TraceBack(p, w, pair, x, m, n);
}

The Travelling Salesperson Problem

(Directed graph and edge length matrix c)
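The dynamic programming formulation usually presented for this problem (stated here for completeness): let g(i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1. Then g(1, V - {1}) is the length of an optimal tour, and

g(i, S) = min over j in S of { c[i][j] + g(j, S - {j}) }

with the base case g(i, Ø) = c[i][1]. Solving this recurrence for subsets S of increasing size gives an O(n² 2ⁿ) algorithm.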

UNIT V – Basic Traversal and Search Techniques

Breadth First Search (BFS) and Traversal

void BFS (int v)
// A breadth first search of G is carried out beginning
// at vertex v. For any node i, visited[i] == 1 if i has
// already been visited. The graph G and array visited[]
// are global; visited[] is initialized to zero.
{
    int u = v; Queue q(SIZE);
    // q is a queue of unexplored vertices.
    visited[v] = 1;
    do {
        for (all vertices w adjacent from u) {
            if (visited[w] == 0) {
                q.AddQ(w); // w is unexplored.
                visited[w] = 1;
            }
        }
        if (q.Qempty()) return; // No unexplored vertex.
        q.DeleteQ(u); // Get first unexplored vertex.
    } while (1);
}

(Pseudocode for breadth first search)

void BFT (struct treenode G[], int n)
// Breadth first traversal of G.
{
    int i; Boolean visited[SIZE];
    for (i = 1; i <= n; i++) // Mark all vertices unvisited.
        visited[i] = 0;
    for (i = 1; i <= n; i++)
        if (!visited[i]) BFS(i);
}

(Breadth first graph traversal)
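The pseudocode above leaves the graph representation abstract. The following self-contained C++ sketch performs the same traversal over an adjacency list (the representation and the standard-library queue are choices made for this illustration):

#include <iostream>
#include <queue>
#include <vector>
using namespace std;

// Breadth first search from vertex v over adjacency list adj;
// visited[] plays the same role as in the pseudocode above.
void BFS(int v, const vector<vector<int> > &adj, vector<int> &visited)
{
    queue<int> q; // queue of vertices whose neighbours are still unexplored
    visited[v] = 1;
    q.push(v);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        cout << u << ' '; // "visit" u
        for (int w : adj[u]) // all vertices adjacent from u
            if (!visited[w]) { visited[w] = 1; q.push(w); }
    }
}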

Depth First Search (DFS) and Traversal

void DFS (int v)
// Given an undirected (or directed) graph G = (V,E) with
// n vertices and an array visited[] initially set
// to zero, this algorithm visits all vertices
// reachable from v. G and visited[] are global.
{
    visited[v] = 1;
    for (each vertex w adjacent from v) {
        if (!visited[w]) DFS(w);
    }
}

(Depth first search of a graph)
Biconnected Components and DFS

It is relatively easy to show that:

for each articulation point a {
    let B1, B2, ..., Bk be the biconnected
    components containing vertex a;
    let vi, vi != a, be a vertex in Bi, 1 <= i <= k;
    add to G the edges (vi, vi+1), 1 <= i < k;
}

(Scheme to construct a biconnected graph)

BACKTRACKING
The General Method

The 8-Queens Problem

(One solution to the 8-queens problem)

Place(k, i) returns a Boolean value that is true if the kth queen can be placed in column i. It
tests both whether i is distinct from all previous values x[1], ..., x[k-1] and also whether
there is no other queen on the same diagonal. Its computing time is O(k-1).

bool Place (int k, int i)
// Returns true if a queen can be placed in the kth row and
// ith column. Otherwise it returns false. x[] is a
// global array whose first (k-1) values have been set.
// abs(r) returns the absolute value of r.
{
    for (int j = 1; j < k; j++)
        if ((x[j] == i) // Two in the same column
            || (abs(x[j] - i) == abs(j - k))) // or in the same diagonal
            return false;
    return true;
}
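Place is the feasibility test used by the recursive backtracking routine for the n-queens problem. The usual companion function can be sketched as follows (x[] is the same global array):

void NQueens(int k, int n)
// Using backtracking, print all placements of n queens on an
// n x n chessboard so that they are non-attacking.
{
    for (int i = 1; i <= n; i++) {
        if (Place(k, i)) {
            x[k] = i; // Place queen k in column i.
            if (k == n) { // All n queens are placed; print the solution.
                for (int j = 1; j <= n; j++) cout << x[j] << ' ';
                cout << endl;
            }
            else NQueens(k+1, n); // Try to place the next queen.
        }
    }
}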

(Five walks through the 8-queens problem and its estimates)

Sum of Subsets

void SumOfSub (float s, int k, float r)
// Find all subsets of w[1:n] that sum to m. The values of x[j],
// 1 <= j < k, have already been determined.
{
    // Generate left child. Note that s + w[k] <= m
    // because Bk-1 is true.
    x[k] = 1;
    if (s + w[k] == m) { // Subset found.
        for (int j = 1; j <= k; j++) cout << x[j] << ' ';
        cout << endl;
    }
    // There is no recursive call here
    // as w[j] > 0, 1 <= j <= n.
    else if (s + w[k] + w[k+1] <= m)
        SumOfSub(s + w[k], k+1, r - w[k]);
    // Generate right child and evaluate Bk.
    if ((s + r - w[k] >= m) && (s + w[k+1] <= m)) {
        x[k] = 0;
        SumOfSub(s, k+1, r - w[k]);
    }
}
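The search is started with s = 0, k = 1, and r equal to the sum of all the weights. A sketch of the initial call (w[], m, and n are globals, as in SumOfSub):

float total = 0.0;
for (int i = 1; i <= n; i++) total += w[i];
if ((total >= m) && (w[1] <= m)) // otherwise no subset can sum to m
    SumOfSub(0.0, 1, total);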

Example:

(Portion of state space tree generated by SumOfSub)

Graph Coloring

(An example graph and its coloring)

(A map and its planar graph representation)

void mColoring (int k)
// This algorithm was formed using the recursive backtracking
// schema. The graph is represented by its Boolean adjacency
// matrix G[1:n][1:n]. All assignments of 1, 2, ..., m to the
// vertices of the graph such that adjacent vertices are
// assigned distinct integers are printed. k is the index
// of the next vertex to color.
{
    do { // Generate all legal assignments for x[k].
        NextValue(k); // Assign to x[k] a legal color.
        if (!x[k]) break; // No new color possible.
        if (k == n) { // At most m colors have been
            // used to color the n vertices.
            for (int i = 1; i <= n; i++) cout << x[i] << ' ';
            cout << endl;
        }
        else mColoring(k+1);
    } while (1);
}
The function NextValue() produces the possible colors for x[k] after x[1] through x[k-1] have been
defined. The main loop of mColoring() repeatedly picks an element from the set of
possibilities, assigns it to x[k], and then calls mColoring recursively.
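The coloring version of NextValue can be sketched as follows (x[], the adjacency matrix G, n, and m are globals; the colors are the integers 1 through m, and 0 means no color is left):

void NextValue(int k)
// Assign to x[k] the smallest untried color that is distinct from
// the colors of all vertices adjacent to vertex k. If no such
// color remains, x[k] is set to 0.
{
    do {
        x[k] = (x[k] + 1) % (m + 1); // Next highest color.
        if (!x[k]) return; // All m colors have been tried.
        int j;
        for (j = 1; j <= n; j++) // Check all adjacent vertices.
            if (G[k][j] && (x[k] == x[j])) break; // Color conflict found.
        if (j == n+1) return; // New color found; keep it.
    } while (1); // Otherwise try the next color.
}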

(State space tree for mColoring when n = 3 and m = 3)

Hamiltonian Cycles

(Two graphs, one containing a Hamiltonian cycle)

void NextValue (int k)
// x[1], ..., x[k-1] is a path of k-1 distinct vertices. If x[k] == 0, then
// no vertex has as yet been assigned to x[k]. After execution,
// x[k] is assigned to the next-highest-numbered vertex which
// i) does not already appear in x[1], x[2], ..., x[k-1]; and
// ii) is connected by an edge to x[k-1]. Otherwise x[k] == 0.
// If k == n, then in addition x[k] is connected to x[1].
{
    do {
        x[k] = (x[k] + 1) % (n + 1); // Next vertex.
        if (!x[k]) return;
        if (G[x[k-1]][x[k]]) { // Is there an edge?
            int j;
            for (j = 1; j < k; j++) // Check for distinctness.
                if (x[j] == x[k]) break;
            if (j == k) // If true, then the vertex is distinct.
                if ((k < n) || ((k == n) && G[x[n]][x[1]]))
                    return;
        }
    } while (1);
}

void Hamiltonian (int k)
// This program uses the recursive formulation of
// backtracking to find all the Hamiltonian cycles
// of a graph. The graph is stored as an adjacency
// matrix G[1:n][1:n]. All cycles begin at node 1.
{
    do { // Generate values for x[k].
        NextValue(k); // Assign a legal next value to x[k].
        if (!x[k]) return;
        if (k == n) { // A cycle is complete; print it.
            for (int i = 1; i <= n; i++) cout << x[i] << ' ';
            cout << endl;
        }
        else Hamiltonian(k+1);
    } while (1);
}
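Following the usual convention, the search is started by initializing the adjacency matrix G[1:n][1:n], fixing vertex 1 as the start of every cycle, and invoking the routine for the second position:

x[1] = 1; // Every cycle begins at vertex 1.
for (int i = 2; i <= n; i++) x[i] = 0; // No other vertex assigned yet.
Hamiltonian(2); // Fill positions 2 through n.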

* * *
