
Design and Analysis of Algorithms ( PC 602 CS)

LECTURE NOTES

UNIT–III

Greedy Method: Knapsack problem, Minimum spanning trees, Single source shortest path,
Job sequencing with deadlines, Optimal storage on tapes, Optimal merge pattern
Dynamic programming method: All pairs shortest paths, Optimal binary search trees, 0/1
Knapsack problem, Reliability design, Traveling salesman problem.

Greedy Method:

The simplest and most straightforward approach is the Greedy method. In this approach, the decision is
taken on the basis of the currently available information, without worrying about the effect of the current
decision on future decisions.

Greedy algorithms build a solution part by part, choosing the next part in such a way that it gives an
immediate benefit. This approach never reconsiders the choices made previously. It is mainly used to
solve optimization problems. The greedy method is easy to implement and quite efficient in most cases.

Components of Greedy Algorithm:

Greedy algorithms have the following five components −

A candidate set − A solution is created from this set.

A selection function − Used to choose the best candidate to be added to the solution.

A feasibility function − Used to determine whether a candidate can be used to contribute to the
solution.

An objective function − Used to assign a value to a solution or a partial solution.

A solution function − Used to indicate whether a complete solution has been reached.
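These five components fit a common loop structure. The following C-style pseudocode is only an
illustrative sketch of how they interact; the names select, feasible and add are placeholders rather than
functions defined in these notes:

// Illustrative greedy skeleton (pseudocode; select, feasible and add are assumed helpers)
SolutionType Greedy(CandidateType a[], int n)
{
    SolutionType solution = EMPTY;            // start with an empty solution
    for (int i = 1; i <= n; i++) {
        CandidateType x = select(a);          // selection function: best remaining candidate
        if (feasible(solution, x))            // feasibility function: can x extend the solution?
            solution = add(solution, x);      // extend the partial solution
    }
    return solution;                          // a solution function decides when it is complete
}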

Knapsack:

The knapsack problem states that, given a set of items, each having a weight and a profit value, one must
determine the subset of items to be added to a knapsack such that the total weight of the items does not
exceed the capacity of the knapsack and the total profit value is maximum.

It is one of the most popular problems solved with the greedy approach. The greedy version is called the
Fractional Knapsack Problem.

Algorithm

1) Consider all the items with their weights and profits mentioned respectively.

2) Calculate Pi/Wi for all the items and sort the items in descending order based on their Pi/Wi values.

3) Without exceeding the capacity, add the items into the knapsack.

4) If the knapsack can still store some weight, but the weight of the next item exceeds the remaining
capacity, add only the fractional part of that item.

5) Hence, giving it the name fractional knapsack problem.

Examples

For the given set of items and the knapsack capacity of 10 kg, find the subset of the items to be added
in the knapsack such that the profit is maximum.

Items 1 2 3 4 5
Weights (in kg) 3 3 2 5 1
Profits 10 15 10 20 8
Solution

Step 1

Given, n = 5

Wi = {3, 3, 2, 5, 1}

Pi = {10, 15, 10, 20, 8}

Calculate Pi/Wi for all the items:

Items 1 2 3 4 5
Weights (in kg) 3 3 2 5 1
Profits 10 15 10 20 8
Pi/Wi 3.3 5 5 4 8
Step 2
Arrange all the items in descending order based on Pi/Wi.
Items 5 2 3 4 1
Weights (in kg) 1 3 2 5 3
Profits 8 15 10 20 10
Pi/Wi 8 5 5 4 3.3
Step 3

Without exceeding the knapsack capacity, insert the items in the knapsack with maximum profit.

Knapsack = {5, 2, 3}

However, the knapsack can still hold 4 kg, but the next item weighs 5 kg and would exceed the
capacity. Therefore, only 4 kg of the 5 kg item (4/5 of it) is added to the knapsack.

Items 5 2 3 4 1
Weights (in kg) 1 3 2 5 3
Profits 8 15 10 20 10
Knapsack 1 1 1 4/5 0
Hence, the knapsack holds the weights = [(1 * 1) + (1 * 3) + (1 * 2) + (4/5 * 5)] = 10, with maximum
profit of [(1 * 8) + (1 * 15) + (1 * 10) + (4/5 * 20)] = 49.

Program:

#include <stdio.h>

int n = 5;
int w[10] = {3, 3, 2, 5, 1};     /* weights of the items (kg) */
int p[10] = {10, 15, 10, 20, 8}; /* profits of the items */
int W = 10;                      /* capacity of the knapsack */

int main(){
    int cur_w;
    float tot_v = 0;
    int i, maxi;
    int used[10];

    for (i = 0; i < n; ++i)
        used[i] = 0;

    cur_w = W;
    while (cur_w > 0) {
        maxi = -1;
        /* pick the unused item with the largest profit/weight ratio */
        for (i = 0; i < n; ++i)
            if ((used[i] == 0) &&
                ((maxi == -1) || ((float)p[i]/w[i] > (float)p[maxi]/w[maxi])))
                maxi = i;

        used[maxi] = 1;
        cur_w -= w[maxi];
        tot_v += p[maxi];

        if (cur_w >= 0)
            printf("Added object %d (%d, %d) completely in the bag. Space left: %d.\n",
                   maxi + 1, p[maxi], w[maxi], cur_w);
        else {
            /* only a fraction of the last item fits */
            printf("Added %d%% (%d, %d) of object %d in the bag.\n",
                   (int)((1 + (float)cur_w/w[maxi]) * 100), p[maxi], w[maxi], maxi + 1);
            tot_v -= p[maxi];
            tot_v += (1 + (float)cur_w/w[maxi]) * p[maxi];
        }
    }

    printf("Filled the bag with objects worth %.2f.\n", tot_v);
    return 0;
}

Spanning Tree:

Given an undirected and connected graph G=(V,E), a spanning tree of the graph G is a tree that spans G
(that is, it includes every vertex of G ) and is a subgraph of G (every edge in the tree belongs to G )

Minimum Spanning Tree:

The cost of the spanning tree is the sum of the weights of all the edges in the tree. There can be many
spanning trees. Minimum spanning tree is the spanning tree where the cost is minimum among all the
spanning trees. There also can be many minimum spanning trees.

There are two famous algorithms for finding the Minimum Spanning Tree:

> Prim's Minimal Spanning Tree

> Kruskal's Minimal Spanning Tree

Prim’s Minimal Spanning Tree:

This algorithm is one of the efficient methods to find the minimum spanning tree of a graph. A
minimum spanning tree is a subgraph that connects all the vertices present in the main graph with the
least possible edges and minimum cost (sum of the weights assigned to each edge).

The algorithm, similar to any shortest path algorithm, begins from a vertex that is set as a root and
walks through all the vertices in the graph by determining the least cost adjacent edges.

Prim’s Algorithm

To execute the prim’s algorithm, the inputs taken by the algorithm are the graph G {V, E}, where V is
the set of vertices and E is the set of edges, and the source vertex S. A minimum spanning tree of graph
G is obtained as an output.

Algorithm

1) Declare an array visited[] to store the visited vertices and, first, add the arbitrary root, say S, to the
visited array.
2) Check whether the adjacent vertices of the last visited vertex are present in the visited[] array or not.
3) If the vertices are not in the visited[] array, compare the costs of the edges and add the least-cost edge
to the output spanning tree.
4) The adjacent unvisited vertex with the least-cost edge is added into the visited[] array and the least-cost
edge is added to the minimum spanning tree output.
5) Steps 2 to 4 are repeated for all the unvisited vertices in the graph to obtain the full minimum spanning
tree output for the given graph.
6) Calculate the cost of the minimum spanning tree obtained.

Examples

Find the minimum spanning tree using prim’s method (greedy approach) for the graph given below
with S as the arbitrary root.

Solution
Step 1
Create a visited array to store all the visited vertices into it.
V={}
The arbitrary root is mentioned to be S, so among all the edges that are connected to S we need to find the
least cost edge.
S→B=8
V = {S, B}

Step 2
Since B is the last visited, check for the least cost edge that is connected to the vertex B.
B→A=9
B → C = 16
B → E = 14
Hence, B → A is the edge added to the spanning tree.
V = {S, B, A}

Step 3
Since A is the last visited, check for the least cost edge that is connected to the vertex A.
A → C = 22
A→B=9
A → E = 11
But A → B is already in the spanning tree, check for the next least cost edge. Hence, A → E is added to the
spanning tree.
V = {S, B, A, E}

Step 4
Since E is the last visited, check for the least cost edge that is connected to the vertex E.
E → C = 18
E→D=3
Therefore, E → D is added to the spanning tree.
V = {S, B, A, E, D}

Step 5
Since D is the last visited, check for the least cost edge that is connected to the vertex D.
D → C = 15
Therefore, D → C is added to the spanning tree.

V = {S, B, A, E, D, C}

The minimum spanning tree is obtained with the minimum cost = 46
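As a check, the cost of 46 is the sum of the weights of the selected edges:

8 (S → B) + 9 (B → A) + 11 (A → E) + 3 (E → D) + 15 (D → C) = 46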

Program:
#include<stdio.h>
#include<stdlib.h>
#define inf 99999
#define MAX 10
int G[MAX][MAX] = {
{0, 19, 8},
{21, 0, 13},
{15, 18, 0}
};
int S[MAX][MAX], n;
int prims();
int main(){
int i, j, cost;
n = 3;
cost=prims();
printf("\nSpanning tree:\n");

for(i=0; i<n; i++) {


printf("\n");
for(j=0; j<n; j++)
printf("%d\t",S[i][j]);

}
printf("\n\nMinimum cost = %d", cost);
return 0;
}
int prims(){
int C[MAX][MAX];
int u, v, min_dist, dist[MAX], from[MAX];
int visited[MAX],ne,i,min_cost,j;

//create cost[][] matrix,spanning[][]


for(i=0; i<n; i++)

for(j=0; j<n; j++) {
if(G[i][j]==0)
C[i][j]=inf;
else
C[i][j]=G[i][j];
S[i][j]=0;
}

//initialise visited[],distance[] and from[]


dist[0]=0;
visited[0]=1;
for(i=1; i<n; i++) {
dist[i] = C[0][i];
from[i] = 0;
visited[i] = 0;
}
min_cost = 0; //cost of spanning tree
ne = n-1; //no. of edges to be added
while(ne > 0) {

//find the vertex at minimum distance from the tree


min_dist = inf;
for(i=1; i<n; i++)
if(visited[i] == 0 && dist[i] < min_dist) {
v = i;
min_dist = dist[i];
}
u = from[v];

//insert the edge in spanning tree


S[u][v] = dist[v];
S[v][u] = dist[v];
ne--;
visited[v]=1;

//update the distance[] array


for(i=1; i<n; i++)
if(visited[i] == 0 && C[i][v] < dist[i]) {
dist[i] = C[i][v];
from[i] = v;
}
min_cost = min_cost + C[u][v];
}
return(min_cost);
}

Kruskal’s Minimal Spanning Tree:
Kruskal’s minimal spanning tree algorithm is one of the efficient methods to find the minimum spanning tree of
a graph. A minimum spanning tree is a subgraph that connects all the vertices present in the main graph with
the least possible edges and minimum cost (sum of the weights assigned to each edge).

The algorithm starts from a forest, which is defined as a subgraph containing only the vertices of the main
graph, and then adds the least-cost edges one by one until the minimum spanning tree is created without
forming any cycles in the graph.

Kruskal’s Algorithm
The input taken by Kruskal's algorithm is the graph G {V, E}, where V is the set of vertices and E is the
set of edges, and the minimum spanning tree of graph G is obtained as an output. Unlike Prim's algorithm,
no source vertex is required.

Algorithm

 Sort all the edges in the graph in ascending order and store them in an array edge[].
 Construct the forest of the graph on a plane with all the vertices in it.
 Select the least cost edge from the edge[] array and add it into the forest of the graph. Mark the
vertices visited by adding them into the visited[] array.
 Repeat the edge-selection step until all the vertices are visited, without any cycles forming in the graph.
 When all the vertices are visited, the minimum spanning tree is formed.
 Calculate the minimum cost of the output spanning tree formed.

Examples
Construct a minimum spanning tree using kruskal’s algorithm for the graph given below –

Solution
As the first step, sort all the edges in the given graph in an ascending order and store the values in an array.
Edge B→D A→B C→F F→E B→C G→F A→G C→D D→E C→G
Cost 5 6 9 10 11 12 15 17 22 25
Then, construct a forest of the given graph on a single plane.

From the list of sorted edge costs, select the least cost edge and add it onto the forest in output graph.

B→D=5

Minimum cost = 5

Visited array, v = {B, D}

Similarly, the next least cost edge is B → A = 6; so we add it onto the output graph.

Minimum cost = 5 + 6 = 11

Visited array, v = {B, D, A}

The next least cost edge is C → F = 9; add it onto the output graph.

Minimum Cost = 5 + 6 + 9 = 20

Visited array, v = {B, D, A, C, F}

The next edge to be added onto the output graph is F → E = 10.

Minimum Cost = 5 + 6 + 9 + 10 = 30

Visited array, v = {B, D, A, C, F, E}

The next edge from the least cost array is B → C = 11, hence we add it in the output graph.

Minimum cost = 5 + 6 + 9 + 10 + 11 = 41

Visited array, v = {B, D, A, C, F, E}

The last edge from the least cost array to be added in the output graph is F → G = 12.

Minimum cost = 5 + 6 + 9 + 10 + 11 + 12 = 53

Visited array, v = {B, D, A, C, F, E, G}

The obtained result is the minimum spanning tree of the given graph with cost = 53.

Program:

#include <stdio.h>
#include <stdlib.h>

// Comparator function to use in sorting edges by weight
int comparator(const void* p1, const void* p2)
{
    const int(*x)[3] = p1;
    const int(*y)[3] = p2;
    return (*x)[2] - (*y)[2];
}

// Initialization of parent[] and rank[] arrays
void makeSet(int parent[], int rank[], int n)
{
    for (int i = 0; i < n; i++) {
        parent[i] = i;
        rank[i] = 0;
    }
}

// Function to find the parent (representative) of a node
int findParent(int parent[], int component)
{
    if (parent[component] == component)
        return component;
    return parent[component] = findParent(parent, parent[component]);
}

// Function to unite two sets
void unionSet(int u, int v, int parent[], int rank[], int n)
{
    // Finding the parents
    u = findParent(parent, u);
    v = findParent(parent, v);

    if (rank[u] < rank[v]) {
        parent[u] = v;
    } else if (rank[u] > rank[v]) {
        parent[v] = u;
    } else {
        parent[v] = u;
        // The rank increases only if the ranks of the two sets are the same
        rank[u]++;
    }
}

// Function to find the MST
void kruskalAlgo(int n, int edge[n][3])
{
    // First we sort the edge array in ascending order
    // so that we can access minimum distances/cost
    qsort(edge, n, sizeof(edge[0]), comparator);

    int parent[n];
    int rank[n];

    // Initialize parent[] and rank[]
    makeSet(parent, rank, n);

    // To store the minimum cost
    int minCost = 0;

    printf("Following are the edges in the constructed MST\n");
    for (int i = 0; i < n; i++) {
        int v1 = findParent(parent, edge[i][0]);
        int v2 = findParent(parent, edge[i][1]);
        int wt = edge[i][2];

        // If the parents are different, the endpoints are in
        // different sets, so union them and take the edge
        if (v1 != v2) {
            unionSet(v1, v2, parent, rank, n);
            minCost += wt;
            printf("%d -- %d == %d\n", edge[i][0], edge[i][1], wt);
        }
    }
    printf("Minimum Cost Spanning Tree: %d\n", minCost);
}

// Driver code
int main()
{
    int edge[5][3] = { { 0, 1, 10 },
                       { 0, 2, 6 },
                       { 0, 3, 5 },
                       { 1, 3, 15 },
                       { 2, 3, 4 } };

    kruskalAlgo(5, edge);
    return 0;
}

Single source shortest path:

The single-source shortest path problem consists of finding the shortest paths from one source vertex to
all the other vertices of a weighted graph. This problem is most commonly solved using Dijkstra's
algorithm.

Dijkstra’s Shortest Path Algorithm:

Dijkstra’s Algorithm

Dijkstra's algorithm is designed to find the shortest paths from a source vertex to the other vertices of a
graph; a destination could either be adjacent to the source or the farthest point in the graph. The algorithm
starts from the source. The inputs taken by the algorithm are the graph G {V, E}, where V is the set of
vertices and E is the set of edges, and the source vertex S. The output is the shortest path spanning tree.

Algorithm

Declare two arrays − distance[] to store the distances from the source vertex to the other vertices in
graph and visited[] to store the visited vertices.

Set distance[S] to ‘0’ and distance[v] = ∞, where v represents all the other vertices in the graph.

Add S to the visited[] array and find the adjacent vertices of S with the minimum distance.

The adjacent vertex to S, say A, has the minimum distance and is not in the visited array yet. A is
picked and added to the visited array and the distance of A is changed from ∞ to the assigned distance
of A, say d1, where d1 < ∞.

Repeat the process for the adjacent vertices of the visited vertices until the shortest path spanning tree is
formed.
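The update performed at each step can be summarized by the standard relaxation rule: when a vertex u is
added to the visited[] array, every unvisited neighbour v of u is updated as

if distance[u] + w(u, v) < distance[v] then distance[v] = distance[u] + w(u, v)

where w(u, v) is the weight of the edge between u and v.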

Examples

To understand the dijkstra’s concept better, let us analyze the algorithm with the help of an example
graph −

Step 1

Initialize the distances of all the vertices as ∞, except the source node S.

Vertex S A B C D E
Distance 0 ∞ ∞ ∞ ∞ ∞
Now that the source vertex S is visited, add it into the visited array.

visited = {S}

Step 2

The vertex S has three adjacent vertices with various distances and the vertex with minimum distance
among them all is A. Hence, A is visited and the dist[A] is changed from ∞ to 6.

S→A=6

S→D=8

S→E=7

Vertex S A B C D E
Distance 0 6 ∞ ∞ 8 7

Visited = {S, A}

Step 3

There are two vertices visited in the visited array, therefore, the adjacent vertices must be checked for
both the visited vertices.

Vertex S has two more adjacent vertices to be visited yet: D and E. Vertex A has one adjacent vertex B.

Calculate the distances from S to D, E, B and select the minimum distance −

S → D = 8 and S → E = 7.

S → B = S → A + A → B = 6 + 9 = 15

Vertex S A B C D E
Distance 0 6 15 ∞ 8 7
Visited = {S, A, E}

Step 4

Calculate the distances to the adjacent vertices of all the visited vertices (S, A, E) and select the vertex
with the minimum distance.

S→D=8

S → B = 15

S → C = S → E + E → C = 7 + 5 = 12

Vertex S A B C D E
Distance 0 6 15 12 8 7
Visited = {S, A, E, D}

Step 5

Recalculate the distances of the unvisited vertices and, if a distance smaller than the existing distance is
found, replace the value in the distance array.

S → C = S → E + E → C = 7 + 5 = 12

S → C = S → D + D → C = 8 + 3 = 11

dist[C] = minimum (12, 11) = 11

S → B = S → A + A → B = 6 + 9 = 15

S → B = S → D + D → C + C → B = 8 + 3 + 12 = 23

dist[B] = minimum (15,23) = 15

Vertex S A B C D E
Distance 0 6 15 11 8 7
Visited = { S, A, E, D, C}

Step 6

The remaining unvisited vertex in the graph is B, with the minimum distance 15; it is added to the output
spanning tree.

Visited = {S, A, E, D, C, B}

The shortest path spanning tree is obtained as an output using the dijkstra’s algorithm

Program:

#include<stdio.h>
#include<limits.h>
#include<stdbool.h>

int min_dist(int[], bool[]);
void greedy_dijsktra(int[][6], int);

// finding the unvisited vertex with the minimum distance
int min_dist(int dist[], bool visited[]){
    int minimum = INT_MAX, ind;
    for(int k = 0; k < 6; k++) {
        if(visited[k] == false && dist[k] <= minimum) {
            minimum = dist[k];
            ind = k;
        }
    }
    return ind;
}

void greedy_dijsktra(int graph[6][6], int src){
    int dist[6];
    bool visited[6];

    for(int k = 0; k < 6; k++) {
        dist[k] = INT_MAX;
        visited[k] = false;
    }
    dist[src] = 0; // Source vertex dist is set 0

    for(int k = 0; k < 6; k++) {
        int m = min_dist(dist, visited);
        visited[m] = true;
        for(int i = 0; i < 6; i++) {
            // updating the dist of the neighbouring vertices
            if(!visited[i] && graph[m][i] && dist[m] != INT_MAX && dist[m] + graph[m][i] < dist[i])
                dist[i] = dist[m] + graph[m][i];
        }
    }

    printf("Vertex\t\tdist from source vertex\n");
    for(int k = 0; k < 6; k++) {
        char str = 65 + k;
        printf("%c\t\t\t%d\n", str, dist[k]);
    }
}

int main(){
    int graph[6][6] = {
        {0, 1, 2, 0, 0, 0},
        {1, 0, 0, 5, 1, 0},
        {2, 0, 0, 2, 3, 0},
        {0, 5, 2, 0, 2, 2},
        {0, 1, 3, 2, 0, 1},
        {0, 0, 0, 2, 1, 0}
    };
    greedy_dijsktra(graph, 0);
    return 0;
}
Job Sequencing with Deadline:

Job scheduling algorithm is applied to schedule the jobs on a single processor to maximize the profits.

The greedy approach to the job scheduling problem states that, given 'n' jobs, each with a deadline and a
profit, the jobs need to be scheduled in such a way that maximum profit is received within the maximum
deadline.

Job Scheduling Algorithm

Set of jobs with deadlines and profits are taken as an input with the job scheduling algorithm and
scheduled subset of jobs with maximum profit are obtained as the final output.

Algorithm

Find the maximum deadline value from the input set of jobs.

Once the maximum deadline is known, arrange the jobs in descending order of their profits.

Select the jobs with the highest profits whose time requirements do not exceed the maximum deadline.

The selected set of jobs is the output.

Examples

Consider the following tasks with their deadlines and profits. Schedule the tasks in such a way that they
produce maximum profit after being executed –

S. No. 1 2 3 4 5
Jobs J1 J2 J3 J4 J5
Deadlines 2 2 1 3 4
Profits 20 60 40 100 80
Step 1

Find the maximum deadline value, dm, from the deadlines given.

dm = 4.

Step 2

Arrange the jobs in descending order of their profits.

S. No. 1 2 3 4 5
Jobs J4 J5 J2 J3 J1
Deadlines 3 4 2 1 2
Profits 100 80 60 40 20
The maximum deadline, dm, is 4. Therefore, all the tasks must end before 4.

Choose the job with the highest profit, J4. It takes up 3 units of the maximum deadline of 4.

Therefore, the next job can take at most the 1 remaining unit of time.

Total Profit = 100.

Step 3

The next job with the highest profit is J5. But the time taken by J5 is 4, which exceeds the remaining
time of 1 by 3. Therefore, it cannot be added to the output set.

Step 4

The next job with the highest profit is J2. The time taken by J2 is 2, which also exceeds the remaining
1 unit of time. Therefore, it cannot be added to the output set.

Step 5

The next job with higher profit is J3. The time taken by J3 is 1, which does not exceed the given
deadline. Therefore, J3 is added to the output set.

Total Profit: 100 + 40 = 140

Step 6

Since, the maximum deadline is met, the algorithm comes to an end. The output set of jobs scheduled
within the deadline are {J4, J3} with the maximum profit of 140.

Program:

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

// A structure to represent a Job
typedef struct Jobs {
    char id;     // Job Id
    int dead;    // Deadline of Job
    int profit;  // Profit if Job is over before or on deadline
} Jobs;

// This function is used for sorting all Jobs according to profit
int compare(const void* a, const void* b){
    Jobs* temp1 = (Jobs*)a;
    Jobs* temp2 = (Jobs*)b;
    return (temp2->profit - temp1->profit);
}

// Find minimum between two numbers
int min(int num1, int num2){
    return (num1 > num2) ? num2 : num1;
}

int main(){
    Jobs arr[] = { { 'a', 2, 100 },
                   { 'b', 2, 20 },
                   { 'c', 1, 40 },
                   { 'd', 3, 35 },
                   { 'e', 1, 25 } };
    int n = sizeof(arr) / sizeof(arr[0]);

    printf("Following is maximum profit sequence of Jobs \n");

    // Sort the jobs in descending order of profit
    qsort(arr, n, sizeof(Jobs), compare);

    int result[n]; // To store result sequence of Jobs
    bool slot[n];  // To keep track of free time slots

    // Initialize all slots to be free
    for (int i = 0; i < n; i++)
        slot[i] = false;

    // Iterate through all given Jobs
    for (int i = 0; i < n; i++) {
        // Find a free slot for this Job, starting from the
        // last possible slot before its deadline
        for (int j = min(n, arr[i].dead) - 1; j >= 0; j--) {
            // Free slot found
            if (slot[j] == false) {
                result[j] = i;
                slot[j] = true;
                break;
            }
        }
    }

    // Print the result
    for (int i = 0; i < n; i++)
        if (slot[i])
            printf("%c ", arr[result[i]].id);

    return 0;
}

Optimal Storage on Tapes:

Optimal storage on tapes is one of the applications of the greedy method.

• The objective is to find the optimal retrieval time for accessing programs that are stored on a tape.

Description

• There are n programs that are to be stored on a computer tape of length L.

• Associated with each program i is a length li.

• Clearly, all programs can be stored on the tape if and only if the sum of the lengths of the programs is
at most L.

• We shall assume that whenever a program is to be retrieved from this tape, the tape is initially
positioned at the front.

• Hence, if the programs are stored in the order I = i1, i2, i3, …, in, the time tj needed to retrieve program ij
is proportional to the sum of the lengths of the programs stored before it and of the program itself:

tj = l(i1) + l(i2) + … + l(ij)

• If all programs are retrieved equally often, then the expected or mean retrieval time (MRT) is

MRT = (1/n) * Σ (j = 1 to n) tj

Minimizing the MRT is therefore equivalent to minimizing the total retrieval time

D(I) = Σ (j = 1 to n) Σ (k = 1 to j) l(ik)

Example:

Example 1: Let n=3 and (l1,l2,l3)=(5,10,3). There are n!=6 possible orderings. These orderings and their
respective D values are:

Ordering I D(I)

1,2,3 5+5+10+5+10+3=38

1,3,2 5+5+3+5+3+10=31

2,1,3 10+10+5+10+5+3=43

2,3,1 10+10+3+10+3+5=41

3,1,2 3+3+5+3+5+10=29

3,2,1 3+3+10+3+10+5=34

The optimal ordering is 3,1,2

Method

• The greedy method simply requires us to storethe programs in non-decreasing order of theirlengths.

• This ordering (sorting) can be carried out inO(n log n) time using an efficient sortingalgorithm
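As a small illustration (a sketch only, using the lengths from Example 1 above; the helper name
compare_lengths is our own), the greedy ordering and its D(I) value can be computed as follows:

#include <stdio.h>
#include <stdlib.h>

/* comparator for qsort: non-decreasing order of program lengths */
static int compare_lengths(const void *a, const void *b)
{
    return (*(const int *)a - *(const int *)b);
}

int main(void)
{
    int l[] = {5, 10, 3};                 /* program lengths from Example 1 */
    int n = sizeof(l) / sizeof(l[0]);

    qsort(l, n, sizeof(int), compare_lengths);

    /* D(I) = sum over j of (l[0] + ... + l[j]) for the greedy ordering */
    int prefix = 0, D = 0;
    for (int j = 0; j < n; j++) {
        prefix += l[j];                    /* retrieval time of the j-th program */
        D += prefix;
    }
    printf("Greedy ordering gives D(I) = %d\n", D);   /* prints 29, as in the table above */
    return 0;
}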

Optimal Merge Pattern:

Merge a set of sorted files of different lengths into a single sorted file. We need to find an optimal
solution, where the resultant file will be generated in minimum time.

If a number of sorted files is given, there are many ways to merge them into a single sorted file. This
merging can be performed pairwise; hence, this type of merging is called a 2-way merge pattern.

As different pairings require different amounts of time, in this strategy we want to determine an optimal
way of merging many files together. At each step, the two shortest sequences are merged.

Merging a p-record file and a q-record file requires p + q record moves, so the obvious choice is to merge
the two smallest files together at each step.

Two-way merge patterns can be represented by binary merge trees. Let us consider a set of n sorted
files {f1, f2, f3, …, fn}. Initially, each element of this is considered as a single node binary tree. To find
this optimal solution, the following algorithm is used.

Algorithm: TREE(n)

for i := 1 to n – 1 do
    declare new node
    node.leftchild := least(list)
    node.rightchild := least(list)
    node.weight := (node.leftchild).weight + (node.rightchild).weight
    insert(list, node);

return least(list);

At the end of this algorithm, the weight of the root node represents the optimal cost.

Example

Let us consider the given files, f1, f2, f3, f4 and f5 with 20, 30, 10, 5 and 30 number of elements
respectively.

If merge operations are performed according to the provided sequence, then

M1 = merge f1 and f2 => 20 + 30 = 50

M2 = merge M1 and f3 => 50 + 10 = 60

M3 = merge M2 and f4 => 60 + 5 = 65

M4 = merge M3 and f5 => 65 + 30 = 95

Hence, the total number of operations is

50 + 60 + 65 + 95 = 270

Now, the question arises is there any better solution?

Sorting the numbers according to their size in an ascending order, we get the following sequence

f4, f3, f1, f2, f5

Hence, merge operations can be performed on this sequence

M1 = merge f4 and f3 => 5 + 10 = 15

M2 = merge M1 and f1 => 15 + 20 = 35

M3 = merge M2 and f2 => 35 + 30 = 65

M4 = merge M3 and f5 => 65 + 30 = 95

Therefore, the total number of operations is

15 + 35 + 65 + 95 = 210

Obviously, this is better than the previous one.

In this context, we are now going to solve the problem using this algorithm. Starting from the initial set
of file sizes {20, 30, 10, 5, 30}, the two smallest files are merged at every step:

Step 1: merge 5 and 10 → 15, giving {20, 30, 15, 30}

Step 2: merge 15 and 20 → 35, giving {35, 30, 30}

Step 3: merge 30 and 30 → 60, giving {35, 60}

Step 4: merge 35 and 60 → 95
Hence, the solution takes 15 + 35 + 60 + 95 = 205 number of comparisons.
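A minimal C sketch of this greedy strategy is given below; it works directly on an array of file sizes
(taken from the example above) instead of building the binary merge tree used by the TREE algorithm:

#include <stdio.h>

int main(void)
{
    int size[] = {20, 30, 10, 5, 30};      /* record counts of f1..f5 */
    int n = sizeof(size) / sizeof(size[0]);
    int total = 0;

    /* repeatedly merge the two smallest remaining files */
    while (n > 1) {
        int a = 0, b = 1;
        if (size[b] < size[a]) { a = 1; b = 0; }
        for (int i = 2; i < n; i++) {       /* find the two smallest entries */
            if (size[i] < size[a]) { b = a; a = i; }
            else if (size[i] < size[b]) { b = i; }
        }
        int merged = size[a] + size[b];     /* cost of this merge step */
        total += merged;

        /* replace the two merged files by the new file */
        size[a] = merged;
        size[b] = size[n - 1];
        n--;
    }
    printf("Total merge cost = %d\n", total);   /* prints 205, as computed above */
    return 0;
}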

Dynamic Programming:

Dynamic programming approach is similar to divide and conquer in breaking down the problem into
smaller and yet smaller possible sub-problems. But unlike divide and conquer, these sub-problems are
not solved independently. Rather, results of these smaller sub-problems are remembered and used for
similar or overlapping sub-problems.

Mostly, dynamic programming algorithms are used for solving optimization problems. Before solving
the sub-problem at hand, a dynamic programming algorithm examines the results of the previously solved
sub-problems. The solutions of the sub-problems are combined in order to achieve the best optimal final
solution.

All-Pairs Shortest Paths:

The all-pairs shortest path algorithm, also known as the Floyd-Warshall algorithm, is used to solve the
all-pairs shortest path problem on a given weighted graph. As a result, the algorithm generates a matrix
that represents the minimum distance from any node to all other nodes in the graph.

The Floyd-Warshall algorithm works on both directed and undirected weighted graphs, provided the
graphs do not contain any negative cycles. A negative cycle is a cycle whose edge weights sum to a
negative number.

Since the algorithm deals with overlapping sub-problems (the paths found with the vertices acting as
pivots are stored and reused for the next steps), it uses the dynamic programming approach.

Floyd-Warshall algorithm is one of the methods in All-pairs shortest path algorithms and it is solved
using the Adjacency Matrix representation of graphs.

Floyd-Warshall Algorithm

Consider a graph, G = {V, E} where V is the set of all vertices present in the graph and E is the set of
all the edges in the graph. The graph, G, is represented in the form of an adjacency matrix, A, that
contains all the weights of every edge connecting two vertices.

Algorithm

Step 1 − Construct an adjacency matrix A with all the costs of edges present in the graph. If there is no
path between two vertices, mark the value as ∞.

Step 2 − Derive another adjacency matrix A1 from A keeping the first row and first column of the
original adjacency matrix intact in A1. And for the remaining values, say A1[i,j], if
A[i,j]>A[i,k]+A[k,j] then replace A1[i,j] with A[i,k]+A[k,j]. Otherwise, do not change the values. Here,
in this step, k = 1 (first vertex acting as pivot).

Step 3 − Repeat Step 2 for all the vertices in the graph by changing the k value for every pivot vertex
until the final matrix is achieved.

Step 4 − The final adjacency matrix obtained is the final solution with all the shortest paths.
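Steps 2 and 3 amount to the standard Floyd-Warshall update: for every pivot k and every pair of vertices
(i, j),

Ak[i, j] = min( Ak-1[i, j], Ak-1[i, k] + Ak-1[k, j] )

where Ak denotes the matrix obtained after vertex k has been used as the pivot and A0 is the original
adjacency matrix A.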

Example

Consider the following directed weighted graph G = {V, E}. Find the shortest paths between all the
vertices of the graphs using the Floyd-Warshall algorithm.

Solution

Step 1

Construct an adjacency matrix A with all the distances as values, using ∞ where there is no direct edge.

A =
0    ∞    3    ∞    2
5    0    ∞    ∞    ∞
∞    1    0    2    ∞
6    ∞    4    0    5
∞    7    ∞    3    0

Step 2

Considering the above matrix as the input, derive the matrix A1 by keeping the first row and first column
intact and taking k = 1; every other value A1[i, j] is replaced by A[i, k] + A[k, j] whenever that sum is
smaller.

A1 =
0    ∞    3    ∞    2
5    0    8    ∞    7
∞    1    0    2    ∞
6    ∞    4    0    5
∞    7    ∞    3    0

Step 3

Considering A1 as the input, derive A2 with k = 2, keeping the second row and second column intact.

A2 =
0    ∞    3    ∞    2
5    0    8    ∞    7
6    1    0    2    8
6    ∞    4    0    5
12   7    15   3    0

Step 4

Considering A2 as the input, derive A3 with k = 3, keeping the third row and third column intact.

A3 =
0    4    3    5    2
5    0    8    10   7
6    1    0    2    8
6    5    4    0    5
12   7    15   3    0

Step 5

Considering A3 as the input, derive A4 with k = 4, keeping the fourth row and fourth column intact.

A4 =
0    4    3    5    2
5    0    8    10   7
6    1    0    2    7
6    5    4    0    5
9    7    7    3    0

Step 6

Considering A4 as the input, derive A5 with k = 5, keeping the fifth row and fifth column intact. No entry
improves further, so A5 is the final matrix of all-pairs shortest distances.

A5 =
0    4    3    5    2
5    0    8    10   7
6    1    0    2    7
6    5    4    0    5
9    7    7    3    0

Program:

#include<stdio.h>

int min(int, int);

void floyds(int p[10][10], int n){
    int i, j, k;
    for (k = 1; k <= n; k++)
        for (i = 1; i <= n; i++)
            for (j = 1; j <= n; j++)
                if (i == j)
                    p[i][j] = 0;
                else
                    p[i][j] = min(p[i][j], p[i][k] + p[k][j]);
}

int min(int a, int b){
    if (a < b)
        return a;
    else
        return b;
}

int main(){
    int n, i, j;
    int p[10][10];

    n = 3;

    /* initialise all distances to a large value (no edge) */
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            p[i][j] = 999;

    p[1][2] = 10;
    p[2][3] = 15;
    p[3][1] = 12;

    printf("\n Matrix of input data:\n");
    for (i = 1; i <= n; i++) {
        for (j = 1; j <= n; j++)
            printf("%d \t", p[i][j]);
        printf("\n");
    }

    floyds(p, n);

    printf("\n Transitive closure:\n");
    for (i = 1; i <= n; i++) {
        for (j = 1; j <= n; j++)
            printf("%d \t", p[i][j]);
        printf("\n");
    }

    printf("\n The shortest paths are:\n");
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++) {
            if (i != j)
                printf("\n <%d,%d>=%d", i, j, p[i][j]);
        }

    return 0;
}

Optimal Binary Search Tree:

In binary search tree, the nodes in the left subtree have lesser value than the root node and the nodes in
the right subtree have greater value than the root node.

We know the key value of each node in the tree, and we also know the frequency of each node, i.e., how
often that node is searched. The frequency and key value determine the overall cost of searching a node.
The cost of searching is a very important factor in various applications, and the overall cost of searching
should be as low as possible. The time required to search a node in an arbitrary BST can be higher than in
a balanced binary search tree, since a balanced tree contains fewer levels. One way to reduce the cost of a
binary search tree is to build what is known as an optimal binary search tree.

Let's understand through an example.

If the keys are 10, 20, 30, 40, 50, 60, 70.

In such a binary search tree, all the nodes in the left subtree are smaller than the value of the root node,
and all the nodes in the right subtree are larger than the value of the root node. The maximum time
required to search a node is proportional to the height of the tree, which is about log n when the tree is
balanced.

Now we will see how many binary search trees can be made from the given number of keys.

For example: 10, 20, 30 are the keys, and the following are the binary search trees that can be made out
from these keys.

The formula for calculating the number of distinct binary search trees on n keys is the Catalan number:

Number of trees = (2n)! / ((n + 1)! * n!)

When we use the above formula with n = 3, we find that a total of 5 trees can be created.

The cost required for searching an element depends on the number of comparisons made to find it. Now,
we will calculate the average number of comparisons for each of the five binary search trees formed from
the keys 10, 20, 30.

Four of the five trees are skewed: their height is 3, so searching their three nodes takes 1, 2 and 3
comparisons, giving an average of (1 + 2 + 3) / 3 = 2 comparisons per tree.

The remaining tree has 20 as the root with 10 and 30 as its children. Searching its nodes takes 1, 2 and 2
comparisons, giving an average of (1 + 2 + 2) / 3 ≈ 1.67 comparisons.

In the balanced case, the number of comparisons is less because the height of the tree is less; it is a
balanced binary search tree.

To find the optimal binary search tree, we will determine the frequency of searching a key.

Let's assume that frequencies associated with the keys 10, 20, 30 are 3, 2, 5.

The above trees have different total costs when the frequencies are taken into account. The tree with the
lowest cost would be considered the optimal binary search tree. The tree with cost 17 (the one having 30
as the root, 10 as its left child and 20 as the right child of 10) has the lowest cost, so it would be
considered the optimal binary search tree.

Dynamic Approach:

Consider the below table, which contains the keys and their frequencies.

Keys        10   20   30   40
Frequency   4    2    6    3

First, we will calculate the values where j - i is equal to zero.

When i=0, j=0, then j-i = 0

When i = 1, j=1, then j-i = 0

When i = 2, j=2, then j-i = 0

When i = 3, j=3, then j-i = 0

When i = 4, j=4, then j-i = 0

Therefore, c[0, 0] = 0, c[1 , 1] = 0, c[2,2] = 0, c[3,3] = 0, c[4,4] = 0

Now we will calculate the values where j-i equal to 1.

When j=1, i=0 then j-i = 1

When j=2, i=1 then j-i = 1

When j=3, i=2 then j-i = 1

When j=4, i=3 then j-i = 1

Now to calculate the cost, we will consider only the jth value.

The cost of c[0,1] is 4 (The key is 10, and the cost corresponding to key 10 is 4).

The cost of c[1,2] is 2 (The key is 20, and the cost corresponding to key 20 is 2).

The cost of c[2,3] is 6 (The key is 30, and the cost corresponding to key 30 is 6)

The cost of c[3,4] is 3 (The key is 40, and the cost corresponding to key 40 is 3)

Now we will calculate the values where j-i = 2

When j=2, i=0 then j-i = 2

When j=3, i=1 then j-i = 2

When j=4, i=2 then j-i = 2

In this case, we will consider two keys.

When i=0 and j=2, then keys 10 and 20. There are two possible trees that can be made out from these
two keys shown below:

In the first binary tree, cost would be: 4*1 + 2*2 = 8

In the second binary tree, cost would be: 4*2 + 2*1 = 10

The minimum cost is 8; therefore, c[0,2] = 8

When i=1 and j=3, then keys 20 and 30. There are two possible trees that can be made out from these
two keys shown below:

In the first binary tree, cost would be: 1*2 + 2*6 = 14

In the second binary tree, cost would be: 1*6 + 2*2 = 10

The minimum cost is 10; therefore, c[1,3] = 10

When i=2 and j=4, we will consider the keys at 3 and 4, i.e., 30 and 40. There are two possible trees
that can be made out from these two keys shown as below:

In the first binary tree, cost would be: 1*6 + 2*3 = 12


In the second binary tree, cost would be: 1*3 + 2*6 = 15

The minimum cost is 12, therefore, c[2,4] = 12

Now we will calculate the values when j-i = 3

When j=3, i=0 then j-i = 3

When j=4, i=1 then j-i = 3

When i=0, j=3 then we will consider three keys, i.e., 10, 20, and 30.

The following are the trees that can be made if 10 is considered as a root node.

In the above tree, 10 is the root node, 20 is the right child of node 10, and 30 is the right child of node
20.

Cost would be: 1*4 + 2*2 + 3*6 = 26

In the above tree, 10 is the root node, 30 is the right child of node 10, and 20 is the left child of node
30.

Cost would be: 1*4 + 2*6 + 3*2 = 22

The following tree can be created if 20 is considered as the root node.

In the above tree, 20 is the root node, 30 is the right child of node 20, and 10 is the left child of node
20.

Cost would be: 1*2 + 4*2 + 6*2 = 22

The following are the trees that can be created if 30 is considered as the root node.

In the above tree, 30 is the root node, 20 is the left child of node 30, and 10 is the left child of node 20.

Cost would be: 1*6 + 2*2 + 3*4 = 22

In the above tree, 30 is the root node, 10 is the left child of node 30 and 20 is the right child of node 10.

Cost would be: 1*6 + 2*4 + 3*2 = 20

Therefore, the minimum cost is 20 which is the 3rd root. So, c[0,3] is equal to 20.

When i=1 and j=4 then we will consider the keys 20, 30, 40

c[1,4] = min{ c[1,1] + c[2,4], c[1,2] + c[3,4], c[1,3] + c[4,4] } + 11

= min{0+12, 2+3, 10+0}+ 11

= min{12, 5, 10} + 11

The minimum value is 5; therefore, c[1,4] = 5+11 = 16

Now we will calculate the values when j-i = 4

When j=4 and i=0 then j-i = 4

In this case, we will consider four keys, i.e., 10, 20, 30 and 40. The frequencies of 10, 20, 30 and 40 are
4, 2, 6 and 3 respectively.

w[0, 4] = 4 + 2 + 6 + 3 = 15

If we consider 10 as the root node then

C[0, 4] = min {c[0,0] + c[1,4]}+ w[0,4]

= min {0 + 16} + 15= 31

If we consider 20 as the root node then

C[0,4] = min{c[0,1] + c[2,4]} + w[0,4]

= min{4 + 12} + 15

= 16 + 15 = 31

If we consider 30 as the root node then,

C[0,4] = min{c[0,2] + c[3,4]} +w[0,4]

= min {8 + 3} + 15

= 26

If we consider 40 as the root node then,

C[0,4] = min{c[0,3] + c[4,4]} + w[0,4]

= min{20 + 0} + 15

= 35

In the above cases, we have observed that 26 is the minimum cost; therefore, c[0,4] is equal to 26.

The optimal binary search tree can therefore be created with 30 as the root, 10 as its left child (with 20 as
the right child of 10), and 40 as its right child.
The general formula for calculating the minimum cost is:

C[i, j] = min{ c[i, k-1] + c[k, j] : i < k ≤ j } + w(i, j)

where w(i, j) is the sum of the frequencies of the keys from position i+1 to position j.
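As a rough sketch (not part of the original notes; the array names are our own), the recurrence can be
evaluated bottom-up in C for the example keys 10, 20, 30, 40 with frequencies 4, 2, 6, 3:

#include <stdio.h>
#define N 4

int main(void)
{
    int freq[N] = {4, 2, 6, 3};        /* frequencies of keys 10, 20, 30, 40 */
    int c[N + 1][N + 1] = {0};         /* c[i][j]: cost of optimal BST over keys i+1 .. j */
    int w[N + 1][N + 1] = {0};         /* w[i][j]: sum of frequencies of keys i+1 .. j */

    for (int i = 0; i <= N; i++)
        for (int j = i + 1; j <= N; j++)
            w[i][j] = w[i][j - 1] + freq[j - 1];

    for (int len = 1; len <= N; len++) {          /* len = j - i */
        for (int i = 0; i + len <= N; i++) {
            int j = i + len;
            int best = 1000000;
            for (int k = i + 1; k <= j; k++) {    /* try key k as the root */
                int cost = c[i][k - 1] + c[k][j];
                if (cost < best) best = cost;
            }
            c[i][j] = best + w[i][j];
        }
    }
    printf("Minimum cost of the optimal BST: %d\n", c[0][N]);   /* prints 26 */
    return 0;
}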

0-1 Knapsack:

Unlike the fractional knapsack, in this problem the items are always taken whole; the fractional part of an
item is never used. Either an item is added to the knapsack or it is not. That is why this method is known
as the 0-1 Knapsack problem.

Hence, in case of 0-1 Knapsack, the value of xi can be either 0 or 1, where other constraints remain the
same.

The 0-1 Knapsack problem cannot be solved by the Greedy approach, because the Greedy approach does
not ensure an optimal solution here. In many instances, the Greedy approach may fail to give an optimal
solution.

0-1 Knapsack Algorithm

Problem Statement − A thief is robbing a store and can carry a maximal weight of W into his knapsack.
There are n items and weight of ith item is wi and the profit of selecting this item is pi. What items
should the thief take?

Let i be the highest-numbered item in an optimal solution S for capacity W. Then S' = S − {i} is an
optimal solution for capacity W − wi, and the value of the solution S is vi plus the value of the sub-
problem.

We can express this fact in the following formula: define c[i, w] to be the solution for items 1,2, … , i
and the maximum weight w.

The algorithm takes the following inputs

The maximum weight W

The number of items n

The two sequences v = <v1, v2, …, vn> and w = <w1, w2, …, wn>

The set of items to take can be deduced from the table, starting at c[n, w] and tracing backwards where
the optimal values came from.

If c[i, w] = c[i-1, w], then item i is not part of the solution, and we continue tracing with c[i-1, w].
Otherwise, item i is part of the solution, and we continue tracing with c[i-1, w-wi].

Dynamic-0-1-knapsack (v, w, n, W)

for w = 0 to W do
    c[0, w] = 0
for i = 1 to n do
    c[i, 0] = 0
    for w = 1 to W do
        if wi ≤ w then
            if vi + c[i-1, w-wi] > c[i-1, w] then
                c[i, w] = vi + c[i-1, w-wi]
            else c[i, w] = c[i-1, w]
        else
            c[i, w] = c[i-1, w]

The following examples will establish our statement.

Example

Let us consider that the capacity of the knapsack is W = 8 and the items are as shown in the following
table.

Item A B C D
Profit 2 4 7 10
Weight 1 3 5 7
Solution

Using a greedy approach (for instance, always taking the lightest remaining item first), the knapsack
might hold A + B = 4 kg of weight with a profit of 2 + 4 = 6. But that would not be the optimal solution.

Therefore, dynamic programming must be adopted to solve 0-1 knapsack problems.

Step 1

Construct a table with the possible knapsack weights (0 up to the maximum weight) along one dimension
and the items, with their respective weights and profits, along the other.

Values to be stored in the table are cumulative profits of the items whose weights do not exceed the
maximum weight of the knapsack (designated values of each row)

So we add zeroes to the 0th row and 0th column because if the weight of item is 0, then it weighs
nothing; if the maximum weight of knapsack is 0, then no item can be added into the knapsack.

The remaining values are filled with the maximum profit achievable with respect to the items and
weight per column that can be stored in the knapsack.

The formula to store the profit values is −

c[i, w] = max{ c[i-1, w], c[i-1, w - w[i]] + P[i] }

By computing all the values using the formula, the table obtained would be –
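Filling the recurrence for W = 8 with items A (1 kg, profit 2), B (3 kg, 4), C (5 kg, 7) and D (7 kg, 10)
gives the following values, where each cell is the best profit using the items of that row with at most the
weight of that column:

Items / w    0   1   2   3   4   5   6   7   8
A            0   2   2   2   2   2   2   2   2
A, B         0   2   2   4   6   6   6   6   6
A, B, C      0   2   2   4   6   7   9   9   11
A, B, C, D   0   2   2   4   6   7   9   10  12

The bottom-right entry, 12, is the maximum achievable profit.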

To find the items to be added to the knapsack, locate the maximum profit in the table and trace back the
items that make up that profit; in this example, they are the items with weights {1, 7}, i.e., A and D.

The optimal solution is {1, 7} (items A and D) with the maximum profit of 12.

Analysis

This algorithm takes Θ(n.W) time, as table c has (n+1).(W+1) entries, where each entry requires Θ(1)
time to compute.

Program:

#include <stdio.h>
#include <string.h>

int findMax(int n1, int n2){
    if(n1 > n2) {
        return n1;
    } else {
        return n2;
    }
}

int knapsack(int W, int wt[], int val[], int n){
    int K[n+1][W+1];

    for(int i = 0; i <= n; i++) {
        for(int w = 0; w <= W; w++) {
            if(i == 0 || w == 0) {
                K[i][w] = 0;
            } else if(wt[i-1] <= w) {
                K[i][w] = findMax(val[i-1] + K[i-1][w-wt[i-1]], K[i-1][w]);
            } else {
                K[i][w] = K[i-1][w];
            }
        }
    }
    return K[n][W];
}

int main(){
    int val[5] = {70, 20, 50};
    int wt[5] = {11, 12, 13};
    int W = 30;
    int len = sizeof val / sizeof val[0];

    printf("Maximum Profit achieved with this knapsack: %d", knapsack(W, wt, val, len));
    return 0;
}

Reliability Design:

The reliability design problem is the design of a system composed of several devices connected in series
or in parallel. Reliability means the probability that a device works successfully.

Let’s say, we have to set up a system consisting of D1, D2, D3, …………, and Dn devices, each device
has some costs C1, C2, C3, …….., Cn. Each device has a reliability of 0.9 then the entire system has
reliability which is equal to the product of the reliabilities of all devices i.e., πri = (0.9)4.

It means that 35% of the system has a chance to fail, due to the failure of any one device. the problem is
that we want to construct a system whose reliability is maximum. How it can be done so? we can think
that we can take more than one copy of each device so that if one device fails we can use the copy of
that device, which means we can connect the devices parallel.

When 3 devices of the same type are connected in parallel in stage 1, each having reliability 0.9, then:

Reliability of device 1, r1 = 0.9

The probability that the device does not work well = 1 – r1 = 1 – 0.9 = 0.1

The probability that all three copies failed = (1 – r1)^3 = (0.1)^3 = 0.001

The probability that at least one of the three copies works properly = 1 – (1 – r1)^3 = 1 – 0.001 = 0.999

We can see that the system with multiple copies of the same device parallel may increase the reliability
of the system.

Given a cost C, we have to set up the system by buying the devices, and we need to find the number of
copies of each device within the cost such that the reliability of the system is maximized.

We have to design a three-stage system with device types D1, D2, and D3. The costs are $30, $15, and
$20 respectively. The cost of the system is to be no more than $105. The reliability of each device type
is 0.9, 0.8, and 0.5 respectively.

Pi Ci ri
P1 30 0.9
P2 15 0.8
P3 20 0.5
Explanation:

Given that we have total cost C = 105,

sum of all Ci = 30 + 15 + 20 = 65, the remaining amount we can use to buy a copy of each device in
such a way that the reliability of the system, may increase.

Remaining amount = C – sum of Ci = 105 – 65 = 40

Now, let us calculate how many copies of each device we can buy with $40. If we spend all $40 on
device 1 we can buy 40/30 = 1 extra copy, and we already have 1 copy, so overall 2 copies of device 1.
In general, the upper bound on the number of copies of each device is:

ui = floor( (C – ΣCj) / Ci ) + 1    (1 is added because we already have one copy of each device)

C1=30, C2=15, C3=20, C=105

r1=0.9, r2=0.8, r3=0.5

u1 = floor((105 – 65)/30) + 1 = 2

u2 = floor((105 – 65)/15) + 1 = 3

u3 = floor((105 – 65)/20) + 1 = 3

A tuple is just an ordered pair containing the reliability and the total cost for a particular choice of the
mi's. We can form such (reliability, cost) pairs for each stage; the notation mSi denotes the pairs obtained
at stage i when m copies of the device are used.

S0 = {(1,0)}

Device 1:

Each Si is obtained from Si-1 by trying out all possible values for mi and combining the resulting tuples
together.

let us consider P1 :

1S1 = {(0.9, 30)} where 0.9 is the reliability of stage1 with a copy of one device and 30 is the cost of
P1.

Now, two copies of device1 so, we can take one more copy as:

2S1 = { (0.99, 60) }, where 0.99 is the reliability of stage one with two copies of the device; it is
computed as 1 – (1 – r1)^2 = 1 – (1 – 0.9)^2 = 1 – 0.01 = 0.99.

After combining both conditions of Stage1 i.e., with copy one and copy of 2 devices respectively.

S1 = { ( 0.9, 30 ), ( 0.99, 60 ) }

Device 2:

S2 will contain all reliability and cost pairs that we will get by taking all possible values for the stage2
in conjunction with possibilities calculated in S1.

First of all we will check the reliability at stage2 when we have 1, 2, and 3 as a copy of device. let us
assume that Ei is the reliability of the particular stage with n number of devices, so for S2 we first
calculate:

E2 (with 1 copy) = 1 – (1 – r2)^1 = 1 – (1 – 0.8) = 0.8

E2 (with 2 copies) = 1 – (1 – r2)^2 = 1 – (1 – 0.8)^2 = 0.96

E2 (with 3 copies) = 1 – (1 – r2)^3 = 1 – (1 – 0.8)^3 = 0.992

If we use 1 copy of P1 and 1 copy of P2 reliability will be 0.9*0.8 and the cost will be 30+15

With one copy of device two, 1S2 = { (0.8, 15) }; in conjunction with S1 pair (0.9, 30) this gives
{ (0.72, 45) }.

Similarly, we can calculate the other pairs, such as (0.792, 75) from 2 copies of D1 and 1 copy of D2, and
(0.864, 60) from 1 copy of D1 and 2 copies of D2.

A pair must be discarded if the money left after it cannot buy at least one copy of Device 3 (we need a
minimum of 1 copy of every device in every stage). Any combination costing more than 105 – 20 = 85
therefore cannot be extended and is dropped. We get S2 = { (0.72, 45), (0.792, 75), (0.864, 60) }. There
are other possible ordered pairs too, but all of them exceed the cost limitation.

Up to this point we have the ordered pairs:

S1 = { (0.9, 30), (0.99, 60) }

S2 = { (0.72, 45), (0.792, 75), (0.864, 60) }

Device 3:

First of all we will check the reliability at stage3 when we have 1, 2, and 3 as a copy of device. Ei is the
reliability of the particular stage with n number of devices, so for S3 we first calculate:

E3 (with 1 copy) = 1 – (1 – r3)^1 = 1 – (1 – 0.5) = 0.5

E3 (with 2 copies) = 1 – (1 – r3)^2 = 1 – (1 – 0.5)^2 = 0.75

E3 (with 3 copies) = 1 – (1 – r3)^3 = 1 – (1 – 0.5)^3 = 0.875

Now, possible ordered pairs of device three are as- S3 = { ( 0.36, 65), ( 0.396, 95), ( 0.432, 80), ( 0.54,
85), ( 0.648, 100 ), ( 0.63, 105 ) }

(0.648,100) is the solution pair, 0.648 is the maximum reliability we can get under the cost constraint of
105.
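The same answer can be checked with a small brute-force sketch (our own illustration, not part of the
original notes) that tries every feasible number of copies m1, m2, m3 within the upper bounds u1, u2, u3
computed above:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double r[3] = {0.9, 0.8, 0.5};   /* device reliabilities r1, r2, r3 */
    int    c[3] = {30, 15, 20};      /* device costs C1, C2, C3 */
    int    u[3] = {2, 3, 3};         /* upper bounds u1, u2, u3 */
    int    C    = 105;               /* total budget */

    double best = 0.0;
    int bm1 = 0, bm2 = 0, bm3 = 0;

    /* try every combination of copies within the bounds and the budget */
    for (int m1 = 1; m1 <= u[0]; m1++)
        for (int m2 = 1; m2 <= u[1]; m2++)
            for (int m3 = 1; m3 <= u[2]; m3++) {
                int cost = m1 * c[0] + m2 * c[1] + m3 * c[2];
                if (cost > C) continue;
                double rel = (1 - pow(1 - r[0], m1))
                           * (1 - pow(1 - r[1], m2))
                           * (1 - pow(1 - r[2], m3));
                if (rel > best) { best = rel; bm1 = m1; bm2 = m2; bm3 = m3; }
            }

    /* prints 0.648 with copies (1, 2, 2), matching the result derived above */
    printf("Best reliability %.3f with copies (%d, %d, %d)\n", best, bm1, bm2, bm3);
    return 0;
}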

Travelling Salesman Problem:

The travelling salesman problem is one of the most notorious computational problems. We can use the
brute-force approach to evaluate every possible tour and select the best one. For n vertices in a graph,
there are (n − 1)! possibilities, so the complexity is very high.

However, instead of using brute force, the dynamic programming approach obtains the solution in less
time, though there is still no polynomial-time algorithm.

Travelling Salesman Dynamic Programming Algorithm

Let us consider a graph G = (V,E), where V is a set of cities and E is a set of weighted edges. An edge
e(u, v) represents that vertices u and v are connected. Distance between vertex u and v is d(u, v), which
should be non-negative.

Suppose we have started at city 1 and after visiting some cities now we are in city j. Hence, this is a
partial tour. We certainly need to know j, since this will determine which cities are most convenient to
visit next. We also need to know all the cities visited so far, so that we don't repeat any of them. Hence,
this is an appropriate sub-problem.

For a subset of cities S ⊆ {1, 2, 3, ..., n} that includes 1, and j ∈ S, let C(S, j) be the length of the shortest
path visiting each node in S exactly once, starting at 1 and ending at j.

When |S| > 1, we define C(S, 1) = ∞, since the path cannot both start and end at 1.

Now, let us express C(S, j) in terms of smaller sub-problems. We need to start at 1 and end at j. We should
select the previous city i in such a way that

C(S, j) = min{ C(S − {j}, i) + d(i, j) : i ∈ S and i ≠ j }

Algorithm: Traveling-Salesman-Problem

C({1}, 1) = 0
for s = 2 to n do
    for all subsets S ⊆ {1, 2, 3, …, n} of size s and containing 1
        C(S, 1) = ∞
        for all j ∈ S and j ≠ 1
            C(S, j) = min { C(S − {j}, i) + d(i, j) : i ∈ S and i ≠ j }
Return minj C({1, 2, 3, …, n}, j) + d(j, 1)

Analysis

There are at most 2^n · n sub-problems and each one takes linear time to solve. Therefore, the total
running time is O(2^n · n^2).

Example

In the following example, we will illustrate the steps to solve the travelling salesman problem.

From the above graph, the following table is prepared.

1 2 3 4
1 0 10 15 20
2 5 0 9 10
3 6 13 0 12
4 8 8 9 0
S=Φ

Cost(2,Φ,1)=d(2,1)=5

Cost(3,Φ,1)=d(3,1)=6

Cost(4,Φ,1)=d(4,1)=8

|S| = 1

Cost(i, S, 1) = min{ d[i, j] + Cost(j, S − {j}, 1) : j ∈ S }

Cost(2,{3},1)=d[2,3]+Cost(3,Φ,1)=9+6=15

Cost(2,{4},1)=d[2,4]+Cost(4,Φ,1)=10+8=18

Cost(3,{2},1)=d[3,2]+Cost(2,Φ,1)=13+5=18

Cost(3,{4},1)=d[3,4]+Cost(4,Φ,1)=12+8=20

Cost(4,{3},1)=d[4,3]+Cost(3,Φ,1)=9+6=15

Cost(4,{2},1)=d[4,2]+Cost(2,Φ,1)=8+5=13

|S| = 2

Cost(2, {3,4}, 1) = min{ d[2,3] + Cost(3, {4}, 1) = 9 + 20 = 29,  d[2,4] + Cost(4, {3}, 1) = 10 + 15 = 25 } = 25

Cost(3, {2,4}, 1) = min{ d[3,2] + Cost(2, {4}, 1) = 13 + 18 = 31,  d[3,4] + Cost(4, {2}, 1) = 12 + 13 = 25 } = 25

Cost(4, {2,3}, 1) = min{ d[4,2] + Cost(2, {3}, 1) = 8 + 15 = 23,  d[4,3] + Cost(3, {2}, 1) = 9 + 18 = 27 } = 23

|S| = 3

Cost(1, {2,3,4}, 1) = min{ d[1,2] + Cost(2, {3,4}, 1) = 10 + 25 = 35,  d[1,3] + Cost(3, {2,4}, 1) = 15 + 25 = 40,
d[1,4] + Cost(4, {2,3}, 1) = 20 + 23 = 43 } = 35

The minimum cost path is 35.
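A compact dynamic programming sketch of this procedure in C for the 4-city matrix above is given below
(a bitmask represents the subset S; the variable names are our own). The program given in the next section
takes a different, simpler greedy approach:

#include <stdio.h>
#define N 4
#define INF 1000000

int d[N][N] = {
    {0, 10, 15, 20},
    {5,  0,  9, 10},
    {6, 13,  0, 12},
    {8,  8,  9,  0}
};

int main(void)
{
    /* C[mask][j]: cost of the cheapest path that starts at city 0, visits exactly
       the cities in mask (city 0 is always included), and ends at city j. */
    static int C[1 << N][N];
    for (int mask = 0; mask < (1 << N); mask++)
        for (int j = 0; j < N; j++)
            C[mask][j] = INF;
    C[1][0] = 0;

    for (int mask = 1; mask < (1 << N); mask++) {
        if (!(mask & 1)) continue;                 /* every partial tour contains city 0 */
        for (int j = 0; j < N; j++) {
            if (!(mask & (1 << j)) || C[mask][j] == INF) continue;
            for (int k = 0; k < N; k++) {          /* extend the path to a new city k */
                if (mask & (1 << k)) continue;
                int nmask = mask | (1 << k);
                if (C[mask][j] + d[j][k] < C[nmask][k])
                    C[nmask][k] = C[mask][j] + d[j][k];
            }
        }
    }

    int full = (1 << N) - 1, best = INF;
    for (int j = 1; j < N; j++)                    /* close the tour back to city 0 */
        if (C[full][j] + d[j][0] < best)
            best = C[full][j] + d[j][0];

    printf("Minimum tour cost = %d\n", best);      /* prints 35, as derived above */
    return 0;
}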

Program:

#include<stdio.h>

int ary[10][10], completed[10], n, cost = 0;

void takeInput();
int least(int c);
void mincost(int city);

void takeInput()
{
    int i, j;

    printf("Enter the number of villages: ");
    scanf("%d", &n);

    printf("\nEnter the Cost Matrix\n");
    for (i = 0; i < n; i++) {
        printf("\nEnter Elements of Row: %d\n", i + 1);
        for (j = 0; j < n; j++)
            scanf("%d", &ary[i][j]);
        completed[i] = 0;
    }

    printf("\n\nThe cost list is:");
    for (i = 0; i < n; i++) {
        printf("\n");
        for (j = 0; j < n; j++)
            printf("\t%d", ary[i][j]);
    }
}

/* pick the next unvisited city reachable from city c */
int least(int c)
{
    int i, nc = 999;
    int min = 999, kmin;

    for (i = 0; i < n; i++) {
        if ((ary[c][i] != 0) && (completed[i] == 0)) {
            if (ary[c][i] + ary[i][c] < min) {
                min = ary[i][0] + ary[c][i];
                kmin = ary[c][i];
                nc = i;
            }
        }
    }
    if (min != 999)
        cost += kmin;
    return nc;
}

void mincost(int city)
{
    int ncity;

    completed[city] = 1;
    printf("%d--->", city + 1);

    ncity = least(city);
    if (ncity == 999) {              /* no unvisited city left: return to the start */
        ncity = 0;
        printf("%d", ncity + 1);
        cost += ary[city][ncity];
        return;
    }
    mincost(ncity);
}

int main()
{
    takeInput();

    printf("\n\nThe Path is:\n");
    mincost(0); // passing 0 because starting vertex

    printf("\n\nMinimum cost is %d\n ", cost);
    return 0;
}
