
Please read this disclaimer before proceeding:

This document is confidential and intended solely for the educational purpose of
RMK Group of Educational Institutions. If you have received this document
through email in error, please notify the system manager. This document
contains proprietary information and is intended only to the respective group /
learning community as intended. If you are not the addressee you should not
disseminate, distribute or copy through e-mail. Please notify the sender
immediately by e-mail if you have received this document by mistake and delete
this document from your system. If you are not the intended recipient you are
notified that disclosing, copying, distributing or taking any action in reliance on
the contents of this information is strictly prohibited.
22CS303
Design and Analysis of
Algorithms
Department: AIML
Batch/Year: 2022-2026/II
Created by:
Dr. Sudharson, ASP/AIML
Mrs. Remya Rose S, AP/AIML
Created on: 05.09.23
Table of Contents

Sl. No. | Topics | Page No.
1 | Contents | 5
2 | Course Objectives | 6
3 | Pre Requisites (Course Name with Code) | 8
4 | Syllabus (With Subject Code, Name, LTPC details) | 10
5 | Course Outcomes (6) | 12
6 | CO PO/PSO Mapping | 14
7 | Lecture Plan – Unit IV (S.No., Topic, No. of Periods, Proposed Date, Actual Lecture Date, Pertaining CO, Taxonomy Level, Mode of Delivery) | 16
8 | Activity Based Learning | 18
9 | Lecture Notes (with links to videos, e-book references, PPTs, quizzes and other learning materials) | 20
10 | Assignments (for higher-level learning and evaluation; examples: case study, comprehensive design, etc.) | 50
11 | Part A Questions and Answers (with K level and CO) | 53
12 | Part B Questions (with K level and CO) | 60
13 | Supportive Online Certification Courses (NPTEL, Swayam, Coursera, Udemy, etc.) | 63
14 | Real-Time Applications in Day-to-Day Life and to Industry | 65
15 | Content Beyond Syllabus (COE-related value-added courses) | 67
16 | Assessment Schedule (Proposed Date & Actual Date) | 72
17 | Prescribed Text and Reference Books | 74
18 | Mini Project Suggestions | 76


Course Objectives

 Critically analyse the efficiency of alternative algorithmic solutions for the same problem.
 Illustrate brute force and divide-and-conquer design techniques.
 Explain dynamic programming for solving various problems.
 Apply greedy technique and iterative improvement technique to solve optimization problems.
 Examine the limitations of algorithmic power and ways of handling them in different problems.
PRE REQUISITES

Prerequisites:

 22CS101 - Problem Solving using C++
 22CS201 - Data Structures (Trees, Graphs, Searching & Sorting)
 Linear Programming Problem & Geometric Problem
Syllabus

22CS303 DESIGN AND ANALYSIS OF ALGORITHMS (L T P C: 2 0 2 3)
Unit I : INTRODUCTION 6+6
Notion of an Algorithm – Fundamentals of Algorithmic Problem Solving – Fundamentals of the
Analysis of Algorithmic Efficiency – Asymptotic Notations and their properties. Analysis
Framework – Empirical analysis - Mathematical analysis for Recursive and Non-recursive
algorithms.
List of Exercise/Experiments:
1.Perform the recursive algorithm analysis.
2.Perform the non-recursive algorithm analysis.
Unit II : BRUTE FORCE AND DIVIDE AND CONQUER 6+6
Brute Force – String Matching – Exhaustive Search - Knapsack Problem. Divide and Conquer
Methodology – Binary Search – Merge sort- Quick sort – Multiplication of Large Integers –
Closest-Pair and Convex Hull Problems -Transform and Conquer Method: Heap Sort
List of Exercise/Experiments:
1.Write a program to search an element using binary search.
2.Write a program to sort the elements using merge sort and find time complexity.
3.Write a program to sort the elements using quick sort and find time complexity.
4.Write a program to sort the elements using heap sort.
Unit III : DYNAMIC PROGRAMMING 6+6
Dynamic programming – Principle of optimality - Floyd's algorithm – Multi stage graph - Optimal
Binary Search Trees – Longest common subsequence - Matrix-chain multiplication – Travelling
Salesperson Problem – Knapsack Problem and Memory functions.
List of Exercise/Experiments:
1.Solve Floyd’s algorithm.
2.Write a program to find optimal binary search tree for a given list of keys.
3.Solve the multi-stage graph to find shortest path using backward and forward approach.
4.Write a program to find the longest common subsequence.
Unit IV : GREEDY TECHNIQUE AND ITERATIVE IMPROVEMENT 6+6
Greedy Technique - Prim's algorithm and Kruskal's Algorithm, Huffman Trees. The Maximum-
Flow Problem – Maximum Matching in Bipartite Graphs - The Stable Marriage Problem.
List of Exercise/Experiments:
1.Write a program to find minimum spanning tree using Prim’s algorithm.
2.Implement Kruskal’s algorithm to find minimum spanning tree.
3.Write a program to solve maximum flow problem.
Unit V : BACKTRACKING AND BRANCH AND BOUND 6+6
P, NP, NP - Complete and NP Hard Problems. Backtracking – N-Queen problem - Subset Sum
Problem. Branch and Bound – LIFO Search and FIFO search - Assignment problem – Knapsack
Problem - Approximation Algorithms for NP-Hard Problems – Travelling Salesman problem
List of Exercise/Experiments:
1.Write a program to implement sum of subset problem.
2.Write a program to solve N-Queen problem.
3.Solve the assignment problem using branch and bound technique.
4.Solve knapsack problem using branch and bound technique.
Course Outcomes

CO# | COs | K Level
CO1 | Solve mathematically the efficiency of recursive and non-recursive algorithms | K3
CO2 | Design and analyse the efficiency of brute force, divide and conquer, and transform and conquer algorithmic techniques | K4
CO3 | Implement and analyse the problems using dynamic programming | K3
CO4 | Solve the problems using greedy technique and iterative improvement technique for optimization | K3
CO5 | Compute the limitations of algorithmic power and solve the problems using backtracking and branch and bound technique | K3

Knowledge Level | Description
K6 | Evaluation
K5 | Synthesis
K4 | Analysis
K3 | Application
K2 | Comprehension
K1 | Knowledge
CO – PO/PSO Mapping
CO# PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2 PSO3

CO1 3 3 2 1 1 - - 2 2 2 - 2 3 2 -

CO2 3 2 2 2 2 - - 2 2 2 - 2 3 2 -

CO3 3 3 2 2 2 - - 2 2 2 - 2 3 2 -

CO4 3 2 2 2 2 - - 2 2 2 - 2 3 3 -

CO5 3 2 2 2 2 - - 2 2 2 - 2 3 2 -

CO6 2 2 1 1 1 - - 2 2 2 - 2 3 2 -
Lecture Plan – Unit IV: GREEDY TECHNIQUE AND ITERATIVE IMPROVEMENT

Sl. No. | Topic | No. of Periods | Proposed Date | Actual Lecture Date | CO | Taxonomy Level | Mode of Delivery
1 | Greedy Technique – Prim's algorithm | 1 | | | CO4 | K3 | Chalk & Talk
2 | Kruskal's Algorithm | 1 | | | CO4 | K3 | Chalk & Talk
3 | Huffman Trees | 1 | | | CO4 | K2 | Chalk & Talk
4 | The Maximum-Flow Problem | 1 | | | CO4 | K3 | Chalk & Talk
5 | Maximum Matching in Bipartite Graphs | 1 | | | CO4 | K3 | Chalk & Talk
6 | The Stable Marriage Problem | 1 | | | CO4 | K3 | Chalk & Talk
Activity Based Learning – Unit IV

1. Simulation of the Maximum-Flow Problem using a simulation tool.

2. Guessing the picture:
https://drive.google.com/file/d/16crrbMPt_LSeK8feZSoh2tUnngQVegdc/view?usp=sharing
Lecture Notes
Unit IV
UNIT IV

Sl. No. Contents Page No.

1 Greedy Technique 22

2 Prim's algorithm 23

3 Kruskal’s Algorithm 26

4 Huffman Trees 31

5 Iterative Improvement 34

6 The Maximum-Flow Problem 35

7 Maximum Matching in Bipartite Graphs 43

8 The Stable marriage Problem 47


Unit IV GREEDY TECHNIQUE

4.1. Greedy Technique:


 The greedy approach suggests constructing a solution through a sequence of steps, each expanding a partially constructed solution obtained so far, until a complete solution to the problem is reached.
 On each step, and this is the central point of this technique, the choice made must be:
- Feasible, i.e., it has to satisfy the problem's constraints.
- Locally optimal, i.e., it has to be the best local choice among all feasible choices available on that step.
- Irrevocable, i.e., once made, it cannot be changed on subsequent steps of the algorithm.
 Minimum spanning tree problem: given n points, connect them in the cheapest possible way so that there is a path between every pair of points.
 Represent the points by vertices of a graph, possible connections by the graph's edges, and the connection costs by the edge weights.

Spanning tree:
- A spanning tree of a connected graph is a connected acyclic subgraph (i.e., a tree) that contains all the vertices of the graph.

Minimum spanning tree:

- A minimum spanning tree of a weighted connected graph is its spanning tree of the smallest weight, where the weight of a tree is defined as the sum of the weights on all its edges.
- Constructing a minimum spanning tree by the exhaustive-search approach runs into two serious obstacles.

Spanning tree
Unit IV GREEDY TECHNIQUE

- First, the number of spanning trees grows exponentially with the graph
size (at least for dense graphs).
- Second, generating all spanning trees for a given graph is not easy; in
fact, it is more difficult than finding a minimum spanning tree for a
weighted graph by using one of several efficient algorithms available
for this problem.

4.2. Prim’s algorithm:


 Prim’s algorithm constructs a minimum spanning tree through a
sequence of expanding subtrees.
 The initial subtree in such a sequence consists of a single vertex
selected arbitrarily from the set V of the graph’s vertices.
 On each iteration, we expand the current tree in the greedy manner
by simply attaching to it the nearest vertex not in that tree.
 The algorithm stops after all the graph’s vertices have been
included in the tree being constructed.

ALGORITHM Prim(G)
//Prim's algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a minimum spanning tree of G
VT ← {v0}   //the set of tree vertices can be initialized with any vertex
ET ← ∅
for i ← 1 to |V| − 1 do
    find a minimum-weight edge e* = (v*, u*) among all the edges (v, u)
        such that v is in VT and u is in V − VT
    VT ← VT ∪ {u*}
    ET ← ET ∪ {e*}
return ET
Unit IV GREEDY TECHNIQUE

To make it easy to find the shortest edge connecting a non-tree vertex to a tree vertex, each vertex not in the current tree is given two labels:

 the name of the nearest tree vertex, and

 the length (the weight) of the corresponding edge.

 Vertices that are not adjacent to any of the tree vertices are given the ∞ label, indicating their "infinite" distance to the tree vertices, and

 a null label for the name of the nearest tree vertex.

(Alternatively, split the vertices that are not in the tree into two sets, the "fringe" and the "unseen." The fringe contains only the vertices that are not in the tree but are adjacent to at least one tree vertex; these are the candidates from which the next tree vertex is selected. The unseen vertices are all the other vertices of the graph, called "unseen" because they are yet to be affected by the algorithm.)

If a graph is represented by its adjacency lists and the priority queue is implemented as a min-heap, the running time of the algorithm is O(|E| log |V|) in a connected graph, where |V| − 1 ≤ |E| (see the analysis below).
Rule:

After identifying a vertex u∗ to be added to the tree, perform two operations:

1. Move u∗ from the set V − VT to the set of tree vertices VT.

2. For each remaining vertex u in V − VT that is connected to u∗ by a shorter edge than u's current distance label, update its labels by u∗ and the weight of the edge between u∗ and u, respectively.
FIGURE: Application of Prim’s algorithm. The parenthesized labels of a vertex in the
middle column indicate the nearest tree vertex and edge weight; selected vertices
and edges are in bold.
Unit IV GREEDY TECHNIQUE

Analysis:

If a graph is represented by its adjacency lists and the priority queue is implemented
as a min-heap, the running time of the algorithm is in O(|E| log |V |). This is because
the algorithm performs |V |− 1 deletions of the smallest element and makes |E|
verifications and, possibly, changes of an element’s priority in a min-heap of size not
exceeding |V|. Each of these operations, as noted earlier, is an O(log |V|) operation. Hence, the running time of this implementation of Prim's algorithm is in

(|V| − 1 + |E|) O(log |V|) = O(|E| log |V|)

because, in a connected graph, |V| − 1 ≤ |E|.
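As an illustration, here is a minimal Python sketch of the min-heap idea discussed above. It keeps candidate edges in a heap rather than maintaining the two labels per vertex, which gives the same O(|E| log |V|) bound; the adjacency-dictionary representation and the function name are assumptions made for this sketch, not part of the source material.

import heapq

def prim_mst(graph, start):
    """Return the edges of a minimum spanning tree of a connected, weighted,
    undirected graph given as {vertex: {neighbor: weight}}."""
    in_tree = {start}                      # V_T: vertices already in the tree
    mst_edges = []                         # E_T: edges of the spanning tree
    # min-heap of candidate edges (weight, tree_vertex, outside_vertex)
    heap = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(graph):
        weight, u, v = heapq.heappop(heap)
        if v in in_tree:                   # stale heap entry, skip it
            continue
        in_tree.add(v)                     # attach the nearest outside vertex
        mst_edges.append((u, v, weight))
        for nxt, w in graph[v].items():    # add v's edges to the fringe
            if nxt not in in_tree:
                heapq.heappush(heap, (w, v, nxt))
    return mst_edges

g = {"a": {"b": 3, "c": 1}, "b": {"a": 3, "c": 2, "d": 4},
     "c": {"a": 1, "b": 2, "d": 5}, "d": {"b": 4, "c": 5}}
print(prim_mst(g, "a"))   # [('a', 'c', 1), ('c', 'b', 2), ('b', 'd', 4)]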

4.3. Kruskal's algorithm:

Kruskal's algorithm looks at a minimum spanning tree of a weighted connected graph G = (V, E) as an acyclic subgraph with |V| − 1 edges for which the sum of the edge weights is the smallest. The algorithm constructs a minimum spanning tree as an expanding sequence of subgraphs that are always acyclic but are not necessarily connected on the intermediate stages of the algorithm.

The algorithm begins by sorting the graph's edges in nondecreasing order of their weights. Then, starting with the empty subgraph, it scans this sorted list, adding the next edge on the list to the current subgraph if such an inclusion does not create a cycle and simply skipping the edge otherwise.

ALGORITHM Kruskal(G)
//Kruskal's algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a minimum spanning tree of G
sort E in nondecreasing order of the edge weights
ET ← ∅; ecounter ← 0    //initialize the set of tree edges and its size
k ← 0                   //initialize the number of processed edges
while ecounter < |V| − 1 do
    k ← k + 1
    if ET ∪ {e_ik} is acyclic
        ET ← ET ∪ {e_ik}; ecounter ← ecounter + 1
return ET
Unit IV GREEDY TECHNIQUE

The initial forest consists of |V | trivial trees, each comprising a single vertex of the
graph. The final forest consists of a single tree, which is a minimum spanning tree
of the graph.

On each iteration, the algorithm takes the next edge (u, v) from the sorted
list of the graph’s edges, finds the trees containing the vertices u and v, and, if
these trees are not the same, unites them in a larger tree by adding the edge
(u, v).
A new cycle is created if and only if the new edge connects two vertices
already connected by a path, i.e., if and only if the two vertices belong to the same
connected component (Figure below).

New edge connecting two vertices may (a) or may not (b) create a cycle.

Note: Each connected component of a subgraph generated by Kruskal’s algorithm


is a tree because it has no cycles.

Disjoint Subsets and Union-Find Algorithms

Kruskal’s algorithm is one of the applications that require a dynamic partition of


some n element set S into a collection of disjoint subsets S1, S2,. . ., Sk.

After being initialized as a collection of n one-element subsets, each containing a


different element of S, the collection is subjected to a sequence of intermixed union
and find operations.

The abstract data type of a collection of disjoint subsets of a finite set contains the
following operations:
Unit IV GREEDY TECHNIQUE
 makeset(x) creates a one-element set {x}. It is assumed that this operation can be
applied to each of the elements of set S only once.
 find(x) returns a subset containing x.

 union(x,y) constructs the union of the disjoint subsets Sx and Sy containing x and y,
respectively, and adds it to the collection to replace Sx and Sy , which are deleted from
it.

For example, let S = {1, 2, 3, 4, 5, 6}. Then makeset(i) creates the set {i} and
applying this operation six times initializes the structure to the collection of six singleton
sets:
{1}, {2}, {3}, {4}, {5}, {6}.

Performing union(1, 4) and union(5, 2) yields

{1, 4}, {5, 2}, {3}, {6},

and, if followed by union(4, 5) and then by union(3, 6), we end up with the disjoint
subsets

{1, 4, 5, 2}, {3, 6}.

Quick find optimizes the time efficiency of the find operation; quick union optimizes the union operation.

Quick find uses an array indexed by the elements of the underlying set S; the array's values indicate the representatives of the subsets containing those elements. Each subset is also implemented as a linked list whose header contains the pointers to the first and last elements of the list along with the number of elements in the list.

The time efficiency of makeset(x) operation is in O(1), and hence the


initialization of n singleton subsets is in O(n).

[The implementation of makeset(x) requires assigning the corresponding element in the representative array to x and initializing the corresponding linked list to a single node with the x value.]

The efficiency of find(x) is also in O(1): it simply retrieves x's representative from the representative array.

Executing union(x, y) takes longer. With this scheme, the sequence of union operations union(2, 1), union(3, 2), ..., union(i + 1, i), ..., union(n, n − 1) runs in Θ(n²) time.

To improve the overall efficiency, append the shorter of the two lists to the longer one, with ties broken arbitrarily. The size of each list is assumed to be available by storing the number of elements in the list's header. This modification is called union by size.

The worst-case running time of any legitimate sequence of n union-by-size operations is in O(n log n).

The quick union—the second principal alternative for implementing disjoint subsets—
represents each subset by a rooted tree. The nodes of the tree contain the subset’s
elements (one per node), with the root’s element considered the subset’s representative;
the tree’s edges are directed from children to their parents (Figure below).

To improve the time bound, always perform a union operation by attaching a smaller tree
to the root of a larger one, with ties broken arbitrarily. The size of a tree can be measured
either by the number of nodes (this version is called union by size) or by its height (this
version is called union by rank).
Unit IV GREEDY TECHNIQUE

FIGURE: Application of Kruskal’s algorithm. Selected edges are shown in bold.


Unit IV GREEDY TECHNIQUE
These options require storing, for each node of the tree, either the
number of node descendants or the height of the subtree rooted at that node,
respectively. One can easily prove that in either case the height of the tree will be
logarithmic, making it possible to execute each find in O(log n) time. Thus, for quick
union, the time efficiency of a sequence of at most n − 1 unions and m finds is in O(n +
m log n).
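A short Python sketch of the quick-union representation with union by size, as described above; the class and method names are illustrative, and path compression (a common further optimization) is omitted to stay close to the text.

class QuickUnion:
    """Disjoint subsets as rooted trees: parent links point toward the root,
    and the root's element serves as the subset's representative."""

    def __init__(self, elements):
        self.parent = {x: x for x in elements}   # makeset(x) for every element
        self.size = {x: 1 for x in elements}     # number of nodes in each tree

    def find(self, x):
        while self.parent[x] != x:               # climb to the root
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.size[rx] < self.size[ry]:        # attach the smaller tree to the larger
            rx, ry = ry, rx
        self.parent[ry] = rx
        self.size[rx] += self.size[ry]

ds = QuickUnion([1, 2, 3, 4, 5, 6])
ds.union(1, 4); ds.union(5, 2); ds.union(4, 5); ds.union(3, 6)
print(ds.find(2) == ds.find(1))   # True: {1, 4, 5, 2} form one subset
print(ds.find(3) == ds.find(1))   # False: 3 belongs to the subset {3, 6}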

There are efficient algorithms for checking whether two vertices belong to the same tree; they are called union-find algorithms. The running time of Kruskal's algorithm will be dominated by the time needed for sorting the edge weights of a given graph. Hence, with an efficient sorting algorithm, the time efficiency of Kruskal's algorithm will be in O(|E| log |E|).
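A minimal Python sketch of Kruskal's algorithm under the assumptions above: edges are given as (weight, u, v) triples, and cycle checking is done with a tiny unbalanced quick-union kept inside the function, which is enough for a sketch; all names are illustrative.

def kruskal_mst(vertices, edges):
    """Kruskal's algorithm: edges is a list of (weight, u, v) triples."""
    parent = {v: v for v in vertices}          # forest of one-vertex trees

    def find(x):                               # root of the tree containing x
        while parent[x] != x:
            x = parent[x]
        return x

    mst_edges = []
    for weight, u, v in sorted(edges):         # nondecreasing order of weights
        ru, rv = find(u), find(v)
        if ru != rv:                           # adding (u, v) does not create a cycle
            parent[ru] = rv                    # unite the two trees
            mst_edges.append((u, v, weight))
        if len(mst_edges) == len(vertices) - 1:
            break
    return mst_edges

edges = [(3, "a", "b"), (1, "a", "c"), (2, "b", "c"), (4, "b", "d"), (5, "c", "d")]
print(kruskal_mst(["a", "b", "c", "d"], edges))
# [('a', 'c', 1), ('b', 'c', 2), ('b', 'd', 4)]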

4.4. Huffman’s algorithm:

Huffman's algorithm constructs a tree that assigns shorter bit strings to high-frequency symbols and longer ones to low-frequency symbols.

Step 1 Initialize n one-node trees and label them with the symbols of the
alphabet given. Record the frequency of each symbol in its tree’s root to
indicate the tree’s weight. (More generally, the weight of a tree will be equal to the
sum of the frequencies in the tree’s leaves.)

Step 2 Repeat the following operation until a single tree is obtained. Find two trees
with the smallest weight (ties can be broken arbitrarily, but see Problem 2 in this
section’s exercises). Make them the left and right subtree of a new tree and
record the sum of their weights in the root of the new tree as its weight.

A tree constructed by the above algorithm is called a Huffman tree. It defines, in the manner described below, a Huffman code.

To encode a text that comprises symbols from some n-symbol alphabet, each of the text's symbols is assigned some sequence of bits called the codeword. The simplest approach is fixed-length encoding, which assigns to each symbol a bit string of the same length m (m ≥ ⌈log2 n⌉).

Variable-length encoding assigns codewords of different lengths to different symbols.

Problem with variable-length encoding:

How can we tell how many bits of an encoded text represent the first (or, more generally, the ith) symbol? To resolve this ambiguity, prefix-free (or simply prefix) codes are used.

In a prefix code, no codeword is a prefix of a codeword of another symbol.

Steps:

scan a bit string until the first group of bits that is a codeword for some
symbol is found,
replace these bits by this symbol, and
repeat this operation until the bit string’s end is reached

To create a binary prefix code for some alphabet, associate the alphabet’s
symbols with leaves of a binary tree in which

o all the left edges are labeled by 0 and


o all the right edges are labeled by 1.

The codeword of a symbol can then be obtained by recording the labels on


the simple path from the root to the symbol’s leaf.

Since there is no simple path to a leaf that continues to another leaf, no


codeword can be a prefix of another codeword; hence, any such tree yields
a prefix code.
Unit IV GREEDY TECHNIQUE
EXAMPLE: Consider the five-symbol alphabet {A, B, C, D, _} with the following occurrence frequencies in a text made up of these symbols. The Huffman tree construction for this input is shown in the figure below.

symbol A B C D _
frequency 0.35 0.1 0.2 0.2 0.15

FIGURE: Example of constructing a Huffman coding tree. The resulting codewords are as follows:

symbol A B C D _
frequency 0.35 0.1 0.2 0.2 0.15
codeword 11 100 00 01 101

Hence, DAD is encoded as 011101, and 10011011011101 is decoded as BAD_AD. With


the occurrence frequencies given and the codeword lengths obtained, the average
number of bits per symbol in this code is
= 2 ∙ 0.35 + 3 ∙ 0.1 + 2 ∙ 0.2 + 2 ∙ 0.2 + 3 ∙ 0.15 = 2.25.
Unit IV ITERATIVE IMPROVEMENT
Had we used a fixed-length encoding for the same alphabet, we would have to use at least 3 bits per symbol. Thus, for this toy example, Huffman's code achieves a compression ratio (a standard measure of a compression algorithm's effectiveness) of (3 − 2.25)/3 ∙ 100% = 25%. In other words, Huffman's encoding of the text will use 25% less memory than its fixed-length encoding.
The running time is O(n log n), as each priority queue operation takes O(log n) time.
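A short Python sketch of Huffman's algorithm applied to the five-symbol example above, using a min-heap of weighted trees; the nested-tuple tree representation and the function name are assumptions made for brevity. Depending on how ties are broken, the exact bit patterns may differ from the table above, but the codeword lengths and the 2.25-bit average are the same.

import heapq
from itertools import count

def huffman_codes(frequencies):
    """Build a Huffman tree for {symbol: frequency} and return each symbol's codeword."""
    tiebreak = count()                        # breaks ties between trees of equal weight
    heap = [(freq, next(tiebreak), sym) for sym, freq in frequencies.items()]
    heapq.heapify(heap)
    while len(heap) > 1:                      # repeatedly merge the two lightest trees
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(tiebreak), (left, right)))
    root = heap[0][2]

    codes = {}
    def assign(node, code):                   # left edges are labeled 0, right edges 1
        if isinstance(node, tuple):
            assign(node[0], code + "0")
            assign(node[1], code + "1")
        else:
            codes[node] = code or "0"         # degenerate one-symbol alphabet
    assign(root, "")
    return codes

freq = {"A": 0.35, "B": 0.1, "C": 0.2, "D": 0.2, "_": 0.15}
codes = huffman_codes(freq)
print(codes)                                            # a prefix-free code for the alphabet
print(sum(freq[s] * len(codes[s]) for s in freq))       # 2.25 bits per symbol on average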

Applications of Huffman’s encoding


1. Huffman’s encoding is a variable length encoding, so that number of bits used
are lesser than fixed length encoding.
2. Huffman’s encoding is very useful for file compression.
3. Huffman’s code is used in transmission of data in an encoded format.
4. Huffman’s encoding is used in decision trees and game playing.

4.5. Introduction to Iterative Improvement:


It starts with some feasible solution (a solution that satisfies all the constraints
of the problem) and proceeds to improve it by repeated applications of some simple
step. This step typically involves a small, localized change yielding a feasible solution
with an improved value of the objective function. When no such change improves
the value of the objective function, the algorithm returns the last feasible solution as
optimal and stops.

Obstacles to Iterative improvement:

Finding an initial solution may require as much effort as solving the problem after
a feasible solution has been identified.

It is not always clear what changes should be allowed in a feasible solution so that we
can check efficiently whether the current solution is locally optimal and, if not, replace
it with a better one.
Unit IV ITERATIVE IMPROVEMENT

4.6. THE MAXIMUM-FLOW PROBLEM:


Maximum Flow Problem:

Problem of maximizing the flow of a material through a transportation


network (e.g., pipeline system, communications or transportation networks)

Formally represented by a connected weighted digraph with n vertices numbered from


1 to n with the following properties:

 Contains exactly one vertex with no entering edges, called the source (numbered 1)

 Contains exactly one vertex with no leaving edges, called the sink (numbered n)

 Has a positive integer weight uij on each directed edge (i, j), called the edge capacity, indicating the upper bound on the amount of the material that can be sent from i to j through this edge.

 A digraph satisfying these properties is called a flow network or simply a

network.

 Example:

Node (1) = source , Node(6) = sink

Definition of a Flow:

A flow is an assignment of real numbers xij to the edges (i, j) of a given network that satisfies the following:

Flow-conservation requirements: the total amount of material entering an intermediate vertex must be equal to the total amount of the material leaving the vertex.

Capacity constraints: 0 ≤ xij ≤ uij for every edge (i, j) ∈ E.
Flow value and Maximum Flow Problem:
Since no material can be lost or added to by going through intermediate vertices of the
network, the total amount of the material leaving the source must end up at the sink:

The value of the flow is defined as the total outflow from the source (= the total inflow
into the sink). The maximum flow problem is to find a flow of the largest value
(maximum flow) for a given network.

The maximum-flow problem can be stated formally as the following


optimization problem:
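(The following is the standard optimization formulation, written in the notation used above: flow variables x_ij, capacities u_ij, source 1 and sink n.)

\[
\begin{aligned}
\text{maximize } \; & v = \sum_{j:\,(1,j)\in E} x_{1j} \\
\text{subject to } \; & \sum_{j:\,(j,i)\in E} x_{ji} - \sum_{j:\,(i,j)\in E} x_{ij} = 0
\quad \text{for } i = 2, 3, \dots, n-1, \\
& 0 \le x_{ij} \le u_{ij} \quad \text{for every edge } (i,j)\in E .
\end{aligned}
\]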

4.6.1 Augmenting Path (Ford-Fulkerson) Method:


 Start with the zero flow (xij = 0 for every edge).
 On each iteration, try to find a flow-augmenting path from source to sink, that is, a path along which some additional flow can be sent.
 If a flow-augmenting path is found, adjust the flow along the edges of this path to get a flow of increased value and try again.
 If no flow-augmenting path is found, the current flow is maximum.
Finding a flow-augmenting path:
To find a flow-augmenting path for a flow x, consider paths from source to sink in
the underlying undirected graph in which any two consecutive vertices i,j are either:
• connected by a directed edge from i to j with some positive unused capacity rij = uij − xij (a forward edge), or
• connected by a directed edge from j to i with positive flow xji (a backward edge).

If a flow-augmenting path is found, the current flow can be increased by r units by increasing xij by r on each forward edge and decreasing xji by r on each backward edge, where

r = min {rij on all forward edges, xji on all backward edges}

Assuming the edge capacities are integers, r is a positive integer

On each iteration, the flow value increases by at least 1

Maximum value is bounded by the sum of the capacities of the edges leaving the source;
hence the augmenting-path method has to stop after a finite number of iterations

The final flow is always maximum, its value doesn’t depend on a sequence of
augmenting paths used

Performance degeneration of the method


The augmenting-path method doesn’t prescribe a specific way for generating flow-
augmenting paths.

Selecting a bad sequence of augmenting paths could impact the method’s efficiency

EXAMPLE

 Let us assume that we identify the augmenting path 1→2→3→6 first.

Increase the flow along this path by a maximum of 2 units, which is the smallest unused
capacity of its edges. The new flow is shown below.
Unit IV ITERATIVE IMPROVEMENT

The above flow values can still be increased along the path 1→4→3←2→5→6 by
increasing the flow by 1 on edges (1, 4), (4, 3), (2, 5), and (5, 6) and decreasing it by 1 on
edge (2, 3).

4.6.2 Shortest-Augmenting-Path Algorithm:


Generate augmenting path with the least number of edges by BFS as follows.
Starting at the source, perform BFS traversal by marking new (unlabeled) vertices with two
labels:
first label – indicates the amount of additional flow that can be brought from
the source to the vertex being labeled

second label – indicates the vertex from which the vertex being labeled was
reached, with “+” or “–” added to the second label to indicate whether the vertex
was reached via a forward or backward edge

Vertex labelling:
The source is always labeled with ∞, -
All other vertices are labeled as follows:
If unlabeled vertex j is connected to the front vertex i of the traversal queue by a
directed edge from i to j with positive unused capacity rij = uij –xij (forward edge),
vertex j is labeled with lj,i+, where lj = min{li, rij}
Unit IV ITERATIVE IMPROVEMENT
If unlabeled vertex j is connected to the front vertex i of the traversal queue by a directed edge from j to i with positive flow xji, then vertex j is labeled with lj, i−, where lj = min{li, xji}.

If the sink ends up being labeled, the current flow can be augmented by the amount
indicated by the sink’s first label.

The augmentation of the current flow is performed along the augmenting path traced by
following the vertex second labels from sink to source; the current flow quantities are
increased on the forward edges and decreased on the backward edges of this path.

If the sink remains unlabeled after the traversal queue becomes empty, the algorithm
returns the current flow as maximum and stops.

ALGORITHM ShortestAugmentingPath(G)
//Implements the shortest-augmenting-path algorithm
//Input: A network with single source 1, single sink n, and
//       positive integer capacities uij on its edges (i, j)
//Output: A maximum flow x
assign xij = 0 to every edge (i, j) in the network
label the source with ∞, − and add the source to the empty queue Q
while not Empty(Q) do
    i ← Front(Q); Dequeue(Q)
    for every edge from i to j do   //forward edges
        if j is unlabeled
            rij ← uij − xij
            if rij > 0
                lj ← min{li, rij}; label j with lj, i+
                Enqueue(Q, j)
    for every edge from j to i do   //backward edges
        if j is unlabeled
            if xji > 0
                lj ← min{li, xji}; label j with lj, i−
                Enqueue(Q, j)
    if the sink has been labeled
        //augment along the augmenting path found
        j ← n   //start at the sink and move backwards using second labels
        while j ≠ 1   //the source hasn't been reached
            if the second label of vertex j is i+
                xij ← xij + ln
            else   //the second label of vertex j is i−
                xji ← xji − ln
            j ← i; i ← the vertex indicated by i's second label
        erase all vertex labels except the ones of the source
        reinitialize Q with the source
return x   //the current flow is maximum
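For comparison, here is a compact Python sketch of the same shortest-augmenting-path idea (often called Edmonds-Karp). It represents the network as a capacity matrix, uses BFS parent pointers in place of the second labels above, and augments by the bottleneck residual capacity r; the names and the small test network are illustrative, not from the source.

from collections import deque

def max_flow(capacity, source, sink):
    """Shortest-augmenting-path (Edmonds-Karp) sketch.
    capacity[i][j] is the capacity of edge (i, j); 0 means the edge is absent."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]

    def bfs_parents():
        """BFS in the residual network; returns parent pointers if the sink is reached."""
        parent = {source: None}
        queue = deque([source])
        while queue:
            i = queue.popleft()
            for j in range(n):
                # residual capacity = unused forward capacity + cancellable backward flow
                if j not in parent and capacity[i][j] - flow[i][j] + flow[j][i] > 0:
                    parent[j] = i
                    if j == sink:
                        return parent
                    queue.append(j)
        return None

    value = 0
    while True:
        parent = bfs_parents()
        if parent is None:            # sink unreachable: the current flow is maximum
            return value, flow
        path, j = [], sink            # trace the augmenting path back to the source
        while parent[j] is not None:
            path.append((parent[j], j))
            j = parent[j]
        # bottleneck r of the augmenting path
        r = min(capacity[i][j] - flow[i][j] + flow[j][i] for i, j in path)
        for i, j in path:             # augment: cancel backward flow first, then add forward
            cancel = min(r, flow[j][i])
            flow[j][i] -= cancel
            flow[i][j] += r - cancel
        value += r

# source is vertex 0 and sink is vertex 3 in this small test network
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3)[0])         # 5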

Example

1. Initialize the flow on every edge to 0 (edges are written as flow/capacity).
2. Label the source and add it to the queue.
3. Label unlabeled vertices reachable via forward edges, then via backward edges.
4. If the sink has been labeled, trace the augmenting path and adjust the flow along it.
5. Reinitialize the queue with the source.
6. Repeat steps 3 to 5 until the sink can no longer be labeled, i.e., until no augmenting path from source to sink remains.

Find the Maximum flow in the following Graph.


A cut induced by partitioning the vertices of a network into some subset X containing the source and X̄, the complement of X, containing the sink is the set of all the edges with a tail in X and a head in X̄. We denote a cut C(X, X̄) or simply C. For example, for the network in the previous examples:

if X = {1} and hence X̄ = {2, 3, 4, 5, 6}, C(X, X̄) = {(1, 2), (1, 4)};
if X = {1, 2, 3, 4, 5} and hence X̄ = {6}, C(X, X̄) = {(3, 6), (5, 6)};
if X = {1, 2, 4} and hence X̄ = {3, 5, 6}, C(X, X̄) = {(2, 3), (2, 5), (4, 3)}.
The name “cut” stems from the following property: if all the edges of a cut were deleted
from the network, there would be no directed path from source to sink.

The capacity of a cut is defined as the sum of


capacities of the edges that compose the cut. For the three examples of cuts given
above, the capacities are equal to 5, 6, and 9, respectively.

Max-Flow Min-Cut Theorem:

The value of maximum flow in a network is equal to the capacity of its minimum cut
The shortest augmenting path algorithm yields both a maximum flow and a minimum
cut:
Maximum flow is the final flow produced by the algorithm
Minimum cut is formed by all the edges from the labeled vertices to unlabeled
vertices on the last iteration of the algorithm.

All the edges from the labeled to unlabeled vertices are full, i.e., their flow amounts
are equal to the edge capacities, while all the edges from the unlabeled to labeled
vertices, if any, have zero flow amounts on them.

Time Efficiency:

The number of augmenting paths needed by the shortest-augmenting-path algorithm never


exceeds nm/2, where n and m are the number of vertices and edges, respectively.

Since the time required to find a shortest augmenting path by breadth-first search is in O(n + m) = O(m) for networks represented by their adjacency lists, the time efficiency of the shortest-augmenting-path algorithm is in O(nm²) for this representation.

More efficient algorithms have been found that can run in close to O(nm) time, but these
algorithms don’t fall into the iterative-improvement paradigm.
Unit IV ITERATIVE IMPROVEMENT

4.7. MAXIMUM MATCHING IN BIPARTITE GRAPHS:


A matching in a graph is a subset of its edges with the property that no two edges share a vertex. A maximum matching, more precisely a maximum-cardinality matching, is a matching with the largest number of edges.

Bipartite Graphs:

Bipartite graph: a graph whose vertices can be partitioned into two disjoint sets V and U,

not necessarily of the same size, so that every edge connects a vertex in V to a vertex in U.

A graph is bipartite if its vertices can be colored in two colors so that every edge has its

vertices colored in different colors; such graphs are also said to be 2-colorable

Example: apply the iterative-improvement technique to the maximum-cardinality matching problem for the following graph.

For a given matching M, a vertex is called free (or unmatched) if it is not an endpoint of any edge in M; otherwise, a vertex is said to be matched.
If every vertex is matched, then M is a maximum matching.
If there are unmatched (free) vertices, then it may be possible to improve M.
We can immediately increase a matching by adding an edge connecting two free vertices (e.g., (1, 6) above).

Matched vertices: 4, 5, 8, 9. Free vertices: 1, 2, 3, 6, 7, 10.


In general, we increase the size of a current matching M by constructing a simple


path from a free vertex in V to a free vertex in U whose edges are alternately in E −M and
in M. That is, the first edge of the path does not belong to M, the second one does, and so
on, until the last edge that does not belong to M. Such a path is called augmenting with
respect to the matching M.

Augmenting Paths and Augmentation:

An augmenting path for a matching M is a path from a free vertex in V to a free vertex in U whose edges alternate between edges not in M and edges in M. The length of an augmenting path is always odd.

Adding to M the odd-numbered path edges and deleting from it the even-numbered path edges increases the matching size by 1 (augmentation).

A one-edge path between two free vertices is a special case of an augmenting path.

General method for constructing a maximum matching by augmentation method


•Start with some initial matching (e.g., the empty set). Find an augmenting path and
augment the current matching along this path.
•When no augmenting path can be found, terminate the algorithm and return the last
matching, which is maximum.

Augmenting path algorithm for a matching M by using BFS-traversal:


Case 1 (the queue’s front vertex w is in V ) If u is a free vertex adjacent to w, it is used
as the other endpoint of an augmenting path; so the labeling stops and augmentation
of the matching commences. The augmenting path in question is obtained by moving
backward along the vertex labels (see below) to alternately add and delete its edges to and
from the current matching. If u is not free and connected to w by an edge not in M, label u
with w unless it has been already labeled.

Case 2 (the front vertex w is in U) In this case, w must be matched and we label its
mate in V with w.
Unit IV ITERATIVE IMPROVEMENT

ALGORITHM MaximumBipartiteMatching(G)
//Finds a maximum matching in a bipartite graph by a BFS-like traversal
//Input: A bipartite graph G = (V, U, E)
//Output: A maximum-cardinality matching M in the input graph
initialize set M of edges with some valid matching (e.g., the empty set)
initialize queue Q with all the free vertices in V (in any order)
while not Empty(Q) do
    w ← Front(Q); Dequeue(Q)
    if w ∈ V
        for every vertex u adjacent to w do
            if u is free   //augment
                M ← M ∪ {(w, u)}
                v ← w
                while v is labeled do
                    u ← vertex indicated by v's label; M ← M − {(v, u)}
                    v ← vertex indicated by u's label; M ← M ∪ {(v, u)}
                remove all vertex labels
                reinitialize Q with all free vertices in V
                break   //exit the for loop
            else   //u is matched
                if (w, u) ∉ M and u is unlabeled
                    label u with w
                    Enqueue(Q, u)
    else   //w ∈ U (and matched)
        label the mate v of w with w
        Enqueue(Q, v)
return M   //current matching is maximum
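A short Python sketch of maximum bipartite matching by repeated augmenting-path search. It uses the classic DFS (Kuhn-style) formulation rather than the BFS labeling of the pseudocode above, but it implements the same augmentation idea; the adjacency structure and names are illustrative, and the example vertices 1-5 and 6-10 echo the earlier figure.

def max_bipartite_matching(adj, left):
    """adj maps each vertex of the left set V to a list of its neighbours in U.
    Returns the matching as a dict: right vertex -> its mate in the left set."""
    mate = {}                                  # current matching, keyed by right vertex

    def try_augment(v, visited):
        """Search for an augmenting path starting at the left vertex v."""
        for u in adj.get(v, []):
            if u in visited:
                continue
            visited.add(u)
            # u is free, or its current mate can itself be re-matched elsewhere
            if u not in mate or try_augment(mate[u], visited):
                mate[u] = v                    # flip the edges along the augmenting path
                return True
        return False

    for v in left:
        try_augment(v, set())                  # one augmentation attempt per left vertex
    return mate

adj = {1: [6, 7], 2: [6], 3: [8, 9], 4: [8], 5: [9, 10]}
matching = max_bipartite_matching(adj, [1, 2, 3, 4, 5])
print(len(matching), matching)                 # 5 edges in the maximum matching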


Unit IV ITERATIVE IMPROVEMENT

Example: Find the maximum matching in the


following bipartite graph.
Unit IV ITERATIVE IMPROVEMENT
Efficiency
The time spent on each iteration is in O(n + m), where m = |E| is the number of
edges in the graph and n = |V | + |U| is the number of vertices in the graph.

4.8. The Stable Marriage Problem:

There is a set Y = {m1,…, mn} of n men and a set X = {w1,…, wn} of n women.
Each man has a ranking list of the women, and each woman has a ranking list of the
men (with no ties in these lists).

A marriage matching M is a set of n (m, w) pairs whose members are selected from
disjoint n-element sets Y and X in a one-one fashion, i.e., each man m from Y is
paired with exactly one woman w from X and vice versa.

A pair (m, w), where m ∈ Y, w ∈ X, is said to be a blocking pair for a marriage


matching M if man m and woman w are not matched in M but they prefer each
other to their mates in M.

A marriage matching M is called stable if there is no blocking pair for it; otherwise,
M is called unstable.

The stable marriage problem is to find a stable marriage matching for men’s and
women’s given preferences.
Stable marriage algorithm (Gale-Shapley algorithm)
Input: A set of n men and a set of n women along with rankings of the women
by each man and rankings of the men by each woman with no ties allowed in the
rankings
Output: A stable marriage matching
Step 0 Start with all the men and women being free.
Step 1 While there are free men, arbitrarily select one of them and do the
following:
Proposal: The selected free man m proposes to w, the next woman on his
preference list (who is the highest-ranked woman who has not rejected him before).
Unit IV ITERATIVE IMPROVEMENT

Response: If w is free, she accepts the proposal to be matched with m. If she is not free, she compares m with her current mate. If she prefers m to him, she accepts m's proposal, making her former mate free; otherwise, she simply rejects m's proposal, leaving m free.

Step 2 Return the set of n matched pairs.
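A minimal Python sketch of the Gale-Shapley procedure described in the steps above; the preference dictionaries in the usage lines are illustrative.

def gale_shapley(men_prefs, women_prefs):
    """Return a man-optimal stable matching as a dict woman -> man.
    men_prefs[m] and women_prefs[w] are preference lists, best choice first."""
    # rank[w][m] = position of man m on woman w's list (lower means preferred)
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}    # next woman each man will propose to
    fiance = {}                                # current engagements: woman -> man
    free_men = list(men_prefs)

    while free_men:
        m = free_men.pop()                     # select any free man
        w = men_prefs[m][next_choice[m]]       # highest-ranked woman not proposed to yet
        next_choice[m] += 1
        if w not in fiance:                    # w is free, so she accepts
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]:  # w prefers m to her current mate
            free_men.append(fiance[w])         # her former mate becomes free again
            fiance[w] = m
        else:                                  # w rejects m, who remains free
            free_men.append(m)
    return fiance

men = {"A": ["x", "y"], "B": ["y", "x"]}
women = {"x": ["B", "A"], "y": ["A", "B"]}
print(gale_shapley(men, women))   # x matched with A, y matched with B: stable and man-optimal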

Example: Find the stable matching using the following preferences


Unit IV ITERATIVE IMPROVEMENT

Analysis of the Gale-Shapley Algorithm


The algorithm terminates after no more than n² iterations with a stable marriage output.

The stable matching produced by the algorithm is always man-optimal: each man
gets the highest rank woman on his list under any stable marriage. One can obtain
the woman- optimal matching by making women propose to men.

A man (woman) optimal matching is unique for a given set of participant


preferences.

The stable marriage problem has practical applications such as matching medical-
school graduates with hospitals for residency training.

Maximum Matching in Bi-partite Graph

Stable Marriage
Assignment
ASSIGNMENT

1. Apply the shortest-augmenting path algorithm to find a maximum


flow and a minimum cut in the following networks.

2. Apply the Gale Shapley algorithm and find a stable marriage


matching for the instance defined by the following ranking matrix

Preferences of α, β, γ and δ

Preferences of A, B, C and D
ASSIGNMENT

3. Construct a Huffman tree

4. Use the maximum matching algorithm to the following


Bipartite graph and find the maximum matching.

5. Use the maximum matching algorithm to the following


Bipartite graph and find the maximum matching.
Part A – Q & A
Unit - 4
Part - A
1. What is greedy method? (CO4,K1)
• The greedy method is the most straight forward design, which is applied for change
making problem.
• The greedy technique suggests constructing a solution to an optimization problem
through a sequence of steps, each expanding a partially constructed solution obtained
so far, until a complete solution to the problem is reached.
• On each step, the choice made must be feasible, locally optimal and irrevocable.

2. List the advantage of greedy algorithm. (CO4,K1)

o Greedy algorithm produces a feasible solution

o Greedy method is very simple to solve a problem

o Greedy method provides an optimal solution directly.

3. What is Minimum Cost Spanning Tree? (CO4,K1)


A minimum cost spanning tree of a weighted connected graph is its spanning tree of
the smallest weight, where the weight of the tree is defined as the sum of the weights on
all its edges.

4. Define prim’s algorithm. (CO4,K1)


Prim’s algorithm is a greedy and efficient algorithm, which is used to find the minimum
spanning tree of a weighted connected graph.

5. Define Kruskal’s algorithm. (CO4,K1)


Kruskal's algorithm is another greedy algorithm for the minimum spanning tree problem. It constructs a minimum spanning tree by selecting edges in increasing order of their weights, provided that each inclusion does not create a cycle. Kruskal's algorithm provides an optimal solution.
Part - A
6. Define Union by rank and Union by Size. (CO4, K1)
A union operation attaches a smaller tree to the root of a larger one, with ties broken arbitrarily. If the size of a tree is measured by the number of nodes, this version is called union by size; if it is measured by the tree's height, the version is called union by rank.
7. Define Huffman trees? (CO4, K1)
A Huffman tree is a binary tree that minimizes the weighted path length from the root to the leaves containing a set of predefined weights. The most important application of Huffman trees is Huffman codes.

8. What do you mean by Huffman code? (CO4, K1)


A Huffman code is an optimal prefix-free, variable-length encoding scheme that assigns bit strings to characters based on their frequencies in a given text.

9. What is meant by compression ratio? (CO4, K2)


Huffman's code achieves a compression ratio, which is a standard measure of a compression algorithm's effectiveness. For the example in the notes it is
(3 − 2.25)/3 × 100 = 0.75/3 × 100 = 0.25 × 100 = 25%.

10. List the advantage of Huffman’s encoding? (CO4, K1)


a. Huffman's encoding is one of the most important file compression methods.
b. It is simple.
c. It is versatile.
d. It provides optimal and minimum-length encoding.

11. What is dynamic Huffman encoding? (CO4, K1)


In dynamic Huffman encoding, the coding tree is updated each time a new character is
read from the source text. Dynamic Huffman encoding is used to overcome the
drawback of simplest version.
Part - A
12. What is an iterative improvement? (CO4, K1)

The iterative-improvement technique involves finding a solution to an optimization


problem by generating a sequence of feasible solutions with improving values
of the problem’s objective function. Each subsequent solution in such a sequence
typically involves a small, localized change in the previous feasible solution. When
no such change improves the value of the objective function, the algorithm returns
the last feasible solution as optimal and stops.

13. What are the problems solved by iterative improvement? (CO4, K1)
Important problems that can be solved exactly by iterative- improvement
algorithms include linear programming, maximizing the flow in a network, and
matching the maximum possible number of vertices in a graph.

14. What are the limitations of iterative improvement? (CO4, K1)

Need of initial feasible solution.

what changes should be allowed in a feasible solution so that we can check


efficiently whether the current solution is locally optimal and, if not, replace it with a
better one.

An issue of local versus global extremum (maximum or minimum).

15. What is perfect matching? (CO4, K1)

If every vertex of the graph is matched, i.e., the vertices of the V set are paired with the vertices of the U set in a one-to-one fashion, the matching is called a perfect matching.
Part - A
16. What is Ford-Fulkerson method and shortest augmenting-path method?
(CO4, K1)

The Ford-Fulkerson method is a classic template for solving the maximum flow
problem by the iterative-improvement approach.

The shortest augmenting-path method implements this idea by labeling network


vertices in the breadth-first search manner.

The Ford-Fulkerson method also finds a minimum cut in a given network.

17. Define flow and flow conservation requirement. (CO4, K1)

A flow is an assignment of real numbers xij to edges (i,j) of a given network that
satisfy the following:

Flow-conservation requirements: The total amount of material entering an


intermediate vertex must be equal to the total amount of the material leaving the
vertex.

18. What do you mean by the value of the flow in max flow problem?
(K1,CO4)

The total outflow from the source is equivalent to the total inflow into the sink is

called the value of the flow.

19. What is source and sink vertex? (CO4, K1)

A vertex with no entering edges is called the source and a vertex with no leaving
edges is called the sink.

20. State max – flow – min – cut theorem. (CO4, K1)

The value of maximum flow in a network is equal to the capacity of its minimum
cut.
Part - A
21. What is cut and min cut? (CO4, K1)

Let X be a set of vertices in a network that includes its source but does not include its sink, and let X̄, the complement of X, be the rest of the vertices, including the sink. The cut induced by this partition of the vertices is the set of all the edges with a tail in X and a head in X̄. A minimum cut is a cut of the smallest capacity.

22. What is pre flow? (CO4, K1)

A preflow is a flow that satisfies the capacity constraints but not the flow-conservation requirements.

23. Define Bipartite Graphs. (CO4, K1)

A graph whose vertices can be partitioned into two disjoint sets V and U, not
necessarily of the same size, so that every edge connects a vertex in V to a vertex in
U. A graph is bipartite if and only if it does not have a cycle of an odd length .

24. What is matching and maximum matching in bipartite graph? (CO4, K1)

A matching in a graph is a subset of its edges with the property that no two
edges share a vertex.

A maximum cardinality matching is the largest subset of edges in a graph such that
no two edges share the same vertex.

25. What is augmentation and augmentation path? (CO4, K1)

An augmenting path for a matching M is a path from a free vertex in V to a free


vertex in U whose edges alternate between edges not in M and edges in M.

The length of an augmenting path is always odd.

Adding to M the odd numbered path edges and deleting from it the even
numbered path edges increases the matching size by 1 (augmentation).
Part - A
One-edge path between two free vertices is special case of augmenting path.

26. What is 2-colorable graph? (CO4, K1)

A graph is bipartite if its vertices can be colored in two colors so that every edge
has its vertices colored in different colors; such graphs are also said to be 2-
colorable.

27. What do you mean by stable marriage problem? (CO4, K1)

The stable marriage problem is to find a stable matching for elements of two n

element sets (men’s and women’s) based on given matching preferences. This
problem always has a solution that can be found by the Gale-Shapley algorithm.

(or) The stable marriage problem is to find a stable marriage matching for men’s
and women’s given preferences.

28. What is marriage matching? (CO4, K1)

A marriage matching(M) is a set of n (m, w) pairs whose members are selected


from disjoint n-element sets Y and X in a one-one fashion, i.e., each man m from Y
is paired with exactly one woman w from X and vice versa.

29. What do you mean by stable and unstable matching? (CO4, K1)

A marriage matching(M) is called stable if there is no blocking pair for it;


otherwise, M is called unstable.

30. What is man-optimal matching? (CO4, K1)

A man-optimal matching assigns to each man the highest-ranked woman possible under any stable marriage.
Part - B Questions
Unit IV
Part - B Questions
1. Explain the maximum flow problem in detail. (CO4, K4)

2. Explain about Prims Algorithm and Kruskal’s Algorithm. (CO4, K4)

3. Explain first-scanned and first-labelling algorithm in detail. (CO4, K4)

4. (i).Define Huffman tree? List the types of Encoding in Huffman tree? (CO4, K4)
(ii).Write the procedure to compute Huffman code.

5. Write the Huffman’s Algorithm. Construct the Huffman’s tree for the (CO4, K4)
following data and obtain its Huffman’s Code.

Encode the characters “BAD”.


Decode the bit string 001110010110011100.
Find the Compression ratio.
6. Apply the shortest-augmenting path algorithm to find a maximum flow and a
minimum cut in the following networks. (CO4, K4)

a)

b)
Part - B Questions
7. Apply the maximum-matching algorithm to the following bipartite graph. (K3,CO4)

8. Consider an instance of the stable marriage problem given by the following


ranking matrix

For each of its marriage matching's, indicate whether it is stable or not. (K3,CO4)

9. Explain Gale-Shapley Algorithm with an example. (K2,CO4)

10. Discuss the stable marriage problem with an example. (K2,CO4)


11. Find a stable marriage matching for the instance defined by the following
ranking matrix. (K3,CO4)
Supportive Online
Certification
Courses
(NPTEL, Swayam,
Coursera, Udemy,
etc.,)
SUPPORTIVE ONLINE CERTIFICATION COURSES

Sl. Courses Platform


No.
1 Design and analysis of algorithms NPTEL
2 The Design and Analysis of Algorithm Udemy
3 Algorithms Specialization Coursera
4 Algorithm Design and Analysis edX
Real time
applications in day
to day life and to
Industry
REAL TIME EXAMPLES

Real-time applications of bipartite graphs

Movie preferences:

In 2009 Netflix gave a $1 million prize to the group that was best able to predict how much someone would enjoy a movie based on their preferences. This can be viewed, and in the submissions often was, as a bipartite graph problem. The viewers are the vertices U and the movies the vertices V, and there is an edge from u to v if u viewed v. In this case the edges are weighted by the rating the viewer gave. The winning algorithm was called "BellKor's Pragmatic Chaos". In the real problem they also had temporal information about when a person made a rating, and this turned out to help.

Error correcting codes:

In low density parity check (LDPC) codes the vertices U are bits of
information that need to be preserved and corrected if corrupted, and the vertices V
are parity checks. By using the parity checks errors can be corrected if some of the
bits are corrupted. LDPC codes are used in satellite TV transmission, the relatively
new 10G Ethernet standard, and part of the WiFi 802.11 standard.

Applications of Max flow


Content beyond
syllabus
CONTENT BEYOND SYLLABUS

Linear Programming:

A linear programming problem (LPP) is to optimize a linear function of several variables subject to linear constraints:

maximize (or minimize) c1x1 + ... + cnxn
subject to ai1x1 + ... + ainxn ≤ (or ≥ or =) bi, i = 1, ..., m
           x1 ≥ 0, ..., xn ≥ 0

The function z = c1x1 + ... + cnxn is called the objective function; the constraints x1 ≥ 0, ..., xn ≥ 0 are called nonnegativity constraints.

The Simplex Method


The simplex method is the classic method for solving LP problems and one of the most important algorithms ever invented. It was invented by George Dantzig in 1947 and is based on the iterative-improvement idea. It generates a sequence of adjacent points of the problem's feasible region with improving values of the objective function until no further improvement is possible.

Standard form of LP problem


 Must be a maximization problem.
 All constraints (except the nonnegativity constraints) must be in the form of linear
equations.
 All the variables must be required to be nonnegative.
 Thus, the general linear programming problem in standard form with m constraints
and n unknowns (n ≥ m) is
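(The formula below writes out this standard form in the notation introduced at the start of this section; it is the usual textbook statement.)

\[
\begin{aligned}
\text{maximize } \; & c_1 x_1 + \dots + c_n x_n \\
\text{subject to } \; & a_{i1} x_1 + \dots + a_{in} x_n = b_i, \quad i = 1, \dots, m, \\
& x_1 \ge 0, \; \dots, \; x_n \ge 0 .
\end{aligned}
\]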

Any linear programming problem can be transformed into an equivalent problem in


standard form. If an objective function needs to be minimized, it can be replaced by the
equivalent problem of maximizing the same objective function with all its coefficients cj
replaced by −cj , j = 1, 2, . . . , n

If a constraint is given as an inequality, it can be replaced by an equivalent equation


by adding a slack variable representing the difference between the two sides of the original
inequality.
CONTENT BEYOND SYLLABUS
For example, consider the problem: maximize 3x + 5y subject to x + y ≤ 4, x + 3y ≤ 6, x ≥ 0, y ≥ 0. The standard form of this LPP is written as

maximize 3x + 5y + 0u + 0v
subject to x + y + u = 4
           x + 3y + v = 6
           x, y, u, v ≥ 0.

Variables u and v, transforming inequality constraints into equality constrains, are called
slack variables

• Basic feasible solutions

A basic solution to a system of m linear equations in n unknowns (n ≥ m) is obtained


by setting n – m variables to 0 and solving the resulting system to get the values of the other
m variables. The variables set to 0 are called nonbasic; the variables obtained by solving the
system are called basic.

A basic solution is called feasible if all its (basic) variables are nonnegative.
The simplex method progresses through a series of adjacent extreme points
(basic feasible solutions) with increasing values of the objective function. Each such point can
be represented by a simplex tableau, a table storing the information about the basic feasible
solution corresponding to the extreme point.

In general, a simplex tableau for a linear programming problem in standard form


with n unknowns and m linear equality constraints (n ≥ m) has m + 1 rows and n + 1
columns. Each of the first m rows of the table contains the coefficients of a corresponding
constraint equation, with the last column’s entry containing the equation’s right-hand side.
The columns, except the last one, are labeled by the names of the variables. The rows
are labeled by the basic variables of the basic feasible solution the tableau represents; the
values of the basic variables of this solution are in the last column. Also note that the columns
labeled by the basic variables form the m × m identity matrix.

The last row of a simplex tableau is called the objective row. It is initialized by
the coefficients of the objective function with their signs reversed (in the first n columns)
and the value of the objective function at the initial point.
CONTENT BEYOND SYLLABUS
For example, the simplex tableau for (0, 0, 4, 6) of the above problem is presented
below:
Simplex tableau 1:

Entering variable:
A new basic variable is called the entering variable (choose the one with the most negative value in the objective row), while its column is referred to as the pivot column; the pivot column is marked by ↑.

Departing variable:
For each positive entry in the pivot column, compute the θ-ratio by dividing the row's last entry by the entry in the pivot column. For the example of tableau (1), these θ-ratios are

The row with the smallest θ-ratio determines the departing variable; the row of the departing variable, called the pivot row, is marked by ←.
Pivoting:
The transformation of a current tableau into the next simplex tableau is
called pivoting.
•First, divide all the entries of the pivot row by the pivot, its entry in the pivot
column, to obtain rownew.
Then, replace each of the other rows, including the objective row, by the difference row − c ∙ rownew, where c is the row's entry in the pivot column.


For simplex tableau 1, the row are updated as

The next simplex tableau becomes as

Simplex tableau 2:

Since objective function is not met, construct the next simplex tableau by using the same
procedure discussed earlier.
Simplex tableau 3:

Since all the entries in the objective row are nonnegative, the algorithm terminates.

Solution:
x=3, y=1 and maximum value of 14.
Assessment
Schedule
(Proposed Date &
Actual Date)
Assessment Schedule

Assessment Tool | Proposed Date | Actual Date | Course Outcome | Program Outcome (Filled Gap)
Assessment I | 13.09.2023 | | CO1, CO2 |
Assessment II | 30.10.2023 | | CO3, CO4 |
Model | 21.11.2023 | | CO1, CO2, CO3, CO4, CO5 |
Prescribed Text
Books and
Reference Books
TEXT & REFERENCE BOOKS

Sl. No. | Book Name & Author | Type
1 | Anany Levitin, "Introduction to the Design and Analysis of Algorithms", Third Edition, Pearson Education, 2012. | Text Book
2 | Ellis Horowitz, Sartaj Sahni and Sanguthevar Rajasekaran, "Computer Algorithms/C++", Second Edition, Universities Press, 2019. | Text Book
3 | Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein, "Introduction to Algorithms", Third Edition, PHI Learning Private Limited, 2012. | Reference Book
4 | Alfred V. Aho, John E. Hopcroft and Jeffrey D. Ullman, "Data Structures and Algorithms", Pearson Education, Reprint 2006. | Reference Book
5 | Harsh Bhasin, "Algorithms: Design and Analysis", Oxford University Press, 2016. | Reference Book
6 | S. Sridhar, "Design and Analysis of Algorithms", Oxford University Press, 2014. | Reference Book
7 | http://nptel.ac.in/ | Reference
8 | https://doc.lagout.org/science/0_Computer%20Science/2_Algorithms/Introduction%20to%20the%20Design%20and%20Analysis%20of%20Algorithms%20%283rd%20ed.%29%20%5BLevitin%202011-10-09%5D.pdf | E-Book
Mini Projects
Suggestions
MINI PROJECT SUGGESTIONS
1. Baseball Elimination Problem

Given the standings in a sports league at some point during the season,
determine which teams have been mathematically eliminated from winning their
division. In the baseball elimination problem, there is a league consisting of N
teams. At some point during the season, team i has w[i] wins and g[i][j] games
left to play against team j. A team is eliminated if it cannot possibly finish the
season in first place or tied for first place. The goal is to determine exactly which
teams are eliminated. Design and implement the above problem and find the
efficiency of the same.

2. Student-Project Allocation problem (SPA)

In many university departments, students seek to undertake a project in a given


field of speciality as part of the upper level of their degree programme. Typically a
wide range of available projects is offered, and usually the total number of project
places exceeds the number of students, to provide something of a choice. Also,
typically each lecturer will offer a variety of projects, but does not necessarily
expect that all will be taken up. Each student has preferences over the available
projects that he/she finds acceptable, whilst a lecturer will normally have
preferences over the students that he/she is willing to supervise. There may also
be upper bounds on the number of students that can be assigned to a particular
project, and the number of students that a given lecturer is willing to supervise.

Implement the problem of allocating students to projects based on these


preference lists and capacity constraints and find the efficiency of the same.
3. Transportation Problem

Consider a situation illustrated in Figure where we have inventory at warehouses to


be transported to retail stores. Each circle in the left column represents a warehouse,
while each circle on the right column represents a store. Each warehouse holds a
certain supply of inventory, given by the number written in its corresponding circle.
Each store demands a certain amount of inventory, given by the number written in its
circle. Each line represents a route through which inventory can be transported from a
warehouse to a store and is marked with a number that indicates per unit inventory
transportation cost. The units are indivisible. The goal is to satisfy demands while
minimizing transportation costs.

Thank you

Disclaimer:

This document is confidential and intended solely for the educational purpose of RMK Group of
Educational Institutions. If you have received this document through email in error, please notify the
system manager. This document contains proprietary information and is intended only to the
respective group / learning community as intended. If you are not the addressee you should not
disseminate, distribute or copy through e-mail. Please notify the sender immediately by e-mail if you
have received this document by mistake and delete this document from your system. If you are not
the intended recipient you are notified that disclosing, copying, distributing or taking any action in
reliance on the contents of this information is strictly prohibited.
