m3 Notes
Nehru Nagar Post, Puttur, D.K. 574203
Lecture Notes on
21CS42 Design and Analysis of Algorithms
Module-3 : Greedy Method
Contents
1. Introduction to Greedy method
1.1 General method
The greedy method is a straightforward design technique applicable to a variety of applications.
The greedy approach suggests constructing a solution through a sequence of steps, each expanding a partially constructed solution obtained so far, until a complete solution to the problem is reached. On each step the choice made must be:
• feasible, i.e., it has to satisfy the problem’s constraints
• locally optimal, i.e., it has to be the best local choice among all feasible choices available on that step
• irrevocable, i.e., once made, it cannot be changed on subsequent steps of the algorithm
As a rule, greedy algorithms are both intuitively appealing and simple. Given an optimization problem, it is usually easy to figure out how to proceed in a greedy manner, possibly after considering a few small instances of the problem. What is usually more difficult is to prove that a greedy algorithm yields an optimal solution (when it does).
The solution for the coin change problem using a greedy algorithm is very intuitive and is called the cashier’s algorithm. The basic principle is: at every iteration, when searching for a coin, take the largest coin that fits into the remaining amount to be changed at that particular time. At the end you will have an optimal solution.
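The cashier’s rule described above can be sketched in Python (the function name and the US-style denominations in the example are illustrative, not from the notes):

```python
def cashier_change(amount, denominations):
    """Greedy coin change: repeatedly take the largest coin that
    still fits into the remaining amount to be changed."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            amount -= coin
            coins.append(coin)
    return coins

# With denominations 25, 10, 5, 1, changing 48 greedily gives
# 25 + 10 + 10 + 1 + 1 + 1, which is also optimal here.
```

Note that, as the analysis below points out, this greedy choice is not optimal for every denomination system.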
Analysis:
Disregarding the time to initially sort the objects, each of the above strategies uses O(n) time.
Note: The greedy approach to solve this problem does not necessarily yield an optimal solution.
High level description of job sequencing algorithm
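The description itself is not reproduced in these notes; as a hedged sketch of the standard greedy for job sequencing with deadlines (assuming unit-time jobs: consider jobs in nonincreasing order of profit and place each in the latest still-free slot on or before its deadline):

```python
def job_sequencing(jobs):
    """jobs: list of (job_id, deadline, profit) with unit execution times.
    Greedy: take jobs in nonincreasing order of profit; schedule each
    in the latest free time slot not after its deadline, if any."""
    max_deadline = max(d for _, d, _ in jobs)
    slot = [None] * (max_deadline + 1)   # slot[t] = job done in slot t
    for job_id, deadline, profit in sorted(jobs, key=lambda j: -j[2]):
        t = deadline
        while t > 0 and slot[t] is not None:
            t -= 1                       # search for a free earlier slot
        if t > 0:
            slot[t] = job_id
    scheduled = [j for j in slot if j is not None]
    total = sum(p for i, _, p in jobs if i in scheduled)
    return scheduled, total
```

For example, jobs a(deadline 2, profit 100), b(1, 19), c(2, 27), d(1, 25), e(3, 15) yield the schedule c, a, e with total profit 142.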
Analysis:
Correctness
Prim’s algorithm always yields a minimum spanning tree.
Analysis of Efficiency
The efficiency of Prim’s algorithm depends on the data structures chosen for the graph itself and for the priority queue of the set V − VT whose vertex priorities are the distances to the nearest tree vertices.
1. If a graph is represented by its weight matrix and the priority queue is implemented as an unordered array, the algorithm’s running time will be in Θ(|V|²). Indeed, on each of the |V| − 1 iterations, the array implementing the priority queue is traversed to find and delete the minimum and then to update, if necessary, the priorities of the remaining vertices.
We can implement the priority queue as a min-heap. (A min-heap is a complete binary tree in which every element is less than or equal to its children.) Deletion of the smallest element from and insertion of a new element into a min-heap of size n are O(log n) operations.
2. If a graph is represented by its adjacency lists and the priority queue is implemented as a min-heap, the running time of the algorithm is in O(|E| log |V|).
This is because the algorithm performs |V| − 1 deletions of the smallest element and makes |E| verifications and, possibly, changes of an element’s priority in a min-heap of size not exceeding |V|. Each of these operations, as noted earlier, is an O(log |V|) operation. Hence, the running time of this implementation of Prim’s algorithm is in
(|V| − 1 + |E|) O(log |V|) = O(|E| log |V|) because, in a connected graph, |V| − 1 ≤ |E|.
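Case 2 above (adjacency lists plus a binary min-heap) can be sketched in Python; the lazy deletion of stale heap entries below is an implementation choice, not something the notes prescribe:

```python
import heapq

def prim_mst(graph, start):
    """graph: adjacency dict {u: [(v, weight), ...]} for an undirected,
    connected graph. Returns (MST edges, total weight) using a binary
    min-heap keyed on edge weight."""
    visited = {start}
    heap = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(heap)
    mst, total = [], 0
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)    # cheapest edge leaving the tree
        if v in visited:
            continue                     # stale entry: endpoint already in tree
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for x, wx in graph[v]:
            if x not in visited:
                heapq.heappush(heap, (wx, v, x))
    return mst, total
```

Each vertex is deleted from the heap once and each edge causes at most one insertion, matching the (|V| − 1 + |E|) O(log |V|) bound above.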
Working
The algorithm begins by sorting the graph's edges in nondecreasing order of their weights. Then, starting with the empty subgraph, it scans this sorted list, adding the next edge on the list to the current subgraph if such an inclusion does not create a cycle and simply skipping the edge otherwise.
We can consider the algorithm's operations as a progression through a series of forests containing all the vertices of a given graph and some of its edges. The initial forest consists of |V| trivial trees, each comprising a single vertex of the graph. The final forest consists of a single tree, which is a minimum spanning tree of the graph. On each iteration, the algorithm takes the next edge (u, v) from the sorted list of the graph's edges, finds the trees containing the vertices u and v, and, if these trees are not the same, unites them in a larger tree by adding the edge (u, v).
Analysis of Efficiency
The crucial check of whether two vertices belong to the same tree can be carried out using union-find algorithms.
The efficiency of Kruskal’s algorithm is based on the time needed for sorting the edge weights of a given graph. Hence, with an efficient sorting algorithm, the time efficiency of Kruskal's algorithm will be in O(|E| log |E|).
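The sort-then-union-find scheme described above can be sketched in Python (the path-compression detail is an illustrative implementation choice):

```python
def kruskal_mst(num_vertices, edges):
    """edges: list of (weight, u, v) with vertices numbered 0..num_vertices-1.
    Scans edges in nondecreasing weight order; union-find detects cycles."""
    parent = list(range(num_vertices))

    def find(x):
        """Root of x's tree, with path compression."""
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):        # nondecreasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                     # different trees: no cycle created
            parent[ru] = rv              # unite the two trees
            mst.append((u, v, w))
            total += w
    return mst, total
```

The sort dominates, giving the O(|E| log |E|) bound stated above.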
Illustration
An example of Kruskal’s algorithm is shown below. The selected edges are shown in bold.
The pseudocode of Dijkstra’s algorithm is given below. Note that in the following pseudocode, VT contains a given source vertex and the fringe contains the vertices adjacent to it after iteration 0 is completed.
Analysis:
The time efficiency of Dijkstra’s algorithm depends on the data structures used for implementing the priority queue and for representing an input graph itself. For graphs represented by their adjacency lists and the priority queue implemented as a min-heap, it is in O(|E| log |V|).
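The adjacency-list/min-heap implementation just analysed can be sketched in Python (skipping outdated heap entries instead of decreasing keys is an implementation choice, not from the notes):

```python
import heapq

def dijkstra(graph, source):
    """graph: adjacency dict {u: [(v, weight), ...]} with nonnegative
    weights. Returns shortest-path distances from source via a min-heap."""
    dist = {source: 0}
    heap = [(0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)       # closest fringe vertex
        if u in done:
            continue                     # outdated heap entry
        done.add(u)
        for v, w in graph.get(u, []):
            if v not in done and d + w < dist.get(v, float('inf')):
                dist[v] = d + w          # relax edge (u, v)
                heapq.heappush(heap, (dist[v], v))
    return dist
```

Each edge triggers at most one heap insertion, so the heap never holds more than O(|E|) entries, matching the O(|E| log |V|) bound.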
Applications
• Transportation planning and packet routing in communication networks, including the Internet
• Finding shortest paths in social networks, speech recognition, document formatting, robotics, compilers, and airline crew scheduling
Properties of Heap
1. There exists exactly one essentially complete binary tree with n nodes. Its height is equal to ⌊log₂ n⌋.
2. The root of a heap always contains its largest element.
Illustration
Bottom-up construction of a heap for the list 2, 9, 7, 6, 5, 8. The double-headed arrows show key comparisons verifying the parental dominance.
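The bottom-up construction just illustrated can be sketched in Python (function names are illustrative): start at the last parental node and sift each key down until parental dominance holds.

```python
def sift_down(heap, i, n):
    """Restore parental dominance at index i of the max-heap heap[0:n]."""
    while 2 * i + 1 < n:
        j = 2 * i + 1                    # left child
        if j + 1 < n and heap[j + 1] > heap[j]:
            j += 1                       # right child is the larger one
        if heap[i] >= heap[j]:
            break                        # parental dominance holds
        heap[i], heap[j] = heap[j], heap[i]
        i = j

def build_heap(heap):
    """Bottom-up max-heap construction, in place."""
    n = len(heap)
    for i in range(n // 2 - 1, -1, -1):  # last parental node down to root
        sift_down(heap, i, n)

# Applied to [2, 9, 7, 6, 5, 8], this produces the heap [9, 6, 8, 2, 5, 7].
```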
Analysis of efficiency:
Since we already know that the heap construction stage of the algorithm is in O(n), we have to investigate just the time efficiency of the second stage. For the number of key comparisons, C(n), needed for eliminating the root keys from the heaps of diminishing sizes from n to 2, we get the following inequality:
C(n) ≤ 2⌊log₂ (n − 1)⌋ + 2⌊log₂ (n − 2)⌋ + ... + 2⌊log₂ 1⌋ ≤ 2(n − 1) log₂ (n − 1) ≤ 2n log₂ n
For both stages, we get O(n) + O(n log n) = O(n log n).
A more detailed analysis shows that the time efficiency of heapsort is, in fact, in Θ(n log n) in both the worst and average cases. Thus, heapsort’s time efficiency falls in the same class as that of mergesort. Unlike the latter, heapsort is in-place, i.e., it does not require any extra storage. Timing experiments on random files show that heapsort runs more slowly than quicksort but can be competitive with mergesort.
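The two stages of heapsort can be sketched together in Python (a self-contained sketch of the standard in-place algorithm; names are illustrative):

```python
def heapsort(a):
    """In-place heapsort: stage 1 builds a max-heap bottom-up in O(n);
    stage 2 repeatedly swaps the root with the last heap element and
    sifts down over a heap of diminishing size, in O(n log n)."""
    def sift_down(i, n):
        while 2 * i + 1 < n:
            j = 2 * i + 1
            if j + 1 < n and a[j + 1] > a[j]:
                j += 1                   # larger child
            if a[i] >= a[j]:
                break
            a[i], a[j] = a[j], a[i]
            i = j

    n = len(a)
    for i in range(n // 2 - 1, -1, -1):  # stage 1: heap construction
        sift_down(i, n)
    for end in range(n - 1, 0, -1):      # stage 2: root removals
        a[0], a[end] = a[end], a[0]      # move current maximum to its place
        sift_down(0, end)
```

For example, heapsort applied to the list 2, 9, 7, 6, 5, 8 from the illustration sorts it into 2, 5, 6, 7, 8, 9 without any extra storage.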
*****