
Vivekananda
Nehru Nagar Post, Puttur, D.K. 574203

Lecture Notes on
21CS42 Design and Analysis of Algorithms

Module-3 : Greedy Method

Contents

1. Introduction to Greedy Method
   1.1. General method
   1.2. Coin Change Problem
   1.3. Knapsack Problem
   1.4. Job Sequencing with Deadlines
2. Minimum Cost Spanning Trees
   2.1. Prim's Algorithm
   2.2. Kruskal's Algorithm
3. Single Source Shortest Paths
   3.1. Dijkstra's Algorithm
4. Optimal Tree Problem
   4.1. Huffman Trees and Codes
5. Transform and Conquer Approach
   5.1. Heaps
   5.2. Heap Sort
Lecture Notes | 10CS43 – Design & Analysis of Algorithms | Module 3: Greedy Method

1. Introduction to Greedy method
1.1 General method
The greedy method is a straightforward design technique applicable to a variety of applications.
The greedy approach suggests constructing a solution through a sequence of steps, each expanding a partially constructed solution obtained so far, until a complete solution to the problem is reached. On each step the choice made must be:
• feasible, i.e., it has to satisfy the problem's constraints
• locally optimal, i.e., it has to be the best local choice among all feasible choices available on that step
• irrevocable, i.e., once made, it cannot be changed on subsequent steps of the algorithm
As a rule, greedy algorithms are both intuitively appealing and simple. Given an optimization problem, it is usually easy to figure out how to proceed in a greedy manner, possibly after considering a few small instances of the problem. What is usually more difficult is to prove that a greedy algorithm yields an optimal solution (when it does).

1.2. Coin Change Problem

Problem Statement: Given coins of several denominations, find a way to give a customer an amount with the fewest number of coins.
Example: if the denominations are 1, 5, 10, 25 and 100 and the change required is 30, some solutions are:
Amount : 30
Solutions : 3 x 10 (3 coins), 6 x 5 (6 coins),
1 x 25 + 5 x 1 (6 coins), 1 x 25 + 1 x 5 (2 coins)
The last solution is the optimal one, as it gives the change with only 2 coins.

Prepared by Harivinod N | www.techjourney.in



The solution to the coin change problem using the greedy technique is very intuitive and is called the cashier's algorithm. The basic principle is: at every iteration, take the largest coin that fits into the remaining amount to be changed at that particular time. At the end you will have an optimal solution (for denomination systems such as the one above; for arbitrary denomination systems the greedy choice need not be optimal).
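The cashier's algorithm described above can be sketched in Python as follows (the function name and list-based interface are choices of this sketch, not from the notes):

```python
def cashiers_change(amount, denominations):
    """Greedy (cashier's) algorithm: repeatedly take the largest coin
    that still fits into the remaining amount."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            amount -= d
            coins.append(d)
    return coins

# The example from the notes: denominations 1, 5, 10, 25, 100 and amount 30.
print(cashiers_change(30, [1, 5, 10, 25, 100]))  # [25, 5] -> 2 coins
```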

1.3. Knapsack Problem

There are several greedy methods to obtain feasible solutions.
a) At each step, fill the knapsack with the object with the largest profit. If the object under consideration does not fit, then a fraction of it is included to fill the knapsack. This method does not always result in an optimal solution. As per this method, the solution to the above problem is as follows:
Select Item-1 with profit p1=25; here w1=18, x1=1. Remaining capacity = 20-18 = 2
Select Item-2 with profit p2=24; here w2=15, x2=2/15. Remaining capacity = 0
Total profit earned = 28.2. This gives the 2nd solution in Example 4.1.


b) At each step, fill the knapsack with the object with the smallest weight.
This gives the 3rd solution in Example 4.1.
c) At each step, include the object with the maximum profit/weight ratio.
This gives the 4th solution in Example 4.1.
This greedy approach always results in an optimal solution (for the fractional knapsack problem).
Algorithm: The algorithm given below assumes that the objects are sorted in non-increasing order of profit/weight ratio.
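Strategy (c), greedy by profit/weight ratio, can be sketched as follows. The instance in the usage line (p = (25, 24, 15), w = (18, 15, 10), capacity m = 20) is only partly quoted in the notes; the third item is assumed from the standard textbook Example 4.1, so treat it as illustrative:

```python
def fractional_knapsack(profits, weights, capacity):
    """Greedy fractional knapsack: consider items in non-increasing
    order of profit/weight ratio; take a fraction of the first item
    that does not fit entirely. Assumes all weights are positive."""
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    total, fractions = 0.0, [0.0] * len(profits)
    for i in order:
        if capacity == 0:
            break
        take = min(weights[i], capacity)   # whole item, or the fitting fraction
        fractions[i] = take / weights[i]
        total += profits[i] * fractions[i]
        capacity -= take
    return total, fractions

total, x = fractional_knapsack([25, 24, 15], [18, 15, 10], 20)
print(total, x)  # 31.5, x = [0.0, 1.0, 0.5]
```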

Analysis:
Disregarding the time to initially sort the objects, each of the above strategies uses O(n) time.

0/1 Knapsack problem

Note: The greedy approach to solve this problem does not necessarily yield an optimal solution.


1.4. Job sequencing with deadlines

The greedy strategy to solve the job sequencing problem is: "At each time, select the job that satisfies the constraints and gives maximum profit, i.e., consider the jobs in non-increasing order of the pi's."
By following this procedure, we get the 3rd solution in Example 4.3. It can be proved that this greedy strategy always results in an optimal solution.

High level description of job sequencing algorithm
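A high-level sketch of this greedy strategy for unit-time jobs (the slot-array representation and the instance in the usage line are illustrative assumptions, not taken from Example 4.3):

```python
def job_sequencing(profits, deadlines):
    """Greedy job sequencing with unit-time jobs: consider jobs in
    non-increasing order of profit; schedule each job in the latest
    free time slot at or before its deadline, skipping it if none."""
    n = len(profits)
    order = sorted(range(n), key=lambda i: profits[i], reverse=True)
    slots = [None] * max(deadlines)        # slot t holds the job run in [t, t+1]
    for i in order:
        for t in range(min(deadlines[i], len(slots)) - 1, -1, -1):
            if slots[t] is None:           # latest free slot before the deadline
                slots[t] = i
                break
    scheduled = [j for j in slots if j is not None]
    return scheduled, sum(profits[j] for j in scheduled)

# Illustrative instance: profits (100, 10, 15, 27), deadlines (2, 1, 1, 1).
sched, profit = job_sequencing([100, 10, 15, 27], [2, 1, 2, 1])
print(sched, profit)  # jobs 3 and 0 scheduled, total profit 127
```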


Algorithm/Program 4.6: Greedy algorithm for sequencing unit-time jobs with deadlines and profits

Analysis:


Fast Job Scheduling Algorithm


Algorithm: Fast Job Scheduling

Analysis


2. Minimum cost spanning trees
Definition: A spanning tree of a connected graph is its connected acyclic subgraph (i.e., a tree) that contains all the vertices of the graph. A minimum spanning tree of a weighted connected graph is its spanning tree of the smallest weight, where the weight of a tree is defined as the sum of the weights on all its edges. The minimum spanning tree problem is the problem of finding a minimum spanning tree for a given weighted connected graph.

2.1. Prim's Algorithm

Prim's algorithm constructs a minimum spanning tree through a sequence of expanding subtrees. The initial subtree in such a sequence consists of a single vertex selected arbitrarily from the set V of the graph's vertices. On each iteration it expands the current tree in the greedy manner by simply attaching to it the nearest vertex not in that tree. (By the nearest vertex, we mean a vertex not in the tree connected to a vertex in the tree by an edge of the smallest weight. Ties can be broken arbitrarily.) The algorithm stops after all the graph's vertices have been included in the tree being constructed. Since the algorithm expands a tree by exactly one vertex on each of its iterations, the total number of such iterations is n − 1, where n is the number of vertices in the graph. The tree generated by the algorithm is obtained as the set of edges used for the tree expansions.
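The expansion step described above can be sketched with Python's heapq as the priority queue. The adjacency-dict representation and the lazy skipping of stale fringe entries are implementation choices of this sketch, not prescribed by the notes:

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm with a min-heap of fringe edges.
    `graph` is an adjacency dict: {u: [(v, weight), ...], ...}."""
    visited = {start}
    heap = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(heap)
    mst, total = [], 0
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)      # nearest fringe vertex v via edge (u, v)
        if v in visited:
            continue                       # stale entry; skip it
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for x, wx in graph[v]:
            if x not in visited:
                heapq.heappush(heap, (wx, v, x))
    return mst, total

# Tiny illustrative graph: a-b (1), b-c (2), a-c (4); MST weight 3.
g = {'a': [('b', 1), ('c', 4)], 'b': [('a', 1), ('c', 2)], 'c': [('a', 4), ('b', 2)]}
print(prim_mst(g, 'a'))
```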


Correctness
Prim's algorithm always yields a minimum spanning tree.

Example: An example of Prim's algorithm is shown below. The parenthesized labels of a vertex in the middle column indicate the nearest tree vertex and edge weight; selected vertices and edges are shown in bold.

Tree vertices | Remaining vertices | Illustration


Analysis of Efficiency
The efficiency of Prim's algorithm depends on the data structures chosen for the graph itself and for the priority queue of the set V − VT whose vertex priorities are the distances to the nearest tree vertices.
1. If a graph is represented by its weight matrix and the priority queue is implemented as an unordered array, the algorithm's running time will be in Θ(|V|^2). Indeed, on each of the |V| − 1 iterations, the array implementing the priority queue is traversed to find and delete the minimum and then to update, if necessary, the priorities of the remaining vertices.
We can implement the priority queue as a min-heap. (A min-heap is a complete binary tree in which every element is less than or equal to its children.) Deletion of the smallest element from and insertion of a new element into a min-heap of size n are O(log n) operations.
2. If a graph is represented by its adjacency lists and the priority queue is implemented as a min-heap, the running time of the algorithm is in O(|E| log |V|).
This is because the algorithm performs |V| − 1 deletions of the smallest element and makes |E| verifications and, possibly, changes of an element's priority in a min-heap of size not exceeding |V|. Each of these operations, as noted earlier, is an O(log |V|) operation. Hence, the running time of this implementation of Prim's algorithm is in
(|V| − 1 + |E|) O(log |V|) = O(|E| log |V|), because, in a connected graph, |V| − 1 ≤ |E|.

2.2. Kruskal's Algorithm

Background
Kruskal's algorithm is another greedy algorithm for the minimum spanning tree problem that also always yields an optimal solution. It is named after Joseph Kruskal. Kruskal's algorithm looks at a minimum spanning tree for a weighted connected graph G = (V, E) as an acyclic subgraph with |V| − 1 edges for which the sum of the edge weights is the smallest. Consequently, the algorithm constructs a minimum spanning tree as an expanding sequence of subgraphs, which are always acyclic but are not necessarily connected on the intermediate stages of the algorithm.

Working
The algorithm begins by sorting the graph's edges in non-decreasing order of their weights. Then, starting with the empty subgraph, it scans this sorted list, adding the next edge on the list to the current subgraph if such an inclusion does not create a cycle and simply skipping the edge otherwise.


Note that ET, the set of edges composing a minimum spanning tree of graph G, is actually a tree in Prim's algorithm but is generally just an acyclic subgraph in Kruskal's algorithm.

Kruskal's algorithm is, in fact, not simpler, because it has to check whether the addition of the next edge to the edges already selected would create a cycle.

We can consider the algorithm's operations as a progression through a series of forests containing all the vertices of a given graph and some of its edges. The initial forest consists of |V| trivial trees, each comprising a single vertex of the graph. The final forest consists of a single tree, which is a minimum spanning tree of the graph. On each iteration, the algorithm takes the next edge (u, v) from the sorted list of the graph's edges, finds the trees containing the vertices u and v, and, if these trees are not the same, unites them in a larger tree by adding the edge (u, v).
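The iteration described above, together with a simple union-find check for the "same tree" test, can be sketched as follows (the path-halving find and naive union are simplifications of this sketch; the edge-list format is an assumption):

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm: scan edges in non-decreasing weight order,
    adding an edge unless it would create a cycle. Cycle detection uses
    a simple union-find (disjoint-set) structure over vertices 0..n-1.
    `edges` is a list of (weight, u, v) tuples."""
    parent = list(range(n))

    def find(x):                          # root of the tree containing x
        while parent[x] != x:
            parent[x] = parent[parent[x]] # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):         # non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                      # different trees: no cycle created
            parent[ru] = rv               # unite the two trees
            mst.append((u, v, w))
            total += w
    return mst, total

print(kruskal_mst(3, [(1, 0, 1), (2, 1, 2), (4, 0, 2)]))  # MST weight 3
```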

Analysis of Efficiency
The crucial check of whether two vertices belong to the same tree can be carried out using union-find algorithms.
The efficiency of Kruskal's algorithm is based on the time needed for sorting the edge weights of a given graph. Hence, with an efficient sorting algorithm, the time efficiency of Kruskal's algorithm will be in O(|E| log |E|).


Illustration
An example of Kruskal's algorithm is shown below. The selected edges are shown in bold.


3. Single source shortest paths
The single-source shortest-paths problem is defined as follows: for a given vertex called the source in a weighted connected graph, find shortest paths to all its other vertices. The single-source shortest-paths problem asks for a family of paths, each leading from the source to a different vertex in the graph, though some paths may, of course, have edges in common.
3.1. Dijkstra's Algorithm
Dijkstra's algorithm is the best-known algorithm for the single-source shortest-paths problem. This algorithm is applicable to undirected and directed graphs with nonnegative weights only.
Working - Dijkstra's algorithm finds the shortest paths to a graph's vertices in order of their distance from a given source.
First, it finds the shortest path from the source to the vertex nearest to it, then to a second nearest, and so on. In general, before its ith iteration commences, the algorithm has already identified the shortest paths to i−1 other vertices nearest to the source. These vertices, the source, and the edges of the shortest paths leading to them from the source form a subtree Ti of the given graph, shown in the figure.
Since all the edge weights are nonnegative, the next vertex nearest to the source can be found among the vertices adjacent to the vertices of Ti. The set of vertices adjacent to the vertices in Ti can be referred to as "fringe vertices"; they are the candidates from which Dijkstra's algorithm selects the next vertex nearest to the source.
To identify the ith nearest vertex, the algorithm computes, for every fringe vertex u, the sum of the distance to the nearest tree vertex v (given by the weight of the edge (v, u)) and the length d_v of the shortest path from the source to v (previously determined by the algorithm), and then selects the vertex with the smallest such sum. The fact that it suffices to compare the lengths of such special paths is the central insight of Dijkstra's algorithm.
To facilitate the algorithm's operations, we label each vertex with two labels.
o The numeric label d indicates the length of the shortest path from the source to this vertex found by the algorithm so far; when a vertex is added to the tree, d indicates the length of the shortest path from the source to that vertex.
o The other label indicates the name of the next-to-last vertex on such a path, i.e., the parent of the vertex in the tree being constructed. (It can be left unspecified for the source s and vertices that are adjacent to none of the current tree vertices.)


With such labeling, finding the next nearest vertex u* becomes a simple task of finding a fringe vertex with the smallest d value. Ties can be broken arbitrarily.
After we have identified a vertex u* to be added to the tree, we need to perform two operations:
o Move u* from the fringe to the set of tree vertices.
o For each remaining fringe vertex u that is connected to u* by an edge of weight w(u*, u) such that d_u* + w(u*, u) < d_u, update the labels of u by u* and d_u* + w(u*, u), respectively.
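The labeling scheme and the two update operations can be sketched with a min-heap priority queue. This is a lazy-deletion variant (stale heap entries are skipped rather than decreased in place), which is an implementation choice of this sketch:

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's algorithm; `graph` is an adjacency dict
    {u: [(v, weight), ...]} with nonnegative weights. Returns the
    numeric distance labels d and the parent (next-to-last) labels."""
    d = {source: 0}
    parent = {source: None}
    heap = [(0, source)]
    while heap:
        du, u = heapq.heappop(heap)        # fringe vertex with smallest d
        if du > d.get(u, float('inf')):
            continue                       # stale entry; skip it
        for v, w in graph.get(u, []):
            if du + w < d.get(v, float('inf')):
                d[v] = du + w              # update the numeric label
                parent[v] = u              # update the parent label
                heapq.heappush(heap, (d[v], v))
    return d, parent

# Tiny directed example: a->b (3), a->d (7), b->d (2); shortest a-to-d is 5 via b.
dist, par = dijkstra({'a': [('b', 3), ('d', 7)], 'b': [('d', 2)], 'd': []}, 'a')
print(dist['d'], par['d'])
```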
Illustration: An example of Dijkstra's algorithm is shown below. The next closest vertex is shown in bold.

The shortest paths (identified by following nonnumeric labels backward from a destination vertex in the left column to the source) and their lengths (given by numeric labels of the tree vertices) are as follows:


The pseudocode of Dijkstra's algorithm is given below. Note that in the following pseudocode, VT contains a given source vertex and the fringe contains the vertices adjacent to it after iteration 0 is completed.

Analysis:
The time efficiency of Dijkstra's algorithm depends on the data structures used for implementing the priority queue and for representing an input graph itself. For graphs represented by their adjacency lists and the priority queue implemented as a min-heap, it is in O(|E| log |V|).

Applications
• Transportation planning and packet routing in communication networks, including the Internet
• Finding shortest paths in social networks, speech recognition, document formatting, robotics, compilers, and airline crew scheduling.


4. Optimal Tree problem
Background
Suppose we have to encode a text that comprises characters from some n-character alphabet by assigning to each of the text's characters some sequence of bits called the codeword. There are two types of encoding: fixed-length encoding and variable-length encoding.
Fixed-length encoding: This method assigns to each character a bit string of the same length m (m >= log2 n). This is exactly what the standard ASCII code does. One way of getting a coding scheme that yields a shorter bit string on average is based on the old idea of assigning shorter codewords to more frequent characters and longer codewords to less frequent characters.
Variable-length encoding: This method assigns codewords of different lengths to different characters, which introduces a problem that fixed-length encoding does not have. Namely, how can we tell how many bits of an encoded text represent the first (or, more generally, the ith) character? To avoid this complication, we can limit ourselves to prefix-free (or simply prefix) codes. In a prefix code, no codeword is a prefix of a codeword of another character. Hence, with such an encoding, we can simply scan a bit string until we get the first group of bits that is a codeword for some character, replace these bits by this character, and repeat this operation until the bit string's end is reached.
If we want to create a binary prefix code for some alphabet, it is natural to associate the alphabet's characters with leaves of a binary tree in which all the left edges are labelled by 0 and all the right edges are labelled by 1 (or vice versa). The codeword of a character can then be obtained by recording the labels on the simple path from the root to the character's leaf. Since there is no simple path to a leaf that continues to another leaf, no codeword can be a prefix of another codeword; hence, any such tree yields a prefix code.
Among the many trees that can be constructed in this manner for a given alphabet with known frequencies of the character occurrences, a tree that assigns shorter bit strings to high-frequency characters and longer ones to low-frequency characters can be constructed by the following greedy algorithm, invented by David Huffman.
4.1 Huffman Trees and Codes
Huffman's Algorithm
Step 1: Initialize n one-node trees and label them with the characters of the alphabet. Record the frequency of each character in its tree's root to indicate the tree's weight. (More generally, the weight of a tree will be equal to the sum of the frequencies in the tree's leaves.)
Step 2: Repeat the following operation until a single tree is obtained. Find two trees with the smallest weights. Make them the left and right subtrees of a new tree and record the sum of their weights in the root of the new tree as its weight.
A tree constructed by the above algorithm is called a Huffman tree. It defines, in the manner described, a Huffman code.
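Steps 1 and 2 can be sketched with a min-heap of (weight, tree) pairs. Here each "tree" is represented simply as a dict from character to codeword-so-far, which is an implementation shortcut of this sketch, not the notes' tree structure; exact codewords may differ from the example below when equal weights are tied, though the codeword lengths (and hence the average) do not:

```python
import heapq
from itertools import count

def huffman_codes(frequencies):
    """Huffman's algorithm: repeatedly merge the two trees of smallest
    weight. Left edges are labelled 0 and right edges 1."""
    tick = count()   # tie-breaker so the heap never compares dicts
    heap = [(f, next(tick), {c: ''}) for c, f in frequencies.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)      # two smallest-weight trees
        f2, _, right = heapq.heappop(heap)
        merged = {c: '0' + code for c, code in left.items()}
        merged.update({c: '1' + code for c, code in right.items()})
        heapq.heappush(heap, (f1 + f2, next(tick), merged))
    return heap[0][2]

freqs = {'A': 0.35, 'B': 0.1, 'C': 0.2, 'D': 0.2, '_': 0.15}
codes = huffman_codes(freqs)
avg = sum(f * len(codes[c]) for c, f in freqs.items())
print(codes, avg)  # average 2.25 bits per symbol, as in the notes' example
```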


Example: Consider the five-symbol alphabet {A, B, C, D, _} with the following occurrence frequencies in a text made up of these symbols:

The Huffman tree construction for the above problem is shown below:

The resulting codewords are as follows:

Hence, DAD is encoded as 011101, and 10011011011101 is decoded as BAD_AD.
With the occurrence frequencies given and the codeword lengths obtained, the average number of bits per symbol in this code is
2 * 0.35 + 3 * 0.1 + 2 * 0.2 + 2 * 0.2 + 3 * 0.15 = 2.25.


Had we used a fixed-length encoding for the same alphabet, we would have to use at least 3 bits per symbol. Thus, for this example, Huffman's code achieves a compression ratio (a standard measure of a compression algorithm's effectiveness) of (3 − 2.25)/3 * 100% = 25%. In other words, Huffman's encoding of the above text will use 25% less memory than its fixed-length encoding.

5. Transform and Conquer Approach
5.1. Heaps
A heap is a partially ordered data structure that is especially suitable for implementing priority queues. A priority queue is a multiset of items with an orderable characteristic called an item's priority, with the following operations:
• finding an item with the highest (i.e., largest) priority
• deleting an item with the highest priority
• adding a new item to the multiset

Notion of the Heap
Definition:
A heap can be defined as a binary tree with keys assigned to its nodes, one key per node, provided the following two conditions are met:
1. The shape property: the binary tree is essentially complete (or simply complete), i.e., all its levels are full except possibly the last level, where only some rightmost leaves may be missing.
2. The parental dominance or heap property: the key in each node is greater than or equal to the keys in its children.

Illustration:
The illustration of the definition of a heap is shown below: only the leftmost tree is a heap. The second one is not a heap, because the tree's shape property is violated (the left child of the last subtree cannot be empty). And the third one is not a heap, because the parental dominance fails for the node with key 5.

Properties of Heap
1. There exists exactly one essentially complete binary tree with n nodes. Its height is equal to floor(log2 n).
2. The root of a heap always contains its largest element.


3. A node of a heap considered with all its descendants is also a heap.
4. A heap can be implemented as an array by recording its elements in the top-down, left-to-right fashion. It is convenient to store the heap's elements in positions 1 through n of such an array, leaving H[0] either unused or putting there a sentinel whose value is greater than every element in the heap. In such a representation,
a. the parental node keys will be in the first floor(n/2) positions of the array, while the leaf keys will occupy the last ceil(n/2) positions;
b. the children of a key in the array's parental position i (1 <= i <= floor(n/2)) will be in positions 2i and 2i + 1, and, correspondingly, the parent of a key in position i (2 <= i <= n) will be in position floor(i/2).

Heap and its array representation

Thus, we could also define a heap as an array H[1..n] in which every element in position i in the first half of the array is greater than or equal to the elements in positions 2i and 2i + 1, i.e.,
H[i] >= max {H[2i], H[2i + 1]} for i = 1, . . . , floor(n/2).
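As a quick check of this array definition (1-based, with H[0] unused as in the notes; the function name is a choice of this sketch):

```python
def is_heap(H):
    """Check parental dominance on a 1-based heap array H[1..n]
    (H[0] unused): H[i] >= max(H[2i], H[2i+1]) for all parental i."""
    n = len(H) - 1
    for i in range(1, n // 2 + 1):          # only the first n//2 positions are parents
        for child in (2 * i, 2 * i + 1):
            if child <= n and H[i] < H[child]:
                return False
    return True

# The array [_, 10, 5, 7, 4, 2, 1] is a heap: children of position i sit
# at positions 2i and 2i + 1, and every parent dominates its children.
print(is_heap([None, 10, 5, 7, 4, 2, 1]))  # True
```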

Constructions of Heap - There are two principal alternatives for constructing a heap:
1) Bottom-up heap construction 2) Top-down heap construction

Bottom-up heap construction:
The bottom-up heap construction algorithm is illustrated below. It initializes the essentially complete binary tree with n nodes by placing keys in the order given and then "heapifies" the tree as follows.
• Starting with the last parental node, the algorithm checks whether the parental dominance holds for the key in this node. If it does not, the algorithm exchanges the node's key K with the larger key of its children and checks whether the parental dominance holds for K in its new position. This process continues until the parental dominance for K is satisfied. (Eventually, it has to, because it holds automatically for any key in a leaf.)
• After completing the "heapification" of the subtree rooted at the current parental node, the algorithm proceeds to do the same for the node's immediate predecessor.
• The algorithm stops after this is done for the root of the tree.
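The heapification described by the bullets above can be sketched for a 1-based array (H[0] unused), following the bottom-up scheme step by step:

```python
def build_heap_bottom_up(H):
    """Bottom-up heap construction on a 1-based array H[1..n]
    (H[0] unused): heapify subtrees from the last parental node
    down to the root."""
    n = len(H) - 1
    for i in range(n // 2, 0, -1):          # last parental node down to the root
        k, v = i, H[i]
        heap = False
        while not heap and 2 * k <= n:
            j = 2 * k
            if j < n and H[j + 1] > H[j]:   # pick the larger of the two children
                j += 1
            if v >= H[j]:
                heap = True                 # parental dominance holds for v here
            else:
                H[k] = H[j]                 # move the larger child up, go down
                k = j
        H[k] = v
    return H

# The list 2, 9, 7, 6, 5, 8 from the illustration heapifies to 9, 6, 8, 2, 5, 7.
print(build_heap_bottom_up([None, 2, 9, 7, 6, 5, 8])[1:])
```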


Illustration
Bottom-up construction of a heap for the list 2, 9, 7, 6, 5, 8. The double-headed arrows show key comparisons verifying the parental dominance.

Analysis of efficiency - bottom-up heap construction algorithm:
Assume, for simplicity, that n = 2^k − 1, so that a heap's tree is full, i.e., the largest possible number of nodes occurs on each level. Let h be the height of the tree.
According to the first property of heaps in the list at the beginning of the section, h = floor(log2 n), or just ceil(log2(n + 1)) − 1 = k − 1 for the specific values of n we are considering.
Each key on level i of the tree will travel to the leaf level h in the worst case of the heap construction algorithm. Since moving to the next level down requires two comparisons, one to find the larger child and the other to determine whether the exchange is required, the total number of key comparisons involving a key on level i will be 2(h − i).
Therefore, the total number of key comparisons in the worst case will be
C_worst(n) = sum over levels i = 0 to h−1 of 2(h − i) * 2^i = 2(n − log2(n + 1)),


where the validity of the last equality can be proved either by using the closed-form formula for the sum (sum over i = 1 to h of i * 2^i = (h − 1) * 2^(h+1) + 2) or by mathematical induction on h.

Thus, with this bottom-up algorithm, a heap of size n can be constructed with fewer than 2n comparisons.

Top-down heap construction algorithm:
It constructs a heap by successive insertions of a new key into a previously constructed heap.
1. First, attach a new node with key K in it after the last leaf of the existing heap.
2. Then sift K up to its appropriate place in the new heap as follows:
a. Compare K with its parent's key: if the latter is greater than or equal to K, stop (the structure is a heap); otherwise, swap these two keys and compare K with its new parent.
b. This swapping continues until K is not greater than its last parent or it reaches the root.
Obviously, this insertion operation cannot require more key comparisons than the heap's height. Since the height of a heap with n nodes is about log2 n, the time efficiency of insertion is in O(log n).
Illustration of inserting a new key: Inserting a new key (10) into the heap constructed below. The new key is sifted up via swaps with its parents until it is not larger than its parent (or is in the root).
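The sifting-up insertion can be sketched as follows (1-based array, H[0] unused):

```python
def heap_insert(H, key):
    """Top-down heap construction step: append the key after the last
    leaf, then sift it up, swapping with its parent while it is larger
    (1-based array, H[0] unused)."""
    H.append(key)
    i = len(H) - 1
    while i > 1 and H[i // 2] < H[i]:       # parent of position i is at i // 2
        H[i // 2], H[i] = H[i], H[i // 2]   # swap key with its parent
        i //= 2
    return H

# Inserting 10 into the heap 9, 6, 8, 2, 5, 7 sifts it up to the root.
print(heap_insert([None, 9, 6, 8, 2, 5, 7], 10)[1:])  # [10, 6, 9, 2, 5, 7, 8]
```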

Delete an item from a heap: Deleting the root's key from a heap can be done with the following algorithm:
Maximum Key Deletion from a heap
1. Exchange the root's key with the last key K of the heap.
2. Decrease the heap's size by 1.
3. "Heapify" the smaller tree by sifting K down the tree exactly in the same way we did it in the bottom-up heap construction algorithm. That is, verify the parental dominance for K: if it holds, we are done; if not, swap K with the larger of its children and repeat this operation until the parental dominance condition holds for K in its new position.
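The three steps can be sketched as follows (1-based array, H[0] unused):

```python
def heap_delete_max(H):
    """Maximum key deletion from a 1-based heap array (H[0] unused):
    exchange the root with the last key, shrink the heap by one, then
    sift the new root down to restore parental dominance."""
    largest = H[1]
    last = H.pop()                   # steps 1 and 2: take the last key, shrink
    n = len(H) - 1
    if n > 0:
        H[1] = last
        k, v = 1, last
        while 2 * k <= n:            # step 3: sift the new root down
            j = 2 * k
            if j < n and H[j + 1] > H[j]:
                j += 1               # larger of the two children
            if v >= H[j]:
                break                # parental dominance restored
            H[k] = H[j]
            k = j
        H[k] = v
    return largest

H = [None, 9, 6, 8, 2, 5, 7]
print(heap_delete_max(H), H[1:])  # 9 removed; remaining heap [8, 6, 7, 2, 5]
```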


Illustration

The efficiency of deletion is determined by the number of key comparisons needed to "heapify" the tree after the swap has been made and the size of the tree is decreased by 1. Since this cannot require more key comparisons than twice the heap's height, the time efficiency of deletion is in O(log n) as well.

5.2. Heap Sort

Heapsort is an interesting sorting algorithm discovered by J. W. J. Williams. This is a two-stage algorithm that works as follows:
Stage 1 (heap construction): Construct a heap for a given array.
Stage 2 (maximum deletions): Apply the root-deletion operation n − 1 times to the remaining heap.
As a result, the array elements are eliminated in decreasing order. But since, under the array implementation of heaps, an element being deleted is placed last, the resulting array will be exactly the original array sorted in increasing order.
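The two stages can be combined into one routine; this sketch uses a 0-based array (children of position k at 2k + 1 and 2k + 2) rather than the notes' 1-based convention:

```python
def heapsort(a):
    """Two-stage heapsort: build a max-heap bottom-up, then apply the
    root-deletion operation n-1 times; each deleted maximum lands at the
    end, so the array ends up sorted in increasing order (in place)."""
    n = len(a)

    def sift_down(k, end):
        v = a[k]
        while 2 * k + 1 < end:
            j = 2 * k + 1
            if j + 1 < end and a[j + 1] > a[j]:
                j += 1                      # larger of the two children
            if v >= a[j]:
                break
            a[k] = a[j]
            k = j
        a[k] = v

    for k in range(n // 2 - 1, -1, -1):     # stage 1: bottom-up heap construction
        sift_down(k, n)
    for end in range(n - 1, 0, -1):         # stage 2: n-1 maximum deletions
        a[0], a[end] = a[end], a[0]         # deleted maximum goes to position end
        sift_down(0, end)
    return a

print(heapsort([2, 9, 7, 6, 5, 8]))  # [2, 5, 6, 7, 8, 9]
```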

Heap sort traced on a specific input is shown below:


Analysis of efficiency:
Since we already know that the heap construction stage of the algorithm is in O(n), we have to investigate just the time efficiency of the second stage. For the number of key comparisons, C(n), needed for eliminating the root keys from the heaps of diminishing sizes from n to 2, we get the following inequality:
C(n) <= 2 floor(log2(n − 1)) + 2 floor(log2(n − 2)) + . . . + 2 floor(log2 1) <= 2 (sum over i = 1 to n−1 of log2 i) <= 2(n − 1) log2 n

This means that C(n) ∈ O(n log n) for the second stage of heapsort.

For both stages, we get O(n) + O(n log n) = O(n log n).
A more detailed analysis shows that the time efficiency of heapsort is, in fact, in Θ(n log n) in both the worst and average cases. Thus, heapsort's time efficiency falls in the same class as that of mergesort.
Unlike the latter, heapsort is in-place, i.e., it does not require any extra storage. Timing experiments on random files show that heapsort runs more slowly than quicksort but can be competitive with mergesort.

*****
