Ada Notes Mod 3

The document covers the design and analysis of algorithms, focusing on greedy methods and heap data structures. It explains Huffman encoding for data compression, the properties and construction of heaps, and the efficiency of heap operations. Additionally, it discusses sorting algorithms, particularly heapsort and counting sort, emphasizing space-time trade-offs in algorithm design.


Lecture Notes | 10CS43 – Design & Analysis of Algorithms | Module 3: Greedy Method

Had we used a fixed-length encoding for the same alphabet, we would have to use at least 3
bits per symbol. Thus, for this example, Huffman’s code achieves a compression ratio
(a standard measure of a compression algorithm’s effectiveness) of (3 − 2.25)/3 · 100% = 25%.
In other words, Huffman’s encoding of the above text will use 25% less memory than its
fixed-length encoding.
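The average code length behind this ratio can be computed with a small Python sketch. The symbol frequencies below are assumed for illustration (they match the classic textbook example) and are not taken from the text above.

```python
import heapq

def huffman_code_lengths(freqs):
    # Build a Huffman tree with a min-heap; instead of storing the tree,
    # track each symbol's code length (its depth grows by one whenever
    # the group containing it is merged).
    depth = {s: 0 for s in freqs}
    heap = [(w, i, [s]) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    uid = len(heap)                       # tie-breaker for equal weights
    while len(heap) > 1:
        w1, _, g1 = heapq.heappop(heap)   # two smallest-weight groups
        w2, _, g2 = heapq.heappop(heap)
        for s in g1 + g2:
            depth[s] += 1                 # merging adds one bit per symbol
        heapq.heappush(heap, (w1 + w2, uid, g1 + g2))
        uid += 1
    return depth

# Assumed frequencies (per 100 symbols); not from the text above.
freqs = {"A": 35, "B": 10, "C": 20, "D": 20, "_": 15}
lengths = huffman_code_lengths(freqs)
avg = sum(freqs[s] * lengths[s] for s in freqs) / 100
print(avg)                       # 2.25 bits per symbol
print((3 - avg) / 3 * 100)       # 25.0 (% saved vs. a 3-bit fixed code)
```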

5. Transform and Conquer Approach

5.1. Heaps
A heap is a partially ordered data structure that is especially suitable for implementing priority
queues. A priority queue is a multiset of items with an orderable characteristic, called an item’s
priority, with the following operations:
• finding an item with the highest (i.e., largest) priority
• deleting an item with the highest priority
• adding a new item to the multiset
Notion of the Heap
Definition:
A heap can be defined as a binary tree with keys assigned to its nodes, one key per node,
provided the following two conditions are met:
1. The shape property: the binary tree is essentially complete (or simply complete),
i.e., all its levels are full except possibly the last level, where only some rightmost
leaves may be missing.
2. The parental dominance or heap property: the key in each node is greater than or
equal to the keys in its children.
Illustration:
The illustration of the definition of a heap is shown below: only the leftmost tree is a heap. The
second one is not a heap, because the tree’s shape property is violated: the left child of the last
subtree cannot be empty. And the third one is not a heap, because parental dominance fails for
the node with key 5.

Properties of Heap
1. There exists exactly one essentially complete binary tree with n nodes. Its height is
equal to ⌊log2 n⌋.
2. The root of a heap always contains its largest element.

Prepared by Harivinod N www.techjourney.in Page| 3.19



3. A node of a heap considered with all its descendants is also a heap.
4. A heap can be implemented as an array by recording its elements in the top-down,
left-to-right fashion. It is convenient to store the heap’s elements in positions 1
through n of such an array, leaving H[0] either unused or putting there a sentinel
whose value is greater than every element in the heap. In such a representation,
a. the parental node keys will be in the first ⌊n/2⌋ positions of the array, while
the leaf keys will occupy the last ⌈n/2⌉ positions;
b. the children of a key in the array’s parental position i (1 ≤ i ≤ ⌊n/2⌋) will be in
positions 2i and 2i + 1, and, correspondingly, the parent of a key in position i
(2 ≤ i ≤ n) will be in position ⌊i/2⌋.

Heap and its array representation

Thus, we could also define a heap as an array H[1..n] in which every element in position i in
the first half of the array is greater than or equal to the elements in positions 2i and 2i + 1,
i.e.,
H[i] ≥ max {H[2i], H[2i + 1]} for i = 1, . . . , ⌊n/2⌋
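This array definition translates directly into a checking routine. The sketch below uses a plain 0-based Python list, so the children of position i sit at 2i + 1 and 2i + 2; the example arrays are made up for illustration.

```python
def is_heap(h):
    """Check the array definition of a (max-)heap: every parental key is
    greater than or equal to both of its children's keys. Here h is a
    0-based Python list, so children of i are at 2i + 1 and 2i + 2."""
    n = len(h)
    for i in range(n // 2):            # parental positions only
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and h[i] < h[left]:
            return False
        if right < n and h[i] < h[right]:
            return False
    return True

print(is_heap([10, 5, 7, 4, 2, 1]))    # True
print(is_heap([10, 5, 7, 2, 8]))       # False: 8 is a child of 5
```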

Constructions of Heap - There are two principal alternatives for constructing a heap:
1) Bottom-up heap construction 2) Top-down heap construction

Bottom-up heap construction:
The bottom-up heap construction algorithm is illustrated below. It initializes the essentially
complete binary tree with n nodes by placing keys in the order given and then “heapifies” the
tree as follows.
• Starting with the last parental node, the algorithm checks whether the parental
dominance holds for the key in this node. If it does not, the algorithm exchanges the
node’s key K with the larger key of its children and checks whether the parental
dominance holds for K in its new position. This process continues until the parental
dominance for K is satisfied. (Eventually, it has to because it holds automatically for
any key in a leaf.)
• After completing the “heapification” of the subtree rooted at the current parental
node, the algorithm proceeds to do the same for the node’s immediate predecessor.
• The algorithm stops after this is done for the root of the tree.




Illustration
Bottom-up construction of a heap for the list 2, 9, 7, 6, 5, 8. The double-headed arrows show
key comparisons verifying the parental dominance.

Analysis of efficiency - bottom-up heap construction algorithm:
Assume, for simplicity, that n = 2^k − 1 so that a heap’s tree is full, i.e., the largest possible
number of nodes occurs on each level. Let h be the height of the tree.
According to the first property of heaps in the list at the beginning of the section,
h = ⌊log2 n⌋, or just ⌈log2(n + 1)⌉ − 1 = k − 1 for the specific values of n we are considering.
Each key on level i of the tree will travel to the leaf level h in the worst case of the heap
construction algorithm. Since moving to the next level down requires two comparisons (one
to find the larger child and the other to determine whether the exchange is required), the total
number of key comparisons involving a key on level i will be 2(h − i).
Therefore, the total number of key comparisons in the worst case will be
C_worst(n) = Σ_{i=0}^{h−1} Σ_{keys on level i} 2(h − i) = Σ_{i=0}^{h−1} 2(h − i)2^i = 2(n − log2(n + 1)),




where the validity of the last equality can be proved either by using the closed-form formula
for the sum Σ_{i=1}^{h} i2^i = (h − 1)2^{h+1} + 2 or by mathematical induction on h.
Thus, with this bottom-up algorithm, a heap of size n can be constructed with fewer than 2n
comparisons.

Top-down heap construction algorithm:
It constructs a heap by successive insertions of a new key into a previously constructed heap.
1. First, attach a new node with key K in it after the last leaf of the existing heap.
2. Then sift K up to its appropriate place in the new heap as follows.
a. Compare K with its parent’s key: if the latter is greater than or equal to K, stop (the
structure is a heap); otherwise, swap these two keys and compare K with its new
parent.
b. This swapping continues until K is not greater than its last parent or it reaches the root.
Obviously, this insertion operation cannot require more key comparisons than the heap’s
height. Since the height of a heap with n nodes is about log2 n, the time efficiency of insertion
is in O(log n).
Illustration of inserting a new key: Inserting a new key (10) into the heap constructed below.
The new key is sifted up via swaps with its parents until it is not larger than its parent (or
reaches the root).
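The sift-up insertion step can be sketched in Python as follows (0-based list, so the parent of position i is (i − 1) // 2; the starting heap is assumed for illustration):

```python
def heap_insert(h, key):
    """Top-down construction step: append the new key after the last
    leaf, then sift it up by swapping with its parent while the parent's
    key is smaller. 0-based list: parent of i is (i - 1) // 2."""
    h.append(key)
    i = len(h) - 1
    while i > 0 and h[(i - 1) // 2] < h[i]:
        h[(i - 1) // 2], h[i] = h[i], h[(i - 1) // 2]   # swap with parent
        i = (i - 1) // 2
    return h

print(heap_insert([9, 6, 8, 2, 5, 7], 10))   # [10, 6, 9, 2, 5, 7, 8]
```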

Delete an item from a heap: Deleting the root’s key from a heap can be done with the
following algorithm:
Maximum Key Deletion from a heap
1. Exchange the root’s key with the last key K of the heap.
2. Decrease the heap’s size by 1.
3. “Heapify” the smaller tree by sifting K down the tree exactly in the same way we did
it in the bottom-up heap construction algorithm. That is, verify the parental
dominance for K: if it holds, we are done; if not, swap K with the larger of its children
and repeat this operation until the parental dominance condition holds for K in its new
position.
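A Python sketch of this maximum-key deletion (0-based list; the example heap is assumed for illustration):

```python
def delete_max(h):
    """Maximum key deletion: swap the root with the last key, shrink the
    heap by one, then sift the new root down until parental dominance
    holds again."""
    h[0], h[-1] = h[-1], h[0]
    maximum = h.pop()               # steps 1 and 2: remove the old root
    k, n = 0, len(h)
    while 2 * k + 1 <= n - 1:       # step 3: restore parental dominance
        j = 2 * k + 1
        if j < n - 1 and h[j] < h[j + 1]:
            j += 1                  # larger of the two children
        if h[k] >= h[j]:
            break
        h[k], h[j] = h[j], h[k]
        k = j
    return maximum, h

print(delete_max([9, 8, 6, 2, 5, 1]))   # (9, [8, 5, 6, 2, 1])
```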




Illustration

The efficiency of deletion is determined by the number of key comparisons needed to
“heapify” the tree after the swap has been made and the size of the tree is decreased by 1.
Since this cannot require more key comparisons than twice the heap’s height, the time
efficiency of deletion is in O(log n) as well.

5.2. Heap Sort

Heapsort is an interesting sorting algorithm discovered by J. W. J. Williams. This is a two-
stage algorithm that works as follows.
Stage 1 (heap construction): Construct a heap for a given array.
Stage 2 (maximum deletions): Apply the root-deletion operation n − 1 times to the
remaining heap.
As a result, the array elements are eliminated in decreasing order. But since under the array
implementation of heaps an element being deleted is placed last, the resulting array will be
exactly the original array sorted in increasing order.

Heapsort traced on a specific input is shown below:
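The two stages can be combined into one self-contained sketch (0-based array, in place; the sample input is assumed for illustration):

```python
def heapsort(a):
    """Heapsort sketch: stage 1 builds a max-heap bottom-up; stage 2
    repeatedly swaps the root with the last heap element and sifts the
    new root down over a heap of diminishing size."""
    def sift_down(h, k, n):
        while 2 * k + 1 < n:
            j = 2 * k + 1
            if j + 1 < n and h[j] < h[j + 1]:
                j += 1                    # larger child
            if h[k] >= h[j]:
                return
            h[k], h[j] = h[j], h[k]
            k = j

    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # stage 1: heap construction
        sift_down(a, i, n)
    for end in range(n - 1, 0, -1):       # stage 2: n - 1 root deletions
        a[0], a[end] = a[end], a[0]       # deleted maximum goes last
        sift_down(a, 0, end)
    return a

print(heapsort([2, 9, 7, 6, 5, 8]))   # [2, 5, 6, 7, 8, 9]
```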




Analysis of efficiency:
Since we already know that the heap construction stage of the algorithm is in O(n), we have
to investigate just the time efficiency of the second stage. For the number of key
comparisons, C(n), needed for eliminating the root keys from the heaps of diminishing sizes
from n to 2, we get the following inequality:

C(n) ≤ 2⌊log2(n − 1)⌋ + 2⌊log2(n − 2)⌋ + . . . + 2⌊log2 1⌋ ≤ 2 Σ_{i=1}^{n−1} log2 i
≤ 2(n − 1) log2(n − 1) ≤ 2n log2 n

This means that C(n) ∈ O(n log n) for the second stage of heapsort.

For both stages, we get O(n) + O(n log n) = O(n log n).
A more detailed analysis shows that the time efficiency of heapsort is, in fact, in Θ(n log n)
in both the worst and average cases. Thus, heapsort’s time efficiency falls in the same class
as that of mergesort.
Unlike the latter, heapsort is in-place, i.e., it does not require any extra storage. Timing
experiments on random files show that heapsort runs more slowly than quicksort but can be
competitive with mergesort.

*****



21CS42 | Design and Analysis of Algorithm | SEARCH CREATORS.

Chapter-IV

Space-Time Tradeoffs

4.8. Introduction

• Space and time trade-offs in algorithm design are a well-known issue for
both theoreticians and practitioners of computing.
• Consider, as an example, the problem of computing values of a function at
many points in its domain. If it is time that is at a premium, we can
precompute the function’s values and store them in a table.
• This is exactly what human computers had to do before the advent of
electronic computers, in the process burdening libraries with thick volumes
of mathematical tables.
• Though such tables have lost much of their appeal with the widespread use
of electronic computers, the underlying idea has proven to be quite useful in
the development of several important algorithms for other problems.
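A minimal Python sketch of this space-for-time idea; the choice of factorial and the domain 0..20 are arbitrary illustrations.

```python
import math

# Spend O(n) time and space once to precompute a function's values,
# then answer each query with a constant-time table lookup.
TABLE = {x: math.factorial(x) for x in range(21)}

def factorial_lookup(x):
    return TABLE[x]          # O(1) lookup instead of recomputation

print(factorial_lookup(10))  # 3628800
```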

4.9. Sorting By Counting

• One rather obvious idea is to count, for each element of a list to be


sorted, the total number of elements smaller than this element and record
the results in a table.
• These numbers will indicate the positions of the elements in the sorted
list: e.g., if the count is 10 for some element, it should be in the 11th
position (with index 10, if we start counting with 0) in the sorted array.
• Thus, we will be able to sort the list by simply copying its elements to
their appropriate positions in a new, sorted list. This algorithm is called
comparison-counting sort.
Search Creators... Page 31

Fig: Example of sorting by comparison counting

ALGORITHM ComparisonCountingSort(A[0..n − 1])
//Sorts an array by comparison counting
//Input: An array A[0..n − 1] of orderable elements
//Output: Array S[0..n − 1] of A’s elements sorted in nondecreasing order
for i ← 0 to n − 1 do Count[i] ← 0
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i] < A[j]
            Count[j] ← Count[j] + 1
        else Count[i] ← Count[i] + 1
for i ← 0 to n − 1 do S[Count[i]] ← A[i]
return S
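A runnable Python rendering of the pseudocode above (a sketch; the sample array is assumed for illustration):

```python
def comparison_counting_sort(a):
    """Sort by counting, for each element, how many elements must
    precede it; O(n^2) comparisons, shown for illustration only."""
    n = len(a)
    count = [0] * n
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] < a[j]:
                count[j] += 1    # a[j] is larger: one more element below it
            else:
                count[i] += 1    # exactly one of the pair is incremented,
                                 # so equal keys still get distinct positions
    s = [None] * n
    for i in range(n):
        s[count[i]] = a[i]       # count[i] is a[i]'s final position
    return s

print(comparison_counting_sort([62, 31, 84, 96, 19, 47]))
# [19, 31, 47, 62, 84, 96]
```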


EXAMPLE: Consider sorting the array 13, 11, 12, 13, 12, 12, whose values are known to come from the set {11, 12, 13}.


ALGORITHM DistributionCountingSort(A[0..n − 1], l, u)
//Sorts an array of integers from a limited range by distribution counting
//Input: An array A[0..n − 1] of integers between l and u (l ≤ u)
//Output: Array S[0..n − 1] of A’s elements sorted in nondecreasing order
for j ← 0 to u − l do D[j] ← 0 //initialize frequencies
for i ← 0 to n − 1 do D[A[i] − l] ← D[A[i] − l] + 1 //compute frequencies
for j ← 1 to u − l do D[j] ← D[j − 1] + D[j] //reuse for distribution
for i ← n − 1 downto 0 do
    j ← A[i] − l
    S[D[j] − 1] ← A[i]
    D[j] ← D[j] − 1
return S
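A Python sketch of distribution counting (the input array and bounds are assumed for illustration):

```python
def distribution_counting_sort(a, l, u):
    """Sort integers in the range [l, u] by distribution counting:
    count frequencies, turn them into cumulative distribution values,
    then place elements in a right-to-left pass (which keeps the sort
    stable). The decrement-then-place order used here is equivalent to
    the S[D[j] - 1] indexing of the pseudocode above."""
    d = [0] * (u - l + 1)
    for v in a:                      # compute frequencies
        d[v - l] += 1
    for j in range(1, u - l + 1):    # reuse for distribution values
        d[j] += d[j - 1]
    s = [None] * len(a)
    for v in reversed(a):            # right-to-left pass for stability
        d[v - l] -= 1
        s[d[v - l]] = v
    return s

print(distribution_counting_sort([13, 11, 12, 13, 12, 12], 11, 13))
# [11, 12, 12, 12, 13, 13]
```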


4.10. Input Enhancement in String Matching

• The problem of string matching requires finding an occurrence of a given
string of m characters, called the pattern, in a longer string of n characters,
called the text.
• The brute-force algorithm simply matches corresponding pairs of characters in
the pattern and the text left to right and, if a mismatch occurs, shifts the pattern
one position to the right for the next trial.
• Since the maximum number of such trials is n − m + 1 and, in the worst
case, m comparisons need to be made on each of them, the worst-case
efficiency of the brute-force algorithm is in the O(nm) class.
• Several faster algorithms have been discovered.
• Most of them exploit the input-enhancement idea: preprocess the pattern to
get some information about it, store this information in a table, and then use
this information during an actual search for the pattern in a given text.

Horspool’s Algorithm

Consider, as an example, searching for the pattern BARBER in some text:

In general, the following four possibilities can occur.


Case 1: If there are no c’s in the pattern—e.g., c is letter S in our example—

we can safely shift the pattern by its entire length

Case 2: If there are occurrences of character c in the pattern but it is not the last

one there—e.g., c is letter B in our example—the shift should align the rightmost

occurrence of c in the pattern with the c in the text:

Case 3: If c happens to be the last character in the pattern but there are no c’s

among its other m − 1 characters—e.g., c is letter R in our example—the situation

is similar to that of Case 1 and the pattern should be shifted by the entire pattern’s

length m:


Case 4: Finally, if c happens to be the last character in the pattern and there

are other c’s among its first m − 1 characters—e.g., c is letter R in our example—

the situation is similar to that of Case 2 and the rightmost occurrence of c among

the first m − 1 characters in the pattern should be aligned with the text’s c:

Horspool’s algorithm

Step 1: For a given pattern of length m and the alphabet used in both the pattern
and text, construct the shift table as described above.

Step 2: Align the pattern against the beginning of the text.

Step 3: Repeat the following until either a matching substring is found or the
pattern reaches beyond the last character of the text.

Starting with the last character in the pattern, compare the corresponding
characters in the pattern and text until either all m characters are matched
(then stop) or a mismatching pair is encountered. In the latter case, retrieve
the entry t(c) from the shift table, where c is the text’s character currently
aligned against the last character of the pattern, and shift the pattern by t(c)
characters to the right along the text.


ALGORITHM HorspoolMatching(P[0..m − 1], T[0..n − 1])
//Implements Horspool’s algorithm for string matching
//Input: Pattern P[0..m − 1] and text T[0..n − 1]
//Output: The index of the left end of the first matching substring
// or −1 if there are no matches
ShiftTable(P[0..m − 1]) //generate Table of shifts
i ← m − 1 //position of the pattern’s right end
while i ≤ n − 1 do
    k ← 0 //number of matched characters
    while k ≤ m − 1 and P[m − 1 − k] = T[i − k] do
        k ← k + 1
    if k = m
        return i − m + 1
    else i ← i + Table[T[i]]
return −1
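A self-contained Python sketch of Horspool’s algorithm, including a shift-table builder (the BARBER example text is assumed for illustration):

```python
def shift_table(pattern):
    """Horspool shift table: for each character among the pattern's first
    m - 1 characters, the distance from its rightmost such occurrence to
    the pattern's end; any other character shifts by the full length m
    (handled via a default at lookup time)."""
    m = len(pattern)
    table = {}
    for j in range(m - 1):
        table[pattern[j]] = m - 1 - j   # later occurrences overwrite earlier
    return table

def horspool_matching(pattern, text):
    """Return the index of the left end of the first occurrence of
    pattern in text, or -1 if there is none."""
    m, n = len(pattern), len(text)
    table = shift_table(pattern)
    i = m - 1                                  # pattern's right end in text
    while i <= n - 1:
        k = 0                                  # characters matched so far
        while k <= m - 1 and pattern[m - 1 - k] == text[i - k]:
            k += 1
        if k == m:
            return i - m + 1
        i += table.get(text[i], m)             # default shift m
    return -1

print(horspool_matching("BARBER", "JIM_SAW_ME_IN_A_BARBERSHOP"))   # 16
```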
