Ada Notes Mod 3
Properties of a Heap
1. There exists exactly one essentially complete binary tree with n nodes. Its height is equal to ⌊log2 n⌋.
2. The root of a heap always contains its largest element.
[Illustration: bottom-up construction of a heap for the list 2, 9, 7, 6, 5, 8. The double-headed arrows show key comparisons verifying the parental dominance.]
Analysis of efficiency:
Since we already know that the heap construction stage of the algorithm is in O(n), we have to investigate just the time efficiency of the second stage. For the number of key comparisons, C(n), needed for eliminating the root keys from the heaps of diminishing sizes from n to 2, we get the following inequality:
C(n) ≤ 2⌊log2(n − 1)⌋ + 2⌊log2(n − 2)⌋ + . . . + 2⌊log2 1⌋ ≤ 2n log2 n.
For both stages, we get O(n) + O(n log n) = O(n log n). A more detailed analysis shows that the time efficiency of heapsort is, in fact, in Θ(n log n) in both the worst and average cases. Thus, heapsort’s time efficiency falls in the same class as that of mergesort. Unlike the latter, heapsort is in-place, i.e., it does not require any extra storage. Timing experiments on random files show that heapsort runs more slowly than quicksort but can be competitive with mergesort.
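The two stages described above (bottom-up heap construction followed by repeated root deletions) can be sketched in Python as follows; the function and variable names are illustrative, not from the notes:

```python
def heapsort(a):
    """Heapsort sketch: bottom-up heap construction, then n - 1 root removals."""
    n = len(a)

    def sift_down(start, end):
        # Restore parental dominance for the subtree rooted at `start`,
        # considering only the heap elements a[0..end].
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1              # pick the larger child
            if a[root] >= a[child]:
                return                  # dominance holds; subtree is a heap
            a[root], a[child] = a[child], a[root]
            root = child

    # Stage 1: bottom-up heap construction, O(n)
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n - 1)

    # Stage 2: repeated root deletions, O(n log n)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]     # move the current maximum into place
        sift_down(0, end - 1)
    return a
```

Running it on the notes' example list 2, 9, 7, 6, 5, 8 yields the sorted order 2, 5, 6, 7, 8, 9; the sort is in-place, matching the O(1) extra-storage claim above.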
*****
Chapter-IV
Space-Time Tradeoffs
4.8. Introduction
• Space and time trade-offs in algorithm design are a well-known issue for
both theoreticians and practitioners of computing.
• Consider, as an example, the problem of computing values of a function at
many points in its domain. If it is time that is at a premium, we can
precompute the function’s values and store them in a table.
• This is exactly what human computers had to do before the advent of
electronic computers, in the process burdening libraries with thick volumes
of mathematical tables.
• Though such tables have lost much of their appeal with the widespread use
of electronic computers, the underlying idea has proven to be quite useful in
the development of several important algorithms for other problems.
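As a minimal sketch of the precomputation idea (the names and the choice of function are ours, not from the notes), one can trade space for time by tabulating a function's values once and answering every later query with a table lookup:

```python
import math

# Space: store sin at 1-degree steps once, up front.
SIN_TABLE = [math.sin(math.radians(d)) for d in range(360)]

def sin_degrees(d):
    # Time: each query is now an O(1) table lookup instead of
    # re-evaluating the function. Integer degrees only in this sketch.
    return SIN_TABLE[d % 360]
```

This is exactly the "thick volumes of mathematical tables" idea in electronic form: 360 stored values buy constant-time answers for any integer-degree query.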
ALGORITHM ComparisonCountingSort(A[0..n − 1])
//Sorts an array by comparison counting
for i ← 0 to n − 1 do Count[i] ← 0
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i] < A[j]
            Count[j] ← Count[j] + 1
        else Count[i] ← Count[i] + 1
for i ← 0 to n − 1 do S[Count[i]] ← A[i]
return S
ALGORITHM DistributionCountingSort(A[0..n − 1], l, u)
//Sorts an array of integers in the range [l, u] by distribution counting
for j ← 0 to u − l do D[j] ← 0
for i ← 0 to n − 1 do D[A[i] − l] ← D[A[i] − l] + 1
for j ← 1 to u − l do D[j] ← D[j − 1] + D[j]
for i ← n − 1 downto 0 do
    j ← A[i] − l
    S[D[j] − 1] ← A[i]
    D[j] ← D[j] − 1
return S
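A runnable sketch of distribution counting in Python, assuming as above that the inputs are integers between l and u; the variable names mirror the pseudocode:

```python
def distribution_counting_sort(a, l, u):
    """Distribution counting sort for integers in [l, u] (sketch)."""
    d = [0] * (u - l + 1)
    for v in a:                       # compute frequencies
        d[v - l] += 1
    for j in range(1, u - l + 1):     # turn frequencies into distribution
        d[j] += d[j - 1]              # (prefix) values
    s = [None] * len(a)
    for v in reversed(a):             # right-to-left scan keeps the sort stable
        d[v - l] -= 1
        s[d[v - l]] = v
    return s
```

Note the right-to-left scan of A in the final loop, which makes the sort stable; the space cost is the auxiliary arrays D and S, the classic space-for-time trade.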
Horspool’s Algorithm
Case 1: If there are no c’s in the pattern—e.g., c is letter S in our example—we
can safely shift the pattern by its entire length m.
Case 2: If there are occurrences of character c in the pattern but it is not the last
one there—e.g., c is letter B in our example—the shift should align the rightmost
occurrence of c in the pattern with the c in the text.
Case 3: If c happens to be the last character in the pattern but there are no c’s
among its other m − 1 characters, the situation is similar to that of Case 1 and the
pattern should be shifted by the entire pattern’s length m.
Case 4: Finally, if c happens to be the last character in the pattern and there
are other c’s among its first m − 1 characters—e.g., c is letter R in our example—
the situation is similar to that of Case 2 and the rightmost occurrence of c among
the first m − 1 characters in the pattern should be aligned with the text’s c.
Horspool’s algorithm
Step 1: For a given pattern of length m and the alphabet used in both the pattern
and the text, construct the shift table as described above.
Step 2: Align the pattern against the beginning of the text.
Step 3: Repeat the following until either a matching substring is found or the
pattern reaches beyond the last character of the text.
Starting with the last character in the pattern, compare the corresponding
characters in the pattern and text until either all m characters are matched
(then stop) or a mismatching pair is encountered. In the latter case, retrieve
the entry t(c) from the shift table, where c is the text’s character currently
aligned against the last character of the pattern, and shift the pattern by t(c)
characters to the right along the text.
ALGORITHM HorspoolMatching(P[0..m − 1], T [0..n − 1])
//Implements Horspool’s algorithm for string matching
//Output: The index of the left end of the first matching substring
//        or −1 if there are no matches
ShiftTable(P[0..m − 1]) //generate the table of shifts
i ← m − 1 //position of the pattern’s right end
while i ≤ n − 1 do
    k ← 0 //number of matched characters
    while k ≤ m − 1 and P[m − 1 − k] = T [i − k] do
        k ← k + 1
    if k = m
        return i − m + 1
    else i ← i + Table[T [i]]
return −1
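The pseudocode above translates directly into Python; the shift table is kept as a dictionary with a default shift of m for characters that never occur among the pattern's first m − 1 characters (the function names here are ours):

```python
def shift_table(pattern):
    # t(c) = distance from c's rightmost occurrence among the first
    # m - 1 pattern characters to the pattern's last character.
    # Characters absent from those positions fall back to the
    # dictionary's default shift of m (Cases 1 and 3).
    m = len(pattern)
    table = {}
    for j in range(m - 1):
        table[pattern[j]] = m - 1 - j   # later occurrences overwrite earlier
    return table

def horspool_matching(pattern, text):
    """Index of the left end of the first matching substring, or -1."""
    m, n = len(pattern), len(text)
    table = shift_table(pattern)
    i = m - 1                           # position of the pattern's right end
    while i <= n - 1:
        k = 0                           # number of matched characters
        while k <= m - 1 and pattern[m - 1 - k] == text[i - k]:
            k += 1
        if k == m:
            return i - m + 1
        i += table.get(text[i], m)      # default shift m for absent characters
    return -1
```

For the pattern BARBER searched in the text JIM_SAW_ME_IN_A_BARBERSHOP, this returns 16, the position where BARBER begins.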