
Module 1: Analyzing the Efficiency of Algorithms

Dr. Natarajan Meghanathan
Professor of Computer Science
Jackson State University
Jackson, MS 39217
E-mail: [email protected]
What is an Algorithm?
• An algorithm is a sequence of unambiguous instructions for solving a
problem, i.e., for obtaining a required output for any legitimate input in
a finite amount of time.

(Diagram: a problem is solved by designing an algorithm; the algorithm, run on a computer, maps an input to an output.)
• Important Points about Algorithms


– The non-ambiguity requirement for each step of an algorithm
cannot be compromised
– The range of inputs for which an algorithm works has to be
specified carefully.
– The same algorithm can be represented in several different ways
– There may exist several algorithms for solving the same problem.
• Can be based on very different ideas and can solve the problem with
dramatically different speeds
The Analysis Framework
• Time efficiency (time complexity): indicates how fast an algorithm
runs
• Space efficiency (space complexity): refers to the amount of
memory units required by the algorithm in addition to the space
needed for its input and output
• Algorithms that require only a minimal, constant amount of extra space are said to
be in-place.
• The time efficiency of an algorithm is typically expressed as a function of the
input size (one or more input parameters):
– Algorithms that input a collection of values:
• The time efficiency of sorting a list of integers is expressed in terms of the
number of integers (n) in the list.
• For matrix multiplication, the input size is typically the order n of the n×n matrices.
• For graphs, the input size is the number of vertices (V) and edges (E).
– Algorithms that input only one value:
• The time efficiency depends on the magnitude of the integer. In such cases,
the algorithm efficiency is expressed in terms of the number of bits, ⌊log₂n⌋ + 1,
needed to represent the integer n.
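As a quick illustration (a sketch, not from the slides), Python exposes this bit-count measure directly:

import math

n = 1000
# Number of bits needed to represent n: floor(log2(n)) + 1
print(math.floor(math.log2(n)) + 1)   # 10
print(n.bit_length())                 # 10 -- Python's built-in gives the same count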
Units for Measuring Running Time
• The running time of an algorithm should be measured with a unit that is
independent of extraneous factors such as processor speed, quality of
implementation, and compiler.
– At the same time, it is neither practical nor necessary to count the
number of times each operation of an algorithm is performed.
• Basic Operation: The operation contributing the most to the total
running time of an algorithm.
– It is typically the most time consuming operation in the algorithm’s
innermost loop.
• Examples: Key comparison operation; arithmetic operation (division being
the most time-consuming, followed by multiplication)
– We will count the number of times the algorithm’s basic operation is
executed on inputs of size n.
Examples for Input Size and Basic Operations

Problem                                  | Input size measure                            | Basic operation
Searching for a key in a list of n items | Number of the list's items, i.e., n           | Key comparison
Multiplication of two matrices           | Matrix dimensions or total number of elements | Multiplication of two numbers
Checking primality of a given integer n  | n's size = number of digits (in binary representation) | Division
Typical graph problem                    | # vertices and/or edges                       | Visiting a vertex or traversing an edge
Orders of Growth
• We are more interested in the order of growth of the number of times
the basic operation is executed as a function of the input size.
• For smaller inputs, it is difficult to distinguish efficient algorithms
from inefficient ones.
• For example, if the number of basic operations of two algorithms to
solve a particular problem are n and n² respectively, then:
– if n = 3, there is not much difference between requiring
3 basic operations and 9 basic operations; the two algorithms have
about the same running time.
– On the other hand, if n = 10000, it does make a difference whether
the number of times the basic operation is executed is n or n².
(Table of values of common growth functions, including the exponential-growth
functions 2ⁿ and n!; see Table 2.1 in Levitin, 3rd ed.)
Best-case, Average-case, Worst-case
• For many algorithms, the actual running time may depend not only
on the input size but also on the specifics of a particular input.
– For example, sorting algorithms (like insertion sort) may run
faster on an input sequence that is almost sorted than on a
randomly generated input sequence.

• Worst case: Cworst(n) – the maximum number of times the basic
operation is executed over inputs of size n
• Best case: Cbest(n) – the minimum number of times over inputs of size n
• Average case: Cavg(n) – the "average" over inputs of size n
– the number of times the basic operation will be executed on typical
input
– NOT the average of the worst and best cases
– the expected number of basic operations, considered as a random
variable under some assumption about the probability distribution
of all possible inputs
Example for Worst- and Best-Case Analysis: Sequential Search

(The slide's pseudocode is not reproduced here: the algorithm compares the
key K with successive elements A[i] while i < n and A[i] ≠ K; a Python
sketch follows.)
/* Assume the second condition will not be evaluated if the first
condition evaluates to false */

• Worst-Case: Cworst(n) = n
• Best-Case: Cbest(n) = 1
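A Python sketch of the search (assuming the short-circuit behavior the comment above refers to, so A[i] != K is never evaluated once i reaches n):

def sequential_search(A, K):
    """Return the index of K in A, or -1 if K is not present."""
    i = 0
    n = len(A)
    while i < n and A[i] != K:   # the second condition is skipped when i == n
        i += 1
    return i if i < n else -1

print(sequential_search([9, 4, 7, 2], 7))   # 2
print(sequential_search([9, 4, 7, 2], 9))   # 0 (best case: one comparison)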
Probability-based Average-Case Analysis of Sequential Search
• If p is the probability that the search key is present in the list, then (1 − p) is the
probability of not finding it in the list.
• Also, on an n-element list, the probability of finding the search key as
the i-th element in the list is p/n for all values of 1 ≤ i ≤ n.
• Hence, Cavg(n) = [1·(p/n) + 2·(p/n) + … + n·(p/n)] + n·(1 − p) = p(n + 1)/2 + n(1 − p).
• If p = 1 (the element that we search for always exists in the list),
then Cavg(n) = (n + 1)/2. That is, on average, we visit half of the entries
in the list to search for any element in the list.
• If p = 0 (the element that we search for never exists),
then Cavg(n) = n. That is, we visit all the elements in the list.
YouTube Link: https://www.youtube.com/watch?v=8V-bHrPykrE
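A quick empirical check of the p = 1 case (a sketch; the helper name and the uniform-position assumption are ours, not the slide's):

import random

def comparisons(A, K):
    # Count the key comparisons made by sequential search
    count = 0
    for item in A:
        count += 1
        if item == K:
            break
    return count

n = 1001
A = list(range(n))
# p = 1: the key is always present, at a uniformly random position
trials = [comparisons(A, random.choice(A)) for _ in range(10000)]
print(sum(trials) / len(trials))   # close to (n + 1)/2 = 501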
Asymptotic Notations: Intro
2n ≤ 0.05n² for n ≥ 40, hence 2n = O(n²)
0.05n² ≥ 2n for n ≥ 40, hence 0.05n² = Ω(n)

Asymptotic Notations: Intro
2n ≤ 5n for n ≥ 1, hence 2n = O(n)
2n ≥ n for n ≥ 1, hence 2n = Ω(n)
As 2n = O(n) and 2n = Ω(n), we say 2n = Θ(n)
Asymptotic Notations: Formal Intro

t(n) = O(g(n)): t(n) ≤ c·g(n) for all n ≥ n0, where c is a positive
constant (> 0) and n0 is a non-negative integer
t(n) = Ω(g(n)): t(n) ≥ c·g(n) for all n ≥ n0, where c is a positive
constant (> 0) and n0 is a non-negative integer

Note: If t(n) = O(g(n)) ⇒ g(n) = Ω(t(n)); also, if t(n) = Ω(g(n)) ⇒ g(n) = O(t(n))
Asymptotic Notations: Formal Intro

t(n) = Θ(g(n)): c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0,
where c1 and c2 are positive constants (> 0)
and n0 is a non-negative integer
Asymptotic Notations: Examples
• Let t(n) and g(n) be any non-negative functions defined on the set of all
real numbers.
• We say t(n) = O(g(n)) for all functions t(n) that have a lower or the
same order of growth as g(n), within a constant multiple, as n → ∞.
– Examples: 100n + 5 = O(n²); n(n − 1)/2 = O(n²)

• We say t(n) = Ω(g(n)) for all functions t(n) that have a higher or the
same order of growth as g(n), within a constant multiple, as n → ∞.
– Examples: n³ = Ω(n²); n(n − 1)/2 = Ω(n²)

• We say t(n) = Θ(g(n)) for all functions t(n) that have the same order of
growth as g(n), within a constant multiple, as n → ∞.
– Examples: an² + bn + c = Θ(n²) for a > 0;
n² + log n = Θ(n²)
Useful Property of Asymptotic Notations
• If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then
t1(n) + t2(n) ∈ O(max{g1(n), g2(n)})

• If t1(n) ∈ Θ(g1(n)) and t2(n) ∈ Θ(g2(n)), then
t1(n) + t2(n) ∈ Θ(max{g1(n), g2(n)})

• The property can be applied to the Ω notation with a
slight change: replace the Max with the Min.
• If t1(n) ∈ Ω(g1(n)) and t2(n) ∈ Ω(g2(n)), then
t1(n) + t2(n) ∈ Ω(min{g1(n), g2(n)})
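For example (an illustrative Python sketch, not from the slides), an algorithm that first sorts its input (Θ(n log n)) and then makes a single scan (Θ(n)) runs in Θ(max{n log n, n}) = Θ(n log n) overall:

def has_duplicates(A):
    B = sorted(A)                  # Theta(n log n) comparisons
    for i in range(len(B) - 1):    # Theta(n) scan of adjacent pairs
        if B[i] == B[i + 1]:
            return True
    return False

print(has_duplicates([3, 1, 4, 1, 5]))   # True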
Using Limits to Compare Order of Growth

lim n→∞ t(n)/g(n) = 0: t(n) has a smaller order of growth than g(n), so t(n) = O(g(n))
lim n→∞ t(n)/g(n) = c > 0: t(n) has the same order of growth as g(n), so t(n) = Θ(g(n))
lim n→∞ t(n)/g(n) = ∞: t(n) has a larger order of growth than g(n), so t(n) = Ω(g(n))

L'Hopital's Rule: lim n→∞ t(n)/g(n) = lim n→∞ t′(n)/g′(n)
Note: t′(n) and g′(n) are the first-order derivatives of t(n) and g(n)

Stirling's Formula: n! ≈ √(2πn)·(n/e)ⁿ for large n
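If SymPy is available, such limits can be checked mechanically; here is a sketch for the log²n vs. n^(1/3) comparison that is worked out on a later slide:

import sympy as sp

n = sp.symbols('n', positive=True)
# lim (log^2 n) / n^(1/3) as n -> infinity
print(sp.limit(sp.log(n)**2 / n**sp.Rational(1, 3), n, sp.oo))   # 0, so log^2 n = O(n^(1/3))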
Example 1: To Determine the Order of Growth
Example 1: To Determine the Order of Growth (continued…)
Example 2: To Determine the Order of Growth
Example 2: To Determine the Order of Growth (continued…)
Examples to Compare the Order of Growth

Example 3: Compare the order of growth of log²n and log n².
Since log n² = 2 log n, lim n→∞ log²n / log n² = lim n→∞ (log n)/2 = ∞;
hence log²n has a larger order of growth than log n², i.e., log²n = Ω(log n²).
Some More Examples: Order of Growth

• a) (n² + 1)¹⁰: Informally, (n² + 1)¹⁰ ≈ (n²)¹⁰ = n²⁰.
Formally, lim n→∞ (n² + 1)¹⁰ / n²⁰ = lim n→∞ (1 + 1/n²)¹⁰ = 1,
hence (n² + 1)¹⁰ = Θ(n²⁰).

b)

c)

d)
Some More Examples: Order of Growth

The listing of the functions in increasing order of growth is as follows
(the supporting limit computation for log²n vs. n^(1/3) is shown below):

lim n→∞ log²n / n^(1/3)
= lim n→∞ [2 log n · (1/n)] / [(1/3)·n^(−2/3)]   (L'Hopital's Rule)
= lim n→∞ 6 log n / n^(1/3)
= lim n→∞ [6·(1/n)] / [(1/3)·n^(−2/3)]          (L'Hopital's Rule again)
= lim n→∞ 18 / n^(1/3) = 0
Hence, log²n = O(n^(1/3))
Time Efficiency of Non-recursive
Algorithms: General Plan for Analysis
• Decide on parameter n indicating input size
• Identify algorithm’s basic operation
• Determine worst, average, and best cases for input of size
n, if the number of times the basic operation gets executed
varies with specific instances (inputs)
• Set up a sum for the number of times the basic operation is
executed
• Simplify the sum using standard formulas and rules
Useful Summation Formulas and Rules
Σ(l≤i≤u) 1 = 1 + 1 + … + 1 = u − l + 1
In particular, Σ(1≤i≤n) 1 = n − 1 + 1 = n ∈ Θ(n)

Σ(1≤i≤n) i = 1 + 2 + … + n = n(n + 1)/2 ≈ n²/2 ∈ Θ(n²)

Σ(1≤i≤n) i² = 1² + 2² + … + n² = n(n + 1)(2n + 1)/6 ≈ n³/3 ∈ Θ(n³)

Σ(0≤i≤n) aⁱ = 1 + a + … + aⁿ = (aⁿ⁺¹ − 1)/(a − 1) for any a ≠ 1
In particular, Σ(0≤i≤n) 2ⁱ = 2⁰ + 2¹ + … + 2ⁿ = 2ⁿ⁺¹ − 1 ∈ Θ(2ⁿ)

Σ(aᵢ ± bᵢ) = Σaᵢ ± Σbᵢ;  Σc·aᵢ = c·Σaᵢ;  Σ(l≤i≤u) aᵢ = Σ(l≤i≤m) aᵢ + Σ(m+1≤i≤u) aᵢ
Examples on Summation
• 1 + 3 + 5 + 7 + … + 999 = Σ(1≤i≤500) (2i − 1) = 2·(500·501/2) − 500 = 500² = 250,000

• 2 + 4 + 8 + 16 + … + 1024 = Σ(1≤i≤10) 2ⁱ = (2¹¹ − 1) − 2⁰ = 2046
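Both sums are easy to verify (a one-off Python check, not part of the slides):

# 1 + 3 + 5 + ... + 999: the first 500 odd numbers sum to 500^2
print(sum(range(1, 1000, 2)))            # 250000
# 2 + 4 + 8 + ... + 1024 = 2^11 - 2
print(sum(2**i for i in range(1, 11)))   # 2046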
Example 1: Finding Max. Element

(The pseudocode is not reproduced here: the algorithm scans the array once,
comparing each element with the current maximum; a Python sketch follows.)

• The basic operation is the comparison executed on each repetition of
the loop.
• In this algorithm, the number of comparisons is the same for all arrays
of size n.
• The algorithm makes one comparison on each execution of the loop,
which is repeated for each value of the loop's variable i within the
bounds 1 and n − 1 (inclusive). Hence,
C(n) = Σ(1≤i≤n−1) 1 = n − 1 ∈ Θ(n)

Note: Best case = Worst case for this problem
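A Python sketch of the scan (the comparison counter is added here to make C(n) = n − 1 visible; it is not part of the slide):

def max_element(A):
    max_val = A[0]
    comparisons = 0
    for i in range(1, len(A)):   # i runs from 1 to n-1
        comparisons += 1         # one comparison per loop execution
        if A[i] > max_val:
            max_val = A[i]
    return max_val, comparisons

print(max_element([3, 9, 2, 7]))   # (9, 3): n - 1 = 3 comparisons, best = worst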


Example 2: Sequential Key Search

(Pseudocode as on the earlier sequential-search slide.)
/* Assume the second condition will not be evaluated if the first
condition evaluates to false */

• Worst-Case: Cworst(n) = n = Θ(n)
• Best-Case: Cbest(n) = 1
Example 3: Element Uniqueness Problem

Best-case situation:
If the first two elements of the array are the same, then we can exit
after one comparison. Best case = 1 comparison.
Worst-case situation:
• The basic operation is the comparison in the inner loop. The worst
case happens for two kinds of inputs:
– Arrays with no equal elements
– Arrays in which only the last two elements are the pair of equal
elements
Example 3: Element Uniqueness Problem
• For these kinds of inputs, one comparison is made for each repetition
of the innermost loop, i.e., for each value of the loop's variable j
between its limits i + 1 and n − 1; and this is repeated for each value of the
outer loop, i.e., for each value of the loop's variable i between its limits 0
and n − 2. Accordingly, we get:

Cworst(n) = Σ(0≤i≤n−2) Σ(i+1≤j≤n−1) 1 = Σ(0≤i≤n−2) (n − 1 − i) = n(n − 1)/2 = Θ(n²)
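A Python sketch of the two-loop algorithm (the function name is ours):

def unique_elements(A):
    """Return True if all elements of A are distinct."""
    n = len(A)
    for i in range(n - 1):           # i from 0 to n-2
        for j in range(i + 1, n):    # j from i+1 to n-1
            if A[i] == A[j]:         # basic operation: one comparison
                return False
    return True

print(unique_elements([1, 5, 3, 5]))   # False (worst case makes n(n-1)/2 comparisons)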
Example 4: Insertion Sort
• Given an array A[0…n−1], at any time we have the array
divided into two parts: A[0…i−1] and A[i…n−1].
– A[0…i−1] is the sorted part and A[i…n−1] is the unsorted part.
– In each iteration, we pick the element v = A[i] and scan through the
sorted sequence A[0…i−1] to insert v at the appropriate position.
• The scanning proceeds from right to left (i.e., for index j
running from i−1 down to 0) until we find the right position for v.
• During this scanning process, v = A[i] is compared with A[j].
• If A[j] > v, then v has to be placed somewhere before A[j] in the
final sorted sequence. So, A[j] cannot stay at its current position (in
the final sorted sequence) and has to move at least one position to
the right. So, we copy A[j] to A[j+1] and decrement the index j, so
that we now compare v with the next element to the left.
• If A[j] ≤ v, we have found the right position for v; we copy v to
A[j+1]. This also provides the stability property, in case v = A[j].
A Python sketch follows.
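(a minimal sketch of the scheme just described; in-place and stable)

def insertion_sort(A):
    for i in range(1, len(A)):
        v = A[i]                      # element to insert into the sorted part A[0..i-1]
        j = i - 1
        while j >= 0 and A[j] > v:    # basic operation: the comparison A[j] > v
            A[j + 1] = A[j]           # shift A[j] one position to the right
            j -= 1
        A[j + 1] = v                  # A[j] <= v (or j < 0): v's final position
    return A

print(insertion_sort([45, 23, 8, 12, 90, 21]))   # [8, 12, 21, 23, 45, 90]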
Insertion Sort: Pseudo Code and Analysis

The comparison A[j] > v is the basic operation.

Best Case (if the array is already sorted): the element v at A[i] is just
compared with A[i−1]; since A[i−1] ≤ A[i] = v, we retain v at A[i] itself and
do not scan the rest of the sequence A[0…i−1]. There is only one comparison
for each value of index i:

Cbest(n) = Σ(1≤i≤n−1) 1 = (n − 1) − 1 + 1 = n − 1 = Θ(n)

Worst Case (if the array is reverse-sorted): the element v at A[i] has to be moved
all the way to index 0, by scanning through the entire sequence A[0…i−1]:

Cworst(n) = Σ(1≤i≤n−1) Σ(0≤j≤i−1) 1 = Σ(1≤i≤n−1) [(i − 1) − 0 + 1] = Σ(1≤i≤n−1) i = n(n − 1)/2 = Θ(n²)
Insertion Sort: Analysis and Example
Average Case: on average, for a random input sequence, we visit half
of the sorted sequence A[0…i−1] to put A[i] at the proper position:

Cavg(n) = Σ(1≤i≤n−1) [(i − 1)/2 + 1] = Σ(1≤i≤n−1) (i + 1)/2 ≈ n²/4 = Θ(n²)
Example: Given (initial) sequence: 45 23 8 12 90 21

Iteration 1 (v = 23): 45 45 8 12 90 21 → 23 45 8 12 90 21
Iteration 2 (v = 8):  23 45 45 12 90 21 → 23 23 45 12 90 21 → 8 23 45 12 90 21
Iteration 3 (v = 12): 8 23 45 45 90 21 → 8 23 23 45 90 21 → 8 12 23 45 90 21
Iteration 4 (v = 90): 8 12 23 45 90 21 (90 is already in place after one comparison)
Iteration 5 (v = 21): 8 12 23 45 90 90 → 8 12 23 45 45 90 → 8 12 23 23 45 90 → 8 12 21 23 45 90

(On the original slide the sorted prefix is colored and the element at index j
is circled; here, each arrow shows one copy of A[j] to A[j+1] or the final
placement of v.)
Time Efficiency of Recursive
Algorithms: General Plan for Analysis
• Decide on a parameter indicating an input’s size.

• Identify the algorithm’s basic operation.

• Check whether the number of times the basic op. is executed may vary
on different inputs of the same size. (If it may, the worst, average, and
best cases must be investigated separately.)

• Set up a recurrence relation with an appropriate initial condition


expressing the number of times the basic op. is executed.

• Solve the recurrence (or, at the very least, establish its solution’s order
of growth) by backward substitutions or another method.
Recursive Evaluation of n!
Definition: n! = 1 ∗ 2 ∗ … ∗ (n−1) ∗ n for n ≥ 1, and 0! = 1
• Recursive definition of n!: F(n) = F(n−1) ∗ n for n ≥ 1, and
F(0) = 1
• Let M(n) be the number of multiplications made to compute F(n):
M(n) = M(n−1) + 1 for n > 0, with M(0) = 0

Backward substitution:
M(n) = M(n−1) + 1 = [M(n−2) + 1] + 1 = M(n−2) + 2 = [M(n−3) + 1] + 2 = M(n−3) + 3
= … = M(n−n) + n = M(0) + n = n
Overall time complexity: Θ(n)
YouTube Link: https://www.youtube.com/watch?v=K25MWuKKYAY
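A direct Python transcription of the recursive definition (a sketch):

def factorial(n):
    """F(n) = F(n-1) * n for n >= 1, F(0) = 1."""
    if n == 0:
        return 1
    return factorial(n - 1) * n   # one multiplication per call: M(n) = M(n-1) + 1

print(factorial(5))   # 120, computed with M(5) = 5 multiplications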
Counting the # Bits of an Integer

#bits(n) = #bits(⌊n/2⌋) + 1 for n > 1
#bits(1) = 1

Either the division or the addition could be considered the
basic operation, as both are executed once for each
recursion. We will treat the addition as the basic operation.
Let A(n) be the number of additions needed to compute #bits(n):

# Additions: A(n) = A(⌊n/2⌋) + 1 for n > 1
Since the recursive calls end when n is equal to 1 and no additions are
made there, the initial condition is: A(1) = 0.
Counting the # Bits of an Integer
Solution Approach: if we use the backward substitution method (as in
the previous two examples), we get stuck for values of n that are not powers
of 2. We proceed by setting n = 2ᵏ for k ≥ 0.
New recurrence relation to solve: A(2ᵏ) = A(2ᵏ⁻¹) + 1 for k > 0, with A(2⁰) = 0.
Backward substitution gives A(2ᵏ) = k, i.e., A(n) = log₂n ∈ Θ(log n).
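A Python sketch of the recursion (integer division stands in for ⌊n/2⌋):

def num_bits(n):
    """#bits(n) = #bits(n // 2) + 1 for n > 1; #bits(1) = 1."""
    if n == 1:
        return 1
    return num_bits(n // 2) + 1   # one addition per call: A(n) = A(n // 2) + 1

print(num_bits(8))    # 4 (1000 in binary)
print(num_bits(13))   # 4 (1101 in binary)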
Examples for Solving Recurrence Relations
Master Theorem to Solve Recurrence Relations
• Assuming that size n is a power of b to simplify the analysis, we have the
following recurrence for the running time: T(n) = a·T(n/b) + f(n)
– where f(n) is a function that accounts for the time spent on dividing an
instance of size n into instances of size n/b and combining their solutions.
• Master Theorem: if f(n) ∈ Θ(n^d) with d ≥ 0, then
T(n) ∈ Θ(n^d) if a < b^d
T(n) ∈ Θ(n^d log n) if a = b^d
T(n) ∈ Θ(n^(log_b a)) if a > b^d
The same results hold good for O and Ω too.

Examples:
1) T(n) = 4T(n/2) + n: a = 4, b = 2, d = 1 ⇒ a > b^d ⇒ T(n) = Θ(n^(log₂4)) = Θ(n²)
2) T(n) = 4T(n/2) + n²: a = 4, b = 2, d = 2 ⇒ a = b^d ⇒ T(n) = Θ(n² log n)
3) T(n) = 4T(n/2) + n³: a = 4, b = 2, d = 3 ⇒ a < b^d ⇒ T(n) = Θ(n³)
4) T(n) = 2T(n/2) + 1: a = 2, b = 2, d = 0 ⇒ a > b^d ⇒ T(n) = Θ(n^(log₂2)) = Θ(n)
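The three cases are mechanical enough to encode in a small helper (a hypothetical utility, not from the slides):

import math

def master_theorem(a, b, d):
    """Theta-class of T(n) = a*T(n/b) + Theta(n^d), per the Master Theorem."""
    if a < b**d:
        return f"Theta(n^{d})"
    if a == b**d:
        return f"Theta(n^{d} log n)"
    return f"Theta(n^{math.log(a, b):g})"   # a > b^d: Theta(n^(log_b a))

print(master_theorem(4, 2, 1))   # Theta(n^2)
print(master_theorem(4, 2, 2))   # Theta(n^2 log n)
print(master_theorem(2, 2, 0))   # Theta(n^1)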
Master Theorem: More Problems
Space-Time Tradeoff
In-place vs. Out-of-place Algorithms
• An algorithm is said to be "in-place" if it uses a minimal and/or constant
amount of extra storage space to transform or process an input to obtain
the desired output.
– Depending on the nature of the problem, an in-place algorithm may
sometimes overwrite the input with the desired output as the algorithm
executes (as in the case of in-place sorting algorithms); the output
space may sometimes be a constant (for example, in the case of
string-matching algorithms).
• Algorithms that use a significant amount of extra storage space
(sometimes additional space as large as the input
– example: merge sort) are said to be out-of-place in nature.
• Time-Space Complexity Tradeoffs of Sorting Algorithms:
– In-place sorting algorithms like Selection Sort, Bubble Sort, Insertion Sort
and Quick Sort have a worst-case time complexity of Θ(n²).
– On the other hand, Merge Sort has a space complexity of Θ(n), but a
worst-case time complexity of Θ(n log n).
Hashing
• A very efficient method for implementing a dictionary, i.e., a set with
the operations: find, insert and delete
• Based on representation-change and space-for-time tradeoff ideas
• We consider the problem of implementing a dictionary of n records with
keys K1, K2, …, Kn.
• Hashing is based on the idea of distributing keys among a one-
dimensional array H[0…m−1] called a hash table.
– The distribution is done by computing, for each of the keys, the value of
some pre-defined function h called the hash function.
– The hash function assigns to a key an integer between 0 and m−1, called
the hash address.
– The size m of a hash table is typically a prime integer.
• Typical hash functions
– For non-negative integer keys, a hash function could be h(K) = K mod m.
– If the keys are letters of some alphabet, the position of the letter in the
alphabet (for example, A is at position 1 in the alphabet A–Z) could be used as
the key for the hash function defined above.
– If the key is a character string c0 c1 … cs−1 of characters from an alphabet,
then the hash function could combine the characters' values position by
position (a sketch follows).
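The slide's exact formula is not reproduced here; one common choice (a Horner's-rule sketch, with an arbitrary multiplier) is:

def string_hash(s, m):
    """Hash the character string s into an address in 0..m-1."""
    h = 0
    for c in s:
        h = (h * 31 + ord(c)) % m   # 31 is an arbitrary small multiplier
    return h

print(string_hash("KEY", 13))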
Collisions and Collision Resolution
• If h(K1) = h(K2), there is a collision.
• Good hash functions result in fewer collisions,
but some collisions should be expected.
• In this module, we will look at open hashing, which
works for arrays of any size, irrespective of the
hash function.
Open Hashing
• Inserting into and deleting from the hash table have the same
complexity as searching.
• If the hash function distributes the keys uniformly, the average length of
a linked list will be α = n/m. This ratio is called the load factor.
• The average-case number of key comparisons for a successful search
is α/2; the average-case number of key comparisons for an
unsuccessful search is α.
• The worst-case number of key comparisons is Θ(n) – this occurs if all
n elements hash to the same index and end up in a single linked list.
To avoid this, we need to be careful in selecting a proper
hash function.
– Mod-based hash functions with a prime integer as the divisor are more
likely to result in hash values that are evenly distributed across the keys.
• Open hashing still works if the number of keys n exceeds the size of
the hash table m.
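A compact Python sketch of open hashing with separate chaining (Python lists stand in for the linked lists; the class name is ours):

class OpenHashTable:
    def __init__(self, m=11):                   # m is typically a prime
        self.m = m
        self.table = [[] for _ in range(m)]     # one chain per hash address

    def insert(self, key):
        chain = self.table[key % self.m]
        if key not in chain:                    # search first; same cost as a lookup
            chain.append(key)

    def search(self, key):
        return key in self.table[key % self.m]  # walks the chain at the key's address

H = OpenHashTable(m=5)
for key in [11, 1, 13, 21, 3, 7]:
    H.insert(key)
print(H.search(3), H.search(4))   # True False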
Applications of Hashing (1)
Finding whether an array is a Subset of another array
• Given two arrays AL (larger array) and AS (smaller array) of distinct
elements, we want to find whether AS is a subset of AL.
• Example: AL = {11, 1, 13, 21, 3, 7}; AS = {11, 3, 7, 1}; AS is a subset of AL.
• Solution: Use (open) hashing. Hash the elements of the larger array; then,
for each element in the smaller array, search for it in the hash table of
the larger array. If even one element of the smaller array is not in
the larger array, we can stop!
• Time complexity:
– Θ(n) to construct the hash table on the larger array of size n, and another Θ(n)
to search for the elements of the smaller array.
– A brute-force approach would have taken Θ(n²) time.
• Space complexity: Θ(n) with the hash table approach and Θ(1) with the
brute-force approach.

• Note: The above solution could also be used to find whether two sets are
disjoint or not. If even one element of the smaller array is present in the
larger array, we can stop (they are not disjoint)!
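A sketch of the subset test, using Python's built-in hash-based set in place of the hand-built table:

def is_subset(AL, AS):
    H = set(AL)                      # Theta(n) expected time to build
    return all(x in H for x in AS)   # Theta(1) expected time per probe

print(is_subset([11, 1, 13, 21, 3, 7], [11, 3, 7, 1]))   # True
print(is_subset([11, 1, 13, 21, 3, 7], [11, 3, 7, 4]))   # False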
Applications of Hashing (1)
Finding whether an array is a Subset of another array
• Example 1: AL = {11, 1, 13, 21, 3, 7}; AS = {11, 3, 7, 1}; AS is a subset of AL.
• Let H(K) = K mod 5. The hash table for AL (chains per index 0–4):
0: –    1: 11 → 1 → 21    2: 7    3: 13 → 3    4: –
Hash table approach:
# comparisons = 1 (for 11) + 2 (for 3) + 1 (for 7) + 2 (for 1) = 6
Brute-force approach: pick every element in the smaller array and do a linear
search for it in the larger array.
# comparisons = 1 (for 11) + 5 (for 3) + 6 (for 7) + 2 (for 1) = 14

• Example 2: AL = {11, 1, 13, 21, 3, 7}; AS = {11, 3, 7, 4}; AS is NOT a subset of AL.
• Let H(K) = K mod 5. The hash table approach would take just
1 (for 11) + 2 (for 3) + 1 (for 7) + 0 (for 4) = 4 comparisons.
The brute-force approach would take: 1 (for 11) + 5 (for 3) + 6 (for 7) + 6 (for 4)
= 18 comparisons.
Applications of Hashing (1)
Finding whether two arrays are disjoint or not
• Example 1: AL = {11, 1, 13, 21, 3, 7}; AS = {22, 25, 27, 28}; they are disjoint.
• Let H(K) = K mod 5. The hash table for AL (chains per index 0–4):
0: –    1: 11 → 1 → 21    2: 7    3: 13 → 3    4: –
Hash table approach:
# comparisons = 1 (for 22) + 0 (for 25) + 1 (for 27) + 2 (for 28) = 4
Brute-force approach: pick every element in the smaller array and do a linear
search for it in the larger array.
# comparisons = 6 comparisons for each element × 4 elements = 24

• Example 2: AL = {11, 1, 13, 21, 3, 7}; AS = {22, 25, 27, 1}; they are NOT disjoint.
• Let H(K) = K mod 5. The hash table approach would take just
1 (for 22) + 0 (for 25) + 1 (for 27) + 2 (for 1) = 4 comparisons.
The brute-force approach would take: 6 (for 22) + 6 (for 25) + 6 (for 27) + 2 (for 1)
= 20 comparisons.
Applications of Hashing (2)
Finding Consecutive Subsequences in an Array
• Given an array A of unique integers, we want to find the
contiguous subsequences of length 2 or above, as well as the
length of the largest such subsequence.
• Assume it takes Θ(1) time to insert or search for an element
in the hash table.

Example: A = 36 41 56 35 44 33 34 92 43 32 42, with H(K) = K mod 7.
Hash table (chains per index 0–6):
0: 56 → 35 → 42    1: 36 → 92 → 43    2: 44    3: –    4: 32    5: 33    6: 41 → 34
For each element A[i], we first search for A[i] − 1
(i.e., 35, 40, 55, 34, 43, 32, 33, 91, 42, 31, 41); for each element that starts
a run (40, 55, 91 and 31 are absent), we then search for successive values
(42, 43, 44, 45; 57; 93; 33, 34, 35, 36, 37).
The contiguous subsequences of length ≥ 2 are 41 42 43 44 and
32 33 34 35 36; the largest length is 5.
Applications of Hashing (2)
Finding Consecutive Subsequences in an Array
• Algorithm (a Python sketch follows the pseudocode)
Insert the elements of A in a hash table H
Largest Length = 0
for i = 0 to n−1 do
    if (A[i] − 1 is not in H) then   // A[i] is the first element of a possible contiguous subsequence
        j = A[i] + 1
        while (j is in H) do         // L searches in the hash table H for a subsequence of length L
            j = j + 1
        end while
        if (j − A[i] > 1) then       // we have found a contiguous subsequence of length > 1
            Print all integers from A[i] … (j − 1)
            if (Largest Length < j − A[i]) then
                Largest Length = j − A[i]
            end if
        end if
    end if
end for
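The same algorithm in Python (a sketch; a built-in set plays the role of the hash table H):

def consecutive_subsequences(A):
    """Print maximal runs of consecutive integers (length >= 2); return the largest length."""
    H = set(A)                       # insert the elements of A: Theta(n) expected
    largest = 0
    for x in A:
        if x - 1 not in H:           # x is the first element of a possible run
            j = x + 1
            while j in H:            # L searches for a run of length L
                j += 1
            if j - x > 1:            # found a contiguous subsequence of length > 1
                print(list(range(x, j)))
                largest = max(largest, j - x)
    return largest

print(consecutive_subsequences([36, 41, 56, 35, 44, 33, 34, 92, 43, 32, 42]))
# prints [41, 42, 43, 44] and [32, 33, 34, 35, 36]; returns 5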
Applications of Hashing (2)
Finding Consecutive Subsequences in an Array
• Time Complexity Analysis
• For each element at index i in the array A, we do at least one search
(for element A[i] − 1) in the hash table.
• For every element that is the first element of a subsequence of length 1 or
above (say length L), we do L searches in the hash table.
• The sum of all such L's is n.
• For an array of size n, we therefore do n + n = 2n = Θ(n) hash searches. The first
'n' is the sum of the lengths of all the contiguous subsequences, and the
second 'n' is the sum of all the 1's (one search for A[i] − 1 per element
of the array).
