Unit 1
What is an Algorithm?
The word “algorithm” comes from the name of the Persian mathematician Abu Ja’far
Mohammed ibn Musa al Khowarizmi.
Procedures that are definite and effective, but need not terminate, are called
computational procedures. A popular example is an operating system, which does not
terminate but waits for the next job.
To prove an algorithm correct, the solution is stated in two forms: one as a program,
the other as a specification of the task. The proof then consists of showing that, for
every legal input, these two forms are equivalent.
Profiling: It is the process of executing a correct program on data sets and
measuring the time and space it takes to compute the results.
This helps in determining the efficiency of the program.
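The measurement itself needs nothing more than the standard C library; the following
is a minimal, illustrative sketch (the workload sum_array is a made-up stand-in, not
an algorithm from these notes):

#include <stdio.h>
#include <time.h>

/* Hypothetical workload to be profiled (a stand-in, not from the notes). */
static long sum_array(const long a[], long n)
{
    long s = 0;
    for (long i = 0; i < n; i++) s += a[i];
    return s;
}

int main(void)
{
    enum { N = 1000000 };
    static long a[N];                 /* static: avoids a huge stack frame */
    for (long i = 0; i < N; i++) a[i] = i;

    clock_t start = clock();
    long s = sum_array(a, N);
    clock_t stop = clock();

    printf("sum = %ld, time = %f s\n", s,
           (double)(stop - start) / CLOCKS_PER_SEC);
    return 0;
}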
Algorithm Specification
Pseudocode Conventions:
An algorithm can be expressed or described in many ways; the following two are
popular :
1> Using a natural language like English – the resulting instructions may be
ambiguous, i.e., not definite.
2> Using graphic representations (flowcharts) – suitable only if the algorithm is
small and simple.
So, a pseudocode that resembles the syntax of Pascal, C and/or C++ is used to
express algorithms. The conventions used are :
1> Comments begin with // and continue until the end of line.
2> A simple statement is delimited by a semicolon (;).
A collection of simple statements, called a compound statement, can be
represented as a block.
The body of a procedure is also represented as a block.
Blocks are indicated with matching braces { and }.
Ex:
{ simple statement 1;
simple statement 2; …
simple statement n;
}
3> An identifier or a variable-name begins with a letter.
From the context, the data type and scope of the variable is evident. So, they are
not explicitly specified.
The assumed simple data types are integer, float, char, boolean and so on.
The compound data types can be formed with records, as shown below:
node = record
{
    datatype_1 data_1;
    .
    .
    .
    datatype_n data_n;
    node *link;
}
Individual data items of a record can be accessed with → (arrow) and . (period).
For example, if ‘p’ is a pointer to a record of type node, then p→data_1 stands for
the value of the first field. If ‘q’ is a record of type node, then q.data_1 denotes
the value of the first field.
7> The looping statements for, while and repeat-until are used to repeat
statement(s).
while: The while loop is of the following form:
while condition do
{
    statement 1;
    .
    .
    .
    statement n;
}

Ex:
while (j < 10) do
{
    sum := sum + j;
    j := j + 1;
}
The statement(s) in the body can be simple or compound.
The value of ‘condition’ is evaluated at the top of the loop; if it is true, the loop
body is executed, otherwise the loop is exited.
for: The for loop is of the following form:
for variable := value1 to value2 step modifier do
{
    statement 1;
    .
    .
    .
    statement n;
}

Ex:
for i := 1 to 10 step 2 do
{
    sum := sum + i;
}
The value1, value2, and modifier are arithmetic expressions (constants included).
The condition (variable ≤ value2) is tested at the beginning of each iteration; if it
is true, the loop continues, otherwise it is exited.
The clause ‘step modifier’ is optional; if it is absent, an increment of +1 is
assumed. The modifier can be either positive or negative.
The break instruction can be used within loop statements to force an exit. In the
case of nested loops, break exits the innermost loop.
The return statement results in exiting not only from the loops but also from the
function itself.
8> The then and/or else part of an if statement can contain another if statement;
this is called a nested if statement and is used for multiple-condition checking.
For multiple condition checking, case statement is used and its form is:
case
{
: condition 1 : statement 1;
: condition 2 : statement 2;
.
.
.
: condition n : statement n;
: else : statement n+1;
}
9> The read and write instructions are used for input and output operations
respectively. No format is used to specify the type and size of input and output
quantities.
10> Algorithm is the only type of procedure used. An algorithm consists of two
parts, namely the header and the body.
Simple variables are passed by value and, the arrays & records are passed by
reference i.e., as a pointer to the respective data type.
An algorithm may or may not return any value.
Ex : An algorithm for finding and returning the maximum of n given numbers :
Algorithm Max (A, n)
{
Maxi := A[1];
for i:= 2 to n do
if A[i] > Maxi then Maxi := A[i];
return Maxi;
}
Here, A and n are formal parameters. Maxi and i are local variables.
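To relate these conventions to a real language, here is a rough C rendering of Max
(an added illustration, using 0-based indexing): the array is passed as a pointer
(i.e., by reference), while the simple variable n is passed by value.

/* C sketch of Algorithm Max: a[] is passed by reference (as a pointer),
   n by value; indices run 0..n-1 instead of 1..n. */
float max_of(const float a[], int n)
{
    float maxi = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > maxi) maxi = a[i];
    return maxi;
}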
Selection Sort Example (With Minimum Element)

Initial : 8 4 6 9 2 3 1
Pass 1 :  1 4 6 9 2 3 8
Pass 2 :  1 2 6 9 4 3 8
Pass 3 :  1 2 3 9 4 6 8
Pass 4 :  1 2 3 4 9 6 8
Pass 5 :  1 2 3 4 6 9 8
Pass 6 :  1 2 3 4 6 8 9 (sorted)
Selection Sort Example (With Maximum Element)
Before sorting 14 2 10 5 1 3 17 7
After pass 1 14 2 10 5 1 3 7 17
After pass 2 7 2 10 5 1 3 14 17
After pass 3 7 2 3 5 1 10 14 17
After pass 4 1 2 3 5 7 10 14 17
Ex: Selection Sort – An algorithm for sorting a collection of n ≥ 1 elements of
arbitrary type.
A simple solution can be stated as:
From those elements that are currently unsorted, find the smallest and
place it next in the sorted list.
Even though the above statement adequately describes the sorting, it is not an
algorithm: it leaves questions unanswered, such as where and how the elements are
stored and where the sorted elements are to be placed.
Now, let us assume that the elements are stored in an array a, such that the ith
element is stored at a[i], 1 ≤ i ≤ n.
With the above assumptions the modified algorithm is:
for i := 1 to n do
{
Examine a[i] to a[n] to find the smallest element, let it be at a[j];
Interchange a[i] and a[j];
}
The above algorithm, involves two subtasks:
1> Finding the smallest element (say a[j]).
2> Interchanging it with a[i].
For the first subtask :
- Initially assume a[i] is the minimum.
- Now compare it with a[i+1], a[i+2], …; whenever a smaller element is found,
make it the new minimum.
- The above process is continued till a[n] is compared. Let the minimum element
be a[j].
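Combining the two subtasks gives selection sort in full; the following is an added,
illustrative C sketch (0-based indexing rather than the 1-based pseudocode):

/* Selection sort: repeatedly find the minimum of the unsorted part
   (subtask 1) and interchange it with a[i] (subtask 2). */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int j = i;                           /* assume a[i] is the minimum */
        for (int k = i + 1; k < n; k++)      /* scan a[i+1..n-1] */
            if (a[k] < a[j]) j = k;
        int t = a[i]; a[i] = a[j]; a[j] = t; /* interchange a[i] and a[j] */
    }
}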
(Figure: during sorting, an input array – e.g. 5 2 4 6 1 3 – is divided into a
sorted part and an unsorted part.)
Bubble Sort
- Compare each element (except the last one) with its neighbor to the right.
- If they are out of order, swap them.
- This puts the largest element at the very end.
- The last element is now in its correct and final place.
Pass 1 : 7 2 8 5 4 → 2 7 8 5 4 → 2 7 5 8 4 → 2 7 5 4 8 (8 in place)
Pass 2 : 2 7 5 4 8 → 2 5 7 4 8 → 2 5 4 7 8 (7 in place)
Pass 3 : 2 5 4 7 8 → 2 4 5 7 8 (5 in place)
Pass 4 : 2 4 5 7 8 (done)
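An added, illustrative C sketch of the pass structure traced above (0-based
indexing assumed):

/* Bubble sort: after pass p, the p largest elements occupy their
   final positions at the end of the array. */
void bubble_sort(int a[], int n)
{
    for (int last = n - 1; last > 0; last--)   /* one pass per element */
        for (int i = 0; i < last; i++)         /* compare neighbours */
            if (a[i] > a[i + 1]) {             /* out of order: swap */
                int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t;
            }
}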
Recursive Algorithms:
There are two types of recursion:
1> Direct recursion – an algorithm calls itself.
2> Indirect recursion – an algorithm calls another algorithm, which in turn
(perhaps through a chain of calls) calls the first.
Compute 5!

Using f(n) = n · f(n−1) with f(1) = 1, the recursive calls expand and then unwind:

f(5) = 5 · f(4)
f(4) = 4 · f(3)
f(3) = 3 · f(2)
f(2) = 2 · f(1)
f(1) = 1
f(2) = 2 · 1 = 2
f(3) = 3 · 2 = 6
f(4) = 4 · 6 = 24
f(5) = 5 · 24 = 120

Return 5! = 120.
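The trace corresponds to the following direct recursion, sketched here in C for
concreteness (an added illustration):

/* f(n) = n * f(n-1), f(1) = 1 -- the recursion traced above. */
long factorial(long n)
{
    if (n <= 1) return 1;        /* base case: f(1) = 1 */
    return n * factorial(n - 1); /* recursive case */
}
/* factorial(5) unwinds exactly as shown above: 5 * 24 = 120. */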
Fibonacci Numbers without Recursion

Algorithm fib(n)
{
    f[0] := 0; f[1] := 1;
    for i := 2 to n do
        f[i] := f[i-1] + f[i-2];
    return f[n];
}
Fibonacci Numbers
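For contrast with the iterative algorithm above, here is an added recursive sketch
in C; each call spawns two further calls, so it runs in exponential time, whereas
the loop above is linear in n.

/* Recursive Fibonacci: fib(n) = fib(n-1) + fib(n-2). Simple but
   exponential-time, unlike the linear-time iterative algorithm above. */
long fib(int n)
{
    if (n <= 1) return n;            /* fib(0) = 0, fib(1) = 1 */
    return fib(n - 1) + fib(n - 2);
}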
Now, to develop recursive algorithms, let us consider the following two problems :
1> Towers of Hanoi problem
2> Permutation Generator problem – generates all possible permutations of a list
of elements.
1> Towers of Hanoi Problem : This problem is based on the ancient Tower of
Brahma ritual.
According to this :
- When the world was created, there was a diamond tower (A) with 64 golden disks.
- The disks were placed on the tower in decreasing order of size from bottom to top.
- Besides this tower, there were two other diamond towers (B & C).
- Since the creation of the world, the Brahman priests have been attempting to move
the disks from A to B, using C for intermediate storage.
- Since the disks are very heavy, they can be moved only one at a time.
- At no time can a disk be placed on top of a smaller disk.
- According to the legend, the world will come to an end when the priests complete
the task.
An efficient algorithm for this problem can be obtained using recursion.
- Assume that there are n disks. To get the largest disk to the bottom of B, the
remaining n – 1 disks are first moved to C.
- Then the largest disk is moved from A to B, and the remaining n – 1 disks have
to be moved from C to B, using A for intermediate storage. A sketch of this
scheme in code follows.
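The recursive idea translates directly into code; an added, illustrative C sketch
(printing one line per move):

#include <stdio.h>

/* Move n disks from tower 'from' to tower 'to', using 'via' for
   intermediate storage. */
void hanoi(int n, char from, char to, char via)
{
    if (n == 0) return;
    hanoi(n - 1, from, via, to);       /* move n-1 disks out of the way */
    printf("move disk %d: %c -> %c\n", n, from, to); /* the largest disk */
    hanoi(n - 1, via, to, from);       /* bring the n-1 disks back on top */
}
/* hanoi(3, 'A', 'B', 'C') prints the 7 moves; n disks need 2^n - 1 moves. */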
2> Permutation Generator Problem : For a given set of n elements, there will be n!
different permutations. This problem can be defined recursively.
Let us take a set of four elements {a, b, c, d}. The solution can be defined as :
1> a followed by all the permutations of (b, c, d).
2> b followed by all the permutations of (a, c, d).
3> c followed by all the permutations of (a, b, d).
4> d followed by all the permutations of (a, b, c).
The recursive algorithm for this problem is :
Algorithm Perm(a, k, n)
// Generates all permutations of a[k : n].
{
    if (k = n) then write (a[1 : n]); // output one complete permutation
    else
        for j := k to n do
        {
            t := a[k]; a[k] := a[j]; a[j] := t; // place a[j] in position k
            Perm(a, k + 1, n);                  // permute a[k+1 : n]
            t := a[k]; a[k] := a[j]; a[j] := t; // undo the interchange
        }
}
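An added, illustrative C rendering of Perm (0-based, specialized to character
arrays so the permutations are easy to print):

#include <stdio.h>

static void swap(char a[], int i, int j)
{
    char t = a[i]; a[i] = a[j]; a[j] = t;
}

/* Print all permutations of a[k..n-1]; call with k = 0 for the full set. */
void perm(char a[], int k, int n)
{
    if (k == n - 1) { printf("%.*s\n", n, a); return; }
    for (int j = k; j < n; j++) {
        swap(a, k, j);          /* choose a[j] as the k-th element */
        perm(a, k + 1, n);      /* permute the remaining elements */
        swap(a, k, j);          /* undo the choice (backtrack) */
    }
}
/* Example: char s[] = "abcd"; perm(s, 0, 4); prints 4! = 24 lines. */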
Performance Analysis
The evaluation of an algorithm can be based upon many criteria, some of which are :
1> Does it do what we want it to do?
2> Does it work correctly according to the original specifications of the task / problem?
3> Is there documentation that describes how to use it and how it works?
4> Are procedures created in such a way that they perform logical sub-functions?
5> Is the code readable?
The above criteria are very important in writing software, especially for large
systems. In what follows, these criteria are assumed to be met by our algorithms.
The other criteria, which have a more direct relationship to evaluate the
performance of algorithms are :
- Computing time
- Storage requirements.
Space Complexity :
The space needed by an algorithm, is the sum of the following two components :
1> A fixed part, independent of the characteristics such as the size and number of
inputs and outputs. It includes :
- The instruction space
- Space for simple variables
- Space for fixed-size compound variables
- Space for constants and so on.
2> A variable part, is the space needed by component variables, whose size is
dependent on the instance of a problem being solved. It includes :
- The space needed by the referenced variables
- The recursion stack space.
Ex : Consider a simple algorithm that computes an expression from three given
values a, b, and c. An instance uses only the specific values of a, b, and c; if one
word is required for each variable, the total space needed is at most a small
constant (≤ 5), with no variable part.
For the iterative algorithm Sum, which adds n numbers stored in an array, the
variable part is SSum(n) = n (the array itself), so the total space needed is
S(Sum) ≥ (n + 3), counting one word each for n, i, and s.
Ex : Write an algorithm that computes a[1] + a[2] + … + a[n] recursively, where
the a[i]s are real numbers. Also estimate the space requirements.
Algorithm SumR(a, n)
{
    if (n ≤ 0) then return 0.0;
    else return SumR(a, n-1) + a[n];
}
The problem instances for this algorithm are characterized by the value of n.
The recursion stack space consists of the space for :
- Formal parameters
- Local variables
- The return address (assume one word)
Each call to SumR thus requires at least three words (for n, the return address,
and a pointer to a[]); since the depth of recursion is n + 1, the recursion stack
space needed is ≥ 3(n + 1).
Time Complexity :
The time T(P) taken by a program P is the sum of its compile time and its run (or
execution) time. Since a compiled program can be executed many times without
recompilation, only the execution time is considered; it is denoted by tP (instance
characteristics).
So, only an estimate of tP can be made, and the expression is :
tP (n) = Ca ADD(n) + Cs SUB(n) + Cm MUL(n) + Cd DIV(n) + …
where
- n denotes the instance characteristics.
- Ca, Cs, Cm, Cd, and so on, denote time needed for an addition, subtraction,
multiplication, division, and so on respectively.
- ADD, SUB, MUL, DIV, and so on are functions.
In practice such a detailed estimate is hard to obtain, so a global variable count
is introduced and incremented each time a statement is executed. For the algorithm
Sum, the count-incrementing version can be simplified so that count is increased
only by the total contribution of each program segment, as shown below:
Algorithm Sum(a, n)
{
    for j := 0 to n-1 do
        count := count + 2;
    count := count + 3;
}
Ex : Now let us consider the algorithm SumR, which recursively finds the sum of n
numbers, stored in an array.
Algorithm SumR(a, n)
{ count := count + 1;
if (n ≤ 0) then
{ count := count + 1;
return 0.0;
}
else
{count := count + 1;
return SumR(a, n-1) + a[n];
}
}
The step count for the above algorithm can be expressed as the recursive formula :

tSumR(n) = 2                     if n = 0
         = 2 + tSumR(n – 1)      if n > 0

Repeated substitution gives tSumR(n) = 2n + tSumR(0) = 2n + 2.
The step count indicates how the run time of a program changes with the instance
characteristics.
For example, if the step count is 2n + 2 and n increases by a factor of 10, then
the run time also increases by a factor of 10; that is, the run time grows linearly
in n.
The input size is one of the instance characteristics, that is frequently used.
For any problem instance, the input size is the number of words / elements
needed to describe that instance.
For example, for the problem of summing an array with n elements, the input size
is n + 1.
Now, let us consider the problem of adding two m X n matrices a and b together.
The concerned algorithm is :
Algorithm Add(a, b, c, m, n)
{
    for i := 1 to m do
        for j := 1 to n do
            c[i, j] := a[i, j] + b[i, j];
}
After introducing the count incrementing statements, the above algorithm becomes :
Algorithm Add(a, b, c, m, n)
{
    for i := 1 to m do
    {
        count := count + 1; // for the i for loop
        for j := 1 to n do
        {
            count := count + 1; // for the j for loop
            c[i, j] := a[i, j] + b[i, j];
            count := count + 1; // for the assignment
        }
        count := count + 1; // for the last check of the j loop
    }
    count := count + 1; // for the last check of the i loop
}
The above algorithm can be further simplified with only count incrementing
statements as shown below :
Algorithm Add(a, b, c, m, n)
{
    for i := 1 to m do
    {
        count := count + 2;
        for j := 1 to n do
            count := count + 2;
    }
    count := count + 1;
}
If m > n, the two for loops are interchanged to make the step count as
2mn + 2n + 1.
Ex : Step-count table (tabulation method) for the algorithm Sum :

Statement                    s/e   Frequency   Total steps
Algorithm Sum(a, n)           0       –            0
{                             0       –            0
    s := 0.0;                 1       1            1
    for j := 0 to n-1 do      1      n+1          n+1
        s := s + a[j];        1       n            n
    return s;                 1       1            1
}                             0       –            0
Total                                           2n + 3
Ex : Find the step count for the algorithm SumR, which recursively finds the sum of
n numbers stored in an array, using the tabulation method.

Statement                              s/e    Frequency      Total steps
                                              n=0   n>0      n=0   n>0
Algorithm SumR(a, n)                    0      –     –        0     0
{                                       0      –     –        0     0
    if (n ≤ 0) then                     1      1     1        1     1
        return 0.0;                     1      1     0        1     0
    else return SumR(a, n-1) + a[n];   1+x     0     1        0    1+x
}                                       0      –     –        0     0
Total                                                         2    2+x

Note : x = tSumR(n – 1)
Ex : Find the step count for the algorithm, which adds two m × n matrices, using
the tabulation method.

Statement                                  s/e   Frequency   Total steps
Algorithm Add(a, b, c, m, n)                0       –             0
{                                           0       –             0
    for i := 1 to m do                      1      m+1           m+1
        for j := 1 to n do                  1     m(n+1)        mn+m
            c[i, j] := a[i, j] + b[i, j];   1      mn            mn
}                                           0       –             0
Total                                                        2mn + 2m + 1
Once sufficient experience has been gained, the construction of the frequency table
can be avoided.
After determining the instance characteristics, which influence the step count, the
step count can be determined by using either of the above two methods.
In some algorithms, the chosen parameters are not adequate to determine the step
count uniquely.
For these algorithms, the step counts are determined in three cases :
- Best case
- Worst case
- Average case.
The best-case step count is the minimum number of steps that can be executed
for the given parameters.
The worst-case step count is the maximum number of steps that can be
executed for the given parameters.
The average-case step count is the average number of steps executed on
instances with the given parameters.
When two programs are compared, the break-even point is the value of n beyond which
the ordering of their performances switches.
Asymptotic Notation :
This notation is used to make meaningful statements about the time and space
complexities of an algorithm.
Let us assume that the functions f and g are nonnegative.
O (Big “oh”) – Notation : The function f(n) = O(g(n)) iff there exist positive
constants c and n0 such that f(n) ≤ c * g(n) for all n, n ≥ n0. (Here, = is read
as “is”.)
Ex :
3n + 2 = O(n), since 3n + 2 ≤ 4n for all n ≥ 2.
100n + 6 = O(n), since 100n + 6 ≤ 101n for all n ≥ 6.
10n² + 4n + 2 = O(n²), since 10n² + 4n + 2 ≤ 11n² for all n ≥ 5.
3n + 2 ≠ O(1), since 3n + 2 is not less than or equal to any constant c for all
n ≥ n0.
The meanings of these notations are :
O(1) means a constant computing time.
O(n) is called linear.
O(n²) is called quadratic.
O(n³) is called cubic.
O(2ⁿ) is called exponential.
An O(log n) algorithm is faster than an O(n) algorithm for sufficiently large n.
O(n log n) is better than O(n²), but not as good as O(n).
According to this notation, g(n) is an upper bound on the value of f(n) for all n,
n≥n0. So, g(n) should be as small as possible for which f(n) = O(g(n)).
Ω (Omega) – Notation : The function f(n) = Ω(g(n)) iff there exist positive
constants c and n0 such that f(n) ≥ c * g(n) for all n, n ≥ n0.
Ex :
3n + 3 = Ω(n)
3n + 3 = Ω(1)
10n² + 4n + 2 = Ω(n²)
10n² + 4n + 2 = Ω(n)
10n² + 4n + 2 = Ω(1)
6 * 2ⁿ + n² = Ω(2ⁿ)
6 * 2ⁿ + n² = Ω(n²)
6 * 2ⁿ + n² = Ω(n)
6 * 2ⁿ + n² = Ω(1)
It is observed from the above examples that there are several functions g(n) for
which f(n) = Ω(g(n)).
Here, the function g(n) is a lower bound on f(n). So, g(n) should be as large a
function of n as possible for which f(n) = Ω(g(n)) is true.
θ (Theta) – Notation : The function f(n) = θ(g(n)) iff there exist positive
constants c1, c2, and n0 such that c1g(n) ≤ f(n) ≤ c2g(n) for all n, n ≥ n0.
3n + 3 = θ(n)
10n² + 4n + 2 = θ(n²)
6 * 2ⁿ + n² = θ(2ⁿ)
3n + 2 ≠ θ(1)
3n + 3 ≠ θ(n²)
10n² + 4n + 2 ≠ θ(n)
10n² + 4n + 2 ≠ θ(1)
The function f(n) = θ(g(n)) iff g(n) is both an upper and lower bound on f(n).
o (Little “oh”) – Notation : The function f(n) = o(g(n)) iff the limit of
f(n) / g(n), as n → ∞, is 0.
Ex :
3n + 3 = o(n log n)
6 * 2ⁿ + n² = o(3ⁿ)
6 * 2ⁿ + n² = o(2ⁿ log n)
ω (Little omega) – Notation : The function f(n) = ω(g(n)) iff the limit of
g(n) / f(n), as n → ∞, is 0.
Asymptotic complexity of Sum :
The step table (Statement | s/e | Frequency | Total steps) can also be filled in
with asymptotic values instead of exact counts; the total for Sum is then Θ(n), and
the corresponding table for Add gives a total of Θ(mn).
The growth of the various functions with the value of n is tabulated below:

log n    n     n log n    n²       n³        2ⁿ
0        1     0          1        1         2
1        2     2          4        8         4
2        4     8          16       64        16
3        8     24         64       512       256
4        16    64         256      4,096     65,536
5        32    160        1,024    32,768    4,294,967,296
Plotted against n, these values show how rapidly 2ⁿ outgrows the other functions.
DIVIDE-AND-CONQUER
General Method
In the divide-and-conquer strategy, the n inputs of a given problem are
split into k distinct subsets, 1 < k ≤ n, resulting in k subproblems.
If the subproblems are still large, they are further subdivided; this
process continues until each subproblem is easily solvable without
further division.
Usually, the subproblems resulting from this strategy are of the same type
as the original problem, so recursion is the natural technique.
Small(P) is a function that determines whether the input size is small enough to
compute the answer without splitting and returns a boolean value.
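To make the control abstraction concrete, here is an added, illustrative C sketch;
it uses array summation as a toy problem P, with k = 2 subproblems and “one
element” as Small(P):

#include <stdio.h>

/* A minimal divide-and-conquer pattern (k = 2), with array summation
   as the problem P. "Small" means a single element. */
static double dandc_sum(const double a[], int lo, int hi)
{
    if (lo == hi)               /* Small(P): solve directly */
        return a[lo];
    int mid = (lo + hi) / 2;    /* divide P into two subproblems */
    return dandc_sum(a, lo, mid)          /* solve the subproblems ... */
         + dandc_sum(a, mid + 1, hi);     /* ... and combine (f(n)) */
}

int main(void)
{
    double a[] = {3, 1, 4, 1, 5, 9, 2, 6};
    printf("%g\n", dandc_sum(a, 0, 7));   /* prints 31 */
    return 0;
}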
Let the size of P be n, and the sizes of the k subproblems be n1, n2, …, nk
respectively. The computing time of DAndC is specified by the recurrence relation

T(n) = g(n)                                   if n is small        (1)
     = T(n1) + T(n2) + … + T(nk) + f(n)       otherwise

Here,
- T(n) is the time taken by DAndC for an input of size n.
- g(n) is the time taken to compute the answer directly on small inputs.
- f(n) is the time required for splitting P and combining the solutions of the
subproblems.
When the subproblems are all of size n/b and a of them must be solved, the
recurrence takes the form

T(n) = T(1)                  n = 1                                  (2)
     = a T(n/b) + f(n)       n > 1

Beginning with the recurrence relation (2) and using the substitution
method, it can be shown that

T(n) = n^(log_b a) [T(1) + u(n)]

where u(n) = Σ over j = 1 to log_b n of h(b^j), and h(n) = f(n) / n^(log_b a).
The following table gives the asymptotic value of u(n) for various values
of h(n):

h(n)                      u(n)
O(n^r), r < 0             O(1)
Θ((log n)^i), i ≥ 0       Θ((log n)^(i+1) / (i + 1))
Ω(n^r), r > 0             Θ(h(n))
The above table helps in obtaining the asymptotic value of T(n) for many
recurrences that arise in the analysis of divide-and-conquer algorithms.
For example, in the binary search recurrence below, a = 1, b = 2, and f(n) = c for
some constant c. So, log_b a = 0 and h(n) = f(n) / n^(log_b a) = c = c(log n)^0 =
Θ((log n)^0); the table gives u(n) = Θ(log n), and hence T(n) = n^0 [T(1) + u(n)] =
Θ(log n).
Binary Search

Let a[i : l] be a list of elements sorted in nondecreasing order. The problem is to
determine whether a given element x is present in the list. If x is present, then
determine a value j such that a[j] = x. If x is not in the list, then j is set to 0.
If P has more than one element, it can be divided into a new, smaller subproblem.
Pick an index q in the range [i, l] (typically the middle index).
There are three possibilities:
1> x = a[q] : The problem P is solved.
2> x < a[q] : x has to be searched for in the sublist a[i : q – 1].
3> x > a[q] : x has to be searched for in the sublist a[q + 1 : l].
The following algorithm BinSrch describes the binary search and has four inputs a[],
i, l and x.
Algorithm BinSrch(a, i, l, x)
{
    if (l = i) then // If Small(P)
    {
        if (x = a[i]) then return i;
        else return 0;
    }
    else
    { // Reduce P into a smaller subproblem.
        mid := ⌊(i + l)/2⌋;
        if (x = a[mid]) then return mid;
        else if (x < a[mid]) then
            return BinSrch(a, i, mid – 1, x);
        else
            return BinSrch(a, mid + 1, l, x);
    }
}
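An added, illustrative iterative C counterpart (0-based, returning -1 rather than 0
for absence, since 0 is a valid C index), which also uses only O(1) space:

/* Iterative binary search on a sorted array; returns the index of x
   or -1 if x is absent. */
int bin_search(const int a[], int n, int x)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
        if (x == a[mid]) return mid;
        else if (x < a[mid]) hi = mid - 1;
        else lo = mid + 1;
    }
    return -1;
}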
The action of BinSearch can be described by a binary decision tree, in which the
circular nodes are called internal nodes and the square nodes are called external
nodes.
A successful search terminates at an internal node, and an unsuccessful search at
an external node.
Theorem : If n is in the range [2^(k–1), 2^k), then BinSearch makes at most k
element comparisons for a successful search and either k – 1 or k
comparisons for an unsuccessful search. (The time taken for a successful
search is O(log n) and for an unsuccessful search is Θ(log n).)
Proof : Let us consider the binary decision tree, which describes the
action of BinSearch on n elements.
All successful searches end at a circular node and unsuccessful searches
end at a square node.
If 2^(k–1) ≤ n < 2^k, then all circular nodes are at levels 1, 2, …, k and all
square nodes are at levels k and k + 1.
To determine the average behaviour, the size of the binary decision tree is
equated to the number of element comparisons in the algorithm.
The distance of a node from the root is one less than its level.
The internal path length I is the sum of the distances of all internal
nodes from the root.
The external path length E is the sum of the distances of all external
nodes from the root.
For any binary tree with n internal nodes, E and I are related by the
following formula.
E = I + 2n.
There is a simple relationship among E, I, and the average number of
comparisons in binary search.
Let As(n) be the average number of comparisons in a successful search, and
Au(n) the average number of comparisons in an unsuccessful search.
For an internal node, the number of comparisons needed is one more than
its distance from the root. So
As(n) = 1 + I / n.
Since every unsuccessful search ends at one of the n + 1 external nodes,
Au(n) = E / (n + 1).
The minimum value for As(n) and Au(n) can be achieved by an algorithm
whose binary decision tree has minimum external and internal path
length.
For this, the binary decision tree should have its external nodes on (at most two)
adjacent levels, and such a tree is exactly what the binary search algorithm
produces.
Since E is proportional to n log n, both As(n) and Au(n) are proportional to
log n.
The best, average and worst cases for successful and unsuccessful
searches are :
Successful Search :
Best Case : Θ(1)
Average Case : Θ(log n)
Worst Case : Θ(log n)
Unsuccessful Search :
Best, Average and Worst Cases : Θ(log n).
Quick Sort
The sorting technique quick sort also uses divide-and-conquer strategy, but it
differs from merge sort.
In merge sort, the file a[1 : n] was divided at its midpoint into subarrays, which
were independently sorted and later merged.
But, in quick sort, the division into subarrays is made in such a way that the sorted
subarrays do not need to be merged later.
In this, the elements in a[1 : n] are rearranged such that a[i] ≤ a[j], for all i
between 1 and m and for all j between m + 1 and n for some m, 1 ≤ m ≤ n.
Now, the elements in a[1 : m] and a[m + 1 : n] are independently sorted. Merge
operation is not needed.
Algorithm Partition(a, m, p)
// Rearranges a[m : p-1] so that the partitioning element v = a[m]
// ends up in its final position, which is returned.
{
    v := a[m]; i := m; j := p;
    repeat
    {
        repeat
        { i := i + 1;
        } until (a[i] ≥ v);   // move i right past elements < v
        repeat
        { j := j - 1;
        } until (a[j] ≤ v);   // move j left past elements > v
        if (i < j) then Interchange(a, i, j);
    } until (i ≥ j);
    a[m] := a[j]; a[j] := v; return j;   // v is now in its final position j
}
Algorithm Interchange(a, i, j)
{
p := a[i];
a[i] := a[j]; a[j] := p;
}
The algorithm for Quick Sort is :
Algorithm QuickSort(p, q)
{
if (p < q) then // If there is more than one element
{
// divide P into two subproblems.
j := Partition(a, p, q + 1);
// j is the position of the partitioning element.
// Solve the subproblems.
QuickSort(p, j – 1);
QuickSort(j + 1, q);
// There is no need for combining solutions.
}
}
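The two algorithms combine into the following added, illustrative C sketch
(0-based, with the first element of each subarray as the partitioning element, as
above):

/* Hoare-style partition and quicksort, adapted from the pseudocode
   above; hi is an inclusive index. */
static int partition(int a[], int lo, int hi)
{
    int v = a[lo], i = lo, j = hi + 1;
    for (;;) {
        do { i++; } while (i <= hi && a[i] < v);  /* scan right */
        do { j--; } while (a[j] > v);             /* scan left  */
        if (i >= j) break;
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
    a[lo] = a[j]; a[j] = v;   /* place v in its final position j */
    return j;
}

void quick_sort(int a[], int lo, int hi)
{
    if (lo < hi) {
        int j = partition(a, lo, hi);
        quick_sort(a, lo, j - 1);   /* elements <= v */
        quick_sort(a, j + 1, hi);   /* elements >= v */
    }
}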
The average number of element comparisons, CA(n), satisfies

CA(n) = n + 1 + (1/n) Σ 1≤k≤n [CA(k – 1) + CA(n – k)],  n ≥ 2       (1)

since the number of element comparisons required by Partition on the first call is
n + 1, and each position k of the partitioning element is equally likely. Here
CA(0) = CA(1) = 0.
Multiplying both sides of (1) by n, subtracting the corresponding equality for
n – 1, and dividing by n(n + 1) gives

CA(n) / (n + 1) = CA(n – 1) / n + 2 / (n + 1)                       (6)

Repeatedly substituting in (6) for CA(n – 1), CA(n – 2), …, we obtain

CA(n) / (n + 1) = 2 Σ 3≤k≤n+1 1/k                                   (7)

Since Σ 3≤k≤n+1 1/k ≤ loge(n + 2) – loge 2, (7) yields

CA(n) ≤ 2(n + 1) [loge (n + 2) – loge 2] = O(n log n).

So, the worst-case time is O(n²) and the average time is O(n log n).
By using the iterative algorithm for quick sort, the stack space can be reduced to
O(log n).
In this, the smaller of the two subarrays a[p : j – 1] and a[j + 1 : q] is always
sorted first.
The second recursive call is replaced by some assignment statements and a jump to
the beginning of the algorithm.
Let S(n) be the maximum stack space needed. Since the smaller of the two subarrays
is always sorted first, each stacked pair of bounds corresponds to a subarray of at
most half the enclosing size, so S(n) ≤ 2 log n. Thus the maximum stack space
needed is O(log n).
Algorithm QuickSort2(p, q)
// Sorts the elements a[p], …, a[q], using a stack of size 2 log n
// in place of the second recursive call.
{
    repeat
    {
        while (p < q) do
        {
            j := Partition(a, p, q + 1);
            if ((j – p) < (q – j)) then
            {
                Add(j + 1); // add j + 1 and q to the stack;
                Add(q); q := j – 1; // the smaller subfile a[p : j – 1] is sorted first
            }
            else
            {
                Add(p); // add p and j – 1 to the stack;
                Add(j – 1); p := j + 1; // the smaller subfile a[j + 1 : q] is sorted first
            }
        }
        if stack is empty then return;
        Delete(q); Delete(p); // delete q and p from the stack
    } until (false);
}
Merge Sort
In this sorting technique, the given list of n elements is split into two
sublists, and the splitting continues recursively until sublists of size 1 are
produced.
Each sublist of size 1 is trivially sorted, and the resulting sorted sequences
are merged, finally producing a single sorted sequence of n elements.
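The notes do not reproduce the MergeSort algorithm itself; the following is an
added, illustrative C sketch of the standard scheme (0-based, with an explicit
scratch array b at least as large as a):

#include <string.h>

/* Merge the sorted halves a[lo..mid] and a[mid+1..hi] using buffer b. */
static void merge(int a[], int b[], int lo, int mid, int hi)
{
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
        b[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) b[k++] = a[i++];
    while (j <= hi)  b[k++] = a[j++];
    memcpy(a + lo, b + lo, (size_t)(hi - lo + 1) * sizeof a[0]);
}

/* Sort a[lo..hi]; call as merge_sort(a, b, 0, n - 1). */
void merge_sort(int a[], int b[], int lo, int hi)
{
    if (lo >= hi) return;             /* sublist of size 1: already sorted */
    int mid = lo + (hi - lo) / 2;     /* split at the midpoint */
    merge_sort(a, b, lo, mid);        /* sort the left half */
    merge_sort(a, b, mid + 1, hi);    /* sort the right half */
    merge(a, b, lo, mid, hi);         /* merge the two sorted runs */
}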
Assuming n = 2^k and letting T(1) = a, the computing time satisfies
T(n) = 2T(n/2) + cn, which can be solved by repeated substitution:

T(n) = 2T(n/2) + cn
     = 2[2T(n/4) + cn/2] + cn
     = 4T(n/4) + 2cn
     = 4[2T(n/8) + cn/4] + 2cn
     = 8T(n/8) + 3cn
     .
     .
     .
     = 2^k T(1) + kcn
     = an + cn log n
1> To avoid the copying of records during merging, an auxiliary array Link[1 : n],
containing integers in the range [0, n], is defined along with the original array
a[].
These integers are interpreted as pointers to elements of a[]; a list is a sequence
of such pointers ending with zero.
2> The algorithm MergeSort uses stack space because of recursion, and this space
is proportional to log n.
This can be avoided by designing a merge sort algorithm that works bottom-up
rather than top-down.
For small lists it is better to use insertion sort, which also saves stack space.
The algorithm is :
Algorithm InsertionSort(a, n)
// Sorts the array a[1 : n] into nondecreasing order.
{
    for j := 2 to n do
    {
        item := a[j]; i := j – 1;   // insert a[j] into the sorted a[1 : j-1]
        while ((i ≥ 1) and (item < a[i])) do
        {
            a[i + 1] := a[i]; i := i – 1;   // shift larger elements right
        }
        a[i + 1] := item;
    }
}
The MergeSort algorithm can accordingly be modified to use links and to switch to
insertion sort for sublists of fewer than 15 elements.

Matrix Multiplication

Let A and B be two n × n matrices, and let each of A, B, and the product C = AB be
partitioned into four n/2 × n/2 submatrices:

A = | A11 A12 |    B = | B11 B12 |    C = | C11 C12 |
    | A21 A22 |        | B21 B22 |        | C21 C22 |

Then
C11 = A11 B11 + A12 B21
C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21
C22 = A21 B12 + A22 B22
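These four formulas are the ordinary matrix product computed one quadrant at a
time (eight n/2 × n/2 multiplications in all). For reference, an added,
illustrative C sketch of the ordinary Θ(n³) product they decompose:

/* Ordinary n x n matrix product C = A * B (row-major storage);
   the block formulas above compute exactly this, quadrant by quadrant. */
void mat_mul(int n, const double a[], const double b[], double c[])
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double s = 0.0;
            for (int k = 0; k < n; k++)
                s += a[i*n + k] * b[k*n + j];   /* row i of A times column j of B */
            c[i*n + j] = s;
        }
}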