Analysis of Algorithms Notes
Program: BS(CS)-VI
Course Title: Analysis of Algorithms
Course Code: CS-512
Credit Hours: 03
Course Week: 16
Total Credit Hours: 48
Instructor: Noorul Wahab
Course Objectives:
The subject belongs to applied mathematics and is very important for students of
computer science. Since Introduction to Algorithms has already been studied by the
students, in this subject they will study the analysis of those algorithms as well as
some more advanced algorithms. The material has broad application in the area of
software design, because before writing software it is necessary to check the
complexity as well as the performance of the algorithms used in it.
Week-1
-Introduction to Algorithms: An algorithm is a step-by-step solution to a particular
problem. The word 'algorithm' stems from 'algoritmi', the Latin form of the name of
the Persian mathematician Abu Abdullah Muhammad ibn Musa al-Khwarizmi. An
algorithm can be represented as pseudocode or a flowchart, or be implemented in a
particular programming language.
-Algorithmic Notations: Here is an algorithm for finding the largest of three integer
values.
Algorithm 1.0: (Largest of three integers) Given three integer values NUM1, NUM2,
NUM3 this algorithm finds the value MAX of the largest of these three.
Step 1. Set MAX = NUM1.
Step 2. [Compare and update] If NUM2 > MAX, then:
Set MAX = NUM2.
[End of if structure]
If NUM3 > MAX, then:
Set MAX = NUM3.
[End of if structure]
Step 3. [Print] Write MAX and Exit.
Each step is given a step number. The algorithm is completed with Exit. Comments
are put inside square brackets. For displaying output we use Write, and for reading
input we use Read.
Parameter: The variables in the problem statement which are not assigned specific
values are called parameters.
Instance: When we assign specific values to these parameters, we obtain one
instance of the problem.
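As noted above, an algorithm can also be implemented in a particular programming
language. A direct C++ rendering of Algorithm 1.0 is sketched below (the specific
values assigned to NUM1, NUM2, NUM3 are an assumption, making this one instance of
the problem):

#include <iostream>

int main() {
    int NUM1 = 5, NUM2 = 9, NUM3 = 7;   // one instance of the problem

    int MAX = NUM1;                     // Step 1
    if (NUM2 > MAX)                     // Step 2: compare and update
        MAX = NUM2;
    if (NUM3 > MAX)
        MAX = NUM3;

    std::cout << MAX << std::endl;      // Step 3: Write MAX (prints 9)
    return 0;
}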
Double alternative:
If condition, then:
[Block A]
Else:
[Block B]
[End of If structure]
Multiple alternatives:
If condition1, then:
[Block A]
Else if condition2, then:
[Block B]
Else if condition3, then:
[Block C]
Else:
[Block D]
[End of If structure]
[Flowchart of Algorithm 1.0: Start; MAX = NUM1; if NUM2 > MAX then MAX = NUM2;
if NUM3 > MAX then MAX = NUM3; Write MAX; Stop]
Repeat-while loop:
Repeat while condition:
[Block]
[End of loop]
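These pseudocode control structures map directly onto C++ (the language of the
recommended text). A small runnable sketch, with assumed conditions and blocks:

#include <iostream>

int main() {
    int n = 7;

    if (n % 2 == 0)                  // double alternative
        std::cout << "even\n";       // [Block A]
    else
        std::cout << "odd\n";        // [Block B]

    if (n < 0)                       // multiple alternatives
        std::cout << "negative\n";
    else if (n == 0)
        std::cout << "zero\n";
    else
        std::cout << "positive\n";

    int i = 1;
    while (i <= n) {                 // repeat-while loop
        i = i * 2;                   // [Block]
    }
    std::cout << i << std::endl;     // first power of 2 above n: prints 8
}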
The big-Oh notation gives an upper bound on the growth rate of a function.
Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants
c and n0 such that 0 <= f(n) <= cg(n) for n >= n0
Example 1: 2n + 10 is O(n)
2n + 10 <= cn
(c - 2) n >= 10
n >= 10/(c - 2)
Pick c = 3 and n0 = 10
The statement “f(n) is O(g(n))” means that the growth rate of f(n) is no more than the
growth rate of g(n).
We can use the big-Oh notation to rank functions according to their growth rate.
-Analysis of algorithms:
Algorithm 1.1: (Largest element of an array) Given an integer array ARR[1..n] of
size n, this algorithm finds the value MAX of the largest of all the elements of ARR.
                                                             operations
Step 1. Set MAX = ARR[1].                                    2
Step 2. Repeat for C = 2 to n by 1:                          2 + 2n
            [Compare and update] If ARR[C] > MAX, then:      2(n-1)
                Set MAX = ARR[C].                            2(n-1)
            [End of if structure]
        [End of loop]
Step 3. [Print] Write MAX and Exit.                          1
Total: 6n + 1
Example:
-We determine that algorithm 1.1 executes at most 6n + 1 primitive operations.
-We say that algorithm 1.1 “runs in O(n) time”.
Since constant factors and lower-order terms are eventually dropped anyhow, we can
disregard them when counting primitive operations.
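A C++ sketch of algorithm 1.1 (0-based indexing, as is idiomatic in C++; the sample
array in main is an assumption for illustration):

#include <iostream>

// Largest element of an array: runs in O(n) time.
int findMax(const int ARR[], int n) {
    int MAX = ARR[0];                 // Step 1
    for (int C = 1; C < n; C++)       // Step 2: n - 1 iterations
        if (ARR[C] > MAX)             // compare and update
            MAX = ARR[C];
    return MAX;                       // Step 3
}

int main() {
    int ARR[] = {4, 9, 13, 2, 30};
    std::cout << findMax(ARR, 5) << std::endl;   // prints 30
}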
Week-2
-Searching Algorithms:
-Sequential Search:
Problem: Determine whether the value/key x is in the list L of n elements.
Inputs: n, L, x
Outputs: the location of x in L (0 if x is not in L)
integer seqSearch(integer n, T L[], T x)
{
    for i = 1 to n step 1
        if L[i] == x
            return i        // found: return the location of x
        end if
    end for
    return 0                // x is not in L
}
Worst case: If x is not in the list then the above algorithm does n comparisons at
maximum. T(n) = n
Best case: If x is the first element in the list then it does only 1 comparison.
Average case: the formula n(1 - p/2) + p/2, derived under Week-4 below, gives the
average number of comparisons when p is the probability that x is in the list.
If p = 1, which means x is certainly in the list, then n(1 - p/2) + p/2 gives
(n + 1)/2.
If p = 1/2, which means there is a fifty-fifty chance that x is in the list, then
n(1 - p/2) + p/2 gives n[1 - (1/2)/2] + (1/2)/2
= n(1 - 1/4) + 1/4
= (4n - n)/4 + 1/4
= 3n/4 + 1/4
= (3n + 1)/4 (on average, 3/4 of the list is searched)
-Binary Search (on a sorted list):
Worst case: If n is 16 the algorithm does 5 comparisons to find out that x is not
in L, because every time it slices the portion to be searched in half.
For example, suppose the list is {4, 9, 13, 14, 16, 18, 23, 28, 30, 31, 33, 38, 40,
42, 44, 50} and we need to find x = 63 in this list.
T(n) = lg n + 1 (for n a power of 2)
Pass 1: The first pass divides the list at Mid = floor[(1 + 16)/2] = 8 and compares
the 8th element, 28, with x (63). As 63 is greater than 28, we do not need to
compare x with any element at an index lower than 8, because the list is already in
order.
Pass 2: Mid = floor[(9 + 16)/2] = 12. The 12th element, i.e. 38, is less than x, so
we search the right half again.
Pass 3: Mid = floor[(13 + 16)/2] = 14. The 14th element, 42, is again less than x.
Pass 4: Mid = floor[(15 + 16)/2] = 15. The 15th element, 44, is again less than x.
Pass 5: Mid = floor[(16 + 16)/2] = 16. The 16th element, 50, is less than x, and
the search range is exhausted: x is not in L.
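The binary search code itself does not appear in these notes; the following C++
sketch is consistent with the passes traced above (1-based indexing via a dummy
element at index 0, and 0 returned when x is absent, matching seqSearch):

#include <iostream>

// Binary search a sorted 1-based array L[1..n] for x.
// Returns the index of x, or 0 if x is not in L.
int binSearch(int n, const int L[], int x) {
    int low = 1, high = n;
    while (low <= high) {
        int mid = (low + high) / 2;   // floor[(low + high)/2]
        if (L[mid] == x)
            return mid;
        else if (L[mid] < x)
            low = mid + 1;            // search the right half
        else
            high = mid - 1;           // search the left half
    }
    return 0;                         // x is not in the list
}

int main() {
    int L[] = {0, 4, 9, 13, 14, 16, 18, 23, 28, 30, 31, 33, 38, 40, 42, 44, 50};
    std::cout << binSearch(16, L, 63) << std::endl;
    // prints 0; five comparisons, at indices 8, 12, 14, 15, 16
}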
Algorithm fibo1.0:
Problem: Determine the nth term in the Fibonacci sequence.
Inputs: a nonnegative integer n.
Outputs: fib, the nth term of the Fibonacci sequence.
int fib(int n)
{
    if (n <= 1)
        return n;                          // 0th term is 0, 1st term is 1
    else
        return fib(n - 1) + fib(n - 2);    // both subterms recomputed
}
To compute the 0th or 1st term we need 1 computation (the term itself). For the 2nd
term, 3 computations are required, i.e. one each for the 0th and 1st terms and 1 for
the 2nd term itself.
[Table omitted in these notes: n versus T(n), the number of terms computed]
Notice that for n greater than or equal to 2, the number of terms computed is always
greater than 2^(n/2):
T(n) > 2^(n/2)
We use induction to prove this.
Induction base: To compute a current term we need to consider the previous two
terms.
T(2) = 3 > 2^(2/2)
T(3) = 5 > 2^(3/2)
Induction hypothesis: Suppose T(m) > 2^(m/2) for all m such that 2 <= m < n.
Induction step: We need to prove the hypothesis for n.
T(n) = T(n-1) + T(n-2) + 1
T(n) > 2^((n-1)/2) + 2^((n-2)/2) + 1 (by the induction hypothesis)
We have changed = to > because we know that 2^((n-1)/2) < T(n-1) and
2^((n-2)/2) < T(n-2), so the whole expression on the R.H.S. must be less than the
L.H.S.
To keep the > intact we may diminish the term 2^((n-1)/2) to 2^((n-2)/2), and
similarly we may drop the 1:
T(n) > 2^((n-2)/2) + 2^((n-2)/2)
T(n) > 2 · 2^((n-2)/2)
T(n) > 2^((n-2)/2 + 1)
T(n) > 2^([(n-2) + 2]/2)
T(n) > 2^(n/2)
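The code of the iterative algorithm fibo1.1 is not reproduced in these notes; the
following is a sketch matching the description below (computes the first two terms,
then loops from 2 to n):

#include <iostream>
#include <vector>

// Iterative Fibonacci (fibo1.1): computes the 0th and 1st terms, then
// loops from 2 to n, i.e. n + 1 term computations in all.
int fib2(int n) {
    std::vector<int> f(n + 1 > 2 ? n + 1 : 2);
    f[0] = 0;                          // fib1
    f[1] = 1;                          // fib2
    for (int i = 2; i <= n; i++)
        f[i] = f[i - 1] + f[i - 2];    // each term computed exactly once
    return f[n];
}

int main() {
    std::cout << fib2(10) << std::endl;   // prints 55
}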
This algorithm computes fib1 and fib2 and then loops from 2 to n to find the nth
term, i.e. it takes n + 1 computations to find the nth term. Now let us compare the
two algorithms, fibo1.0 and fibo1.1, assuming both run on the same machine.
Week-3
-Sorting Algorithms: Arranging the input elements in some order, such as decreasing
or non-decreasing order.
-Exchange Sort:
Problem: Arrange the elements of a list in ascending order.
Inputs: array A of size n
Outputs: array A with its elements sorted in ascending order.
void exchangeSort(int A[], int n)
{
    for (i = 1; i <= n; i++)
        for (j = i + 1; j <= n; j++)    // compare A[i] with every later element
            if (A[j] < A[i])
                exchange A[i] with A[j];
}
-Other considerations: When applying the theory of complexity analysis one should
also be aware of the time it takes to execute the basic operations, the control
instructions, and other overhead instructions such as initialization instructions
before the loops.
Suppose Algo1 and Algo2 are two algorithms for the same problem, with time
complexities n and n^2 respectively. Algo1 appears more efficient if the basic
operations and the control instructions take the same time in both algorithms. But
suppose that Algo2 takes time t to execute its basic operation while Algo1 takes
1000t; then Algo1 is faster only when 1000t·n < t·n^2, i.e. when n > 1000.
A table of values would show that a term like n + 300 has very little effect
compared to an n^2 term as n grows. Therefore we drop the low-order terms and the
constants to get the order of a function. For example, T(n) for exchange sort is
n^2/2 - n/2; by dropping the low-order term and the constant factor we say that
exchange sort is of order n^2.
-Complexity order:
-big-Oh: It describes the asymptotic behavior of a function because it is concerned
only with eventual behavior. We say that "big O" puts an asymptotic upper bound on
a function and tells us how fast a function grows (or perhaps declines). The rate of
growth of a function is called its order; that is why the letter O is used.
Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive
constants c and n0 such that 0 <= f(n) <= c·g(n) for n >= n0.
For example, when we say n^2 + 10n is O(n^2), it means that for all n greater than
some n0 and a positive real constant c, the function n^2 + 10n falls beneath c·n^2
and stays there. The table below uses c = 2 and shows the crossover at n0 = 10.
n      2n^2     n^2 + 10n
1      2        11
2      8        24
3      18       39
4      32       56
5      50       75
6      72       96
7      98       119
8      128      144
9      162      171
10     200      200
11     242      231
12     288      264
13     338      299
14     392      336
15     450      375
-big-Omega (Ω): It puts an asymptotic lower bound on a function. Given functions
f(n) and g(n), we say that f(n) is Ω(g(n)) if there are positive constants c and n0
such that 0 <= c·g(n) <= f(n) for n >= n0.
5n^2 is Ω(n^2):
c·n^2 <= 5n^2
Take c = 5 and n0 = 1:
5 x 1 x 1 <= 5 x 1 x 1
n^2 + 10n is Ω(n^2):
c·n^2 <= n^2 + 10n
Take c = 1 and n0 = 1:
1 x 1 x 1 <= 1 x 1 + 10 x 1
1 <= 11
The table below uses c = 1; c·n^2 stays beneath n^2 + 10n for all n >= 1.
n      n^2      n^2 + 10n
1      1        11
2      4        24
3      9        39
4      16       56
5      25       75
6      36       96
7      49       119
8      64       144
9      81       171
10     100      200
11     121      231
12     144      264
13     169      299
14     196      336
15     225      375
-Θ notation (theta): It sandwiches a function inside two bounds. When we say that
f(n) is Θ(g(n)) we mean that g(n) is both a tight upper-bound and a tight lower-bound
on the growth of f(n). So for all n >= n0 the function f(n) is equal to g(n) to within a
constant factor.
Given functions f(n) and g(n), we say that f(n) is Θ(g(n)) if there are positive constants
c1, c2 and n0 such that 0 <= c1g(n) <= f(n) <= c2g(n) for n >= n0
In other words f(n) is Θ(g(n)) only if f(n) is O(g(n)) and f(n) is Ω(g(n))
-Little-oh: To state that f(n) grows strictly more slowly than g(n) we use
little-oh. Big-O may or may not give an asymptotically tight bound: for example,
2n^2 = O(n^2) is a tight bound, but to say that 2n is O(n^2) is not a tight bound.
Big-O is usually used as a tight upper bound, while little-o is used to describe an
upper bound that is not tight.
f(n) is o(g(n)) if for any positive constant c > 0, there exists a constant n0 > 0
such that 0 <= f(n) < c·g(n) for n >= n0.
This definition means that f grows much more slowly than g.
The difference between big-O and little-o is that big-O has to be true for at least
one positive constant c, while little-o must be true for every positive constant c.
Every function f that is little-o of g is also big-O of g, but not every function
that is big-O of g is little-o of g.
-2n is o(n^2):
2n < c·n^2
Divide both sides by c·n:
2/c < n
That is, for any c we can choose any n > 2/c.
Take c = 1; then n should be greater than 2. Let n = 3:
2 x 3 < 1 x 3 x 3
6 < 9
Now take c = 1, n = 100:
2 x 100 < 1 x 100 x 100
200 < 10000
If c = 1 and n = 1000000000:
2 x 1000000000 < 1 x 1000000000 x 1000000000
2000000000 < 1000000000000000000
The function f(n) becomes insignificant relative to g(n) as n approaches infinity.
-Little-omega (ω): f(n) is ω(g(n)) if for any positive constant c > 0, there exists
a constant n0 > 0 such that 0 <= c·g(n) < f(n) for n >= n0.
n^2/2 is ω(n):
c·n < n^2/2
Divide both sides by n:
c < n/2, or 2c < n
Take c = 1; then n should be greater than 2. Let n = 3:
1 x 3 < 3 x 3 / 2
3 < 4.5
Let c = 1, n = 100:
1 x 100 < 100 x 100 / 2
100 < 5000
Let c = 1, n = 1000000:
1 x 1000000 < 1000000 x 1000000 / 2
1000000 < 500000000000
The function f(n) becomes arbitrarily large relative to g(n) as n approaches
infinity.
-Properties of order:
f(n) = Ω(g(n)) if and only if g(n) = O(f(n))
f(n) = ω(g(n)) if and only if g(n) = o(f(n))
f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and g(n) = O(f(n))
Week-4
- Complexity analysis
o Time complexity:
o Every-case
o Worst-case
o Average-case
-Every-case time complexity: When the basic operation is always done the same
number of times for every instance of size n, its time complexity is called
every-case. Consider the following algorithm that adds all the numbers in an array:
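The add-all algorithm itself is missing from these notes; a minimal sketch whose
basic operation (the addition) is done exactly n times for every instance of size n:

#include <iostream>

// Every-case time complexity: the addition is done exactly n times
// regardless of the array contents, so T(n) = n.
int addAll(int n, const int S[]) {
    int result = 0;
    for (int i = 0; i < n; i++)
        result = result + S[i];      // basic operation
    return result;
}

int main() {
    int S[] = {10, 7, 11, 5, 13, 8};
    std::cout << addAll(6, S) << std::endl;   // prints 54
}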
-Matrix multiplication:
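No details follow this heading in the notes; a standard n x n matrix multiplication
sketch, whose basic operation (the innermost multiplication) is done n^3 times in
every case:

#include <iostream>
#include <vector>

using Matrix = std::vector<std::vector<int>>;

// Every-case example: the innermost multiplication executes
// n * n * n times for every pair of n x n matrices, so T(n) = n^3.
Matrix matrixMult(int n, const Matrix& A, const Matrix& B) {
    Matrix C(n, std::vector<int>(n, 0));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                C[i][j] += A[i][k] * B[k][j];   // basic operation
    return C;
}

int main() {
    Matrix A = {{1, 2}, {3, 4}}, B = {{5, 6}, {7, 8}};
    Matrix C = matrixMult(2, A, B);
    std::cout << C[0][0] << ' ' << C[1][1] << std::endl;   // prints 19 50
}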
-Average case of Sequential Search: Let p be the probability that x is in the
list. Assuming x is equally likely to be in any of the n slots, the probability
that x is at the kth position is p/n. The probability that x is not in the list is
1 - p, and to find out that x is not in the list we must do the basic operation n
times.
T(n) = (1 x p/n) + (2 x p/n) + (3 x p/n) + … + (n x p/n) + n(1 - p)
= p/n (1 + 2 + 3 + … + n) + n(1 - p)
= p/n · n(n + 1)/2 + n(1 - p)
= p(n + 1)/2 + n(1 - p)
= n(1 - p/2) + p/2 (the formula used under Week-2)
Week-5.6
-Introduction to Computational Complexity: It is the study of all possible
algorithms that can solve a given problem. A computational complexity analysis
tries to determine a lower bound on the efficiency of all algorithms for a given
problem. For example, the lower bound for comparison-based sorting algorithms is
Ω(n log n), which means that none of the sorting algorithms of this kind designed
so far can do better than n log n. The table below shows that for n = 10^9 an
algorithm with complexity n^2 (such as exchange sort) takes about 31.7 years, but
an algorithm with complexity n log n (such as merge sort) takes about 30 seconds.
Let us assume that one basic operation takes 10^-9 seconds (1 nanosecond).

n        f(n) = n          f(n) = n log n     f(n) = n^2
10       0.01 microsec     0.033 microsec     0.1 microsec
10^9     1 second          29.9 seconds       31.7 years
-Some sorting algorithms: We will study different sorting algorithms like insertion,
selection, merge and quick sort and will make a comparison between them.
When the extra space required by a sorting algorithm does not increase with the
input size (i.e. it is constant), the algorithm is called in-place.
-Insertion sort:
-Begin with the second element of the list (next time the third, and so on); call
it the selected element.
-Shift the elements before it one position to the right until the correct position
of the selected element is found, and insert the selected element at that position.
-Repeat the above steps until all the elements are done.
-Selection sort (a code sketch follows this list):
-Find the minimum value in the list.
-Swap this minimum value with the element at the first position. Once this minimum
value sits in the first position it is at its correct final position; the next
repetition will not consider this first element but will start with the second
element.
-Repeat the above steps for the remainder of the list.
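A C++ sketch of these steps (0-based indexing; the sample array is an assumption):

#include <iostream>
#include <utility>

// Selection sort: find the minimum of the unsorted remainder
// and swap it into place, then repeat on the rest of the list.
void selectionSort(int A[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int minIndex = i;                  // find the minimum value
        for (int j = i + 1; j < n; j++)
            if (A[j] < A[minIndex])
                minIndex = j;
        std::swap(A[i], A[minIndex]);      // put it at its final position
    }
}

int main() {
    int A[] = {5, 3, 7, 2};
    selectionSort(A, 4);
    for (int x : A) std::cout << x << ' ';   // prints 2 3 5 7
}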
Week-7
-Best and Worst time complexity of insertion sort:
                                                     cost    times
1- int numToInsert, insertLoc;                       -       -
2- for(int i = 2; i <= SIZE; i++) {                  c1      n
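// Lines 3 onward of this annotated listing are missing from the notes.
// The lines below are a reconstruction (a sketch, not the original code),
// chosen so that the per-line costs c2..c7 and execution counts match the
// expression for T(n) given below. arr is 1-based, as in the analysis;
// insertLoc, declared on line 1, is not used in this reconstruction.
3-     numToInsert = arr[i];                         c2      n-1
4-     int j = i - 1;                                c3      n-1
5-     while (j >= 1 && numToInsert < arr[j]) {      c4      Σ(i=2 to n) ti
6-         arr[j + 1] = arr[j];                      c5      Σ(i=2 to n) (ti - 1)
7-         j--;                                      c6      Σ(i=2 to n) (ti - 1)
       }
8-     arr[j + 1] = numToInsert;                     c7      n-1
   }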
Let ti be the number of times the inner loop test (step 5) is executed for a
particular value of i.
T(n) = c1n + c2(n-1) + c3(n-1) + c4·Σ(i=2 to n) ti + c5·Σ(i=2 to n)(ti - 1)
+ c6·Σ(i=2 to n)(ti - 1) + c7(n-1)
The best case occurs when the elements are already in ascending order. In that case
ti in step 5 becomes 1, because on every iteration of the outer loop the condition
numToInsert < arr[j] is false, so steps 6 and 7 are not executed.
T(n) = c1n + c2(n-1) + c3(n-1) + c4(n-1) + c5(0) + c6(0) + c7(n-1)
= c1n + c2(n-1) + c3(n-1) + c4(n-1) + c7(n-1)
= c1n + c2n - c2 + c3n - c3 + c4n - c4 + c7n - c7
= c1n + c2n + c3n + c4n + c7n - c2 - c3 - c4 - c7
= (c1 + c2 + c3 + c4 + c7)n - (c2 + c3 + c4 + c7)
We can express the above as an - b for constants a and b that depend on the
statement costs ci; that is a linear function.
The worst case occurs when the elements are in descending order. In that case we
must compare each element arr[i] with every element in the sorted subarray
arr[1…i-1], so ti = i for i = 2, 3, …, n.
Σ(i=2 to n)(ti - 1) = (2-1) + (3-1) + (4-1) + … + (n-1)
= 1 + 2 + 3 + … + (n-1)
= n(n-1)/2 [proof by induction]
Also Σ(i=2 to n) ti = Σ(i=2 to n) i = n(n+1)/2 - 1.
Substituting the sums into T(n):
T(n) = c4n^2/2 + c5n^2/2 + c6n^2/2 + c1n + c2n + c3n + c4n/2 - c5n/2 - c6n/2 + c7n
- c2 - c3 - c4 - c7
= (c4/2 + c5/2 + c6/2)n^2 + (c1 + c2 + c3 + c4/2 - c5/2 - c6/2 + c7)n
- (c2 + c3 + c4 + c7)
We can express the above as an^2 + bn - c for constants a, b and c that depend on
the statement costs ci; that is a quadratic function.
Week-8
-Lower bound for algorithms that remove at most one inversion per comparison:
We use indices for the elements of a permutation; for example, in the permutation
(3, 1, 2), k1 = 3, k2 = 1, and k3 = 2.
Inversion: An inversion in a permutation is a pair (ki, kj) such that i < j and
ki > kj.
For example, the permutation (5, 3, 7, 2) contains the following inversions:
(5, 3), (5, 2), (3, 2), (7, 2)
If there are no inversions in a permutation then that permutation is in sorted
order.
An algorithm that removes at most one inversion after each comparison and sorts n
distinct keys only by comparisons of keys must do at least n(n-1)/2 comparisons in
the worst case. To prove this we need to show that there is a permutation with
n(n-1)/2 inversions; when that permutation is given as input, the algorithm has to
remove that many inversions and therefore do at least that many comparisons. Such a
permutation is the list in descending order (n, n-1, n-2, …, 3, 2, 1).
Proof by induction: The permutation (n, n-1, n-2, …, 3, 2, 1) has n(n-1)/2
inversions, since every pair (ki, kj) with i < j is inverted and there are n(n-1)/2
such pairs.
For the average case, pair each permutation with its reverse; every pair of keys is
inverted in exactly one of the two, so together they contain n(n-1)/2 inversions.
Therefore, if we consider all permutations equally probable for the input, the
average number of inversions in the input is n(n-1)/4. Because we assumed that the
algorithm removes at most one inversion after each comparison, on the average it
must do at least this many comparisons to remove all inversions and thereby sort
the input.
Insertion sort removes at most the inversion consisting of S[j] and x after each
comparison, and therefore it is in the class of algorithms addressed above, with
worst-case complexity n(n-1)/2 and average-case complexity n(n-1)/4. Exchange sort
and selection sort are also in this class.
For example, the permutation (3, 15, 9, 8) has inversions (15, 9), (15, 8), (9, 8).
If we compare 15 and 9 and exchange them, the permutation becomes (3, 9, 15, 8),
which has inversions (9, 8) and (15, 8). Therefore exchange sort removes at most
one inversion after each comparison.
Week-11
-Introduction to divide and conquer approach: This design strategy involves the
following steps:
-Divide an instance of a problem into one or more smaller instances.
-Solve (conquer) each of the smaller instances using recursion.
-In some cases it is necessary to combine the solutions to the smaller instances in
order to obtain the solution to the original instance.
The following examples of recursive algorithms and their analyses show different
techniques for this kind of analysis.
For the recurrence Rec 1 solved below, the result is
T(n) = 2 lg n + 2
and if n is not a power of 2,
T(n) = 2⌊lg n⌋ + 2
Substitution method:
T(n) = T(n/2) + 2 ---------------------------------------------- Rec 1
for n > 1, n a power of 2
T(1) = 2
Substitute n = 2^k, and write Ck for T(2^k):
T(n/2) = T(2^k / 2) = T(2^(k-1)) = C(k-1)
C0 = 2
Ck = C(k-1) + 2 ------------------------------------------------ Rec 2
The first few values of the new Rec 2:
C1 = C0 + 2 = 2 + 2 = 4
C2 = C1 + 2 = 4 + 2 = 6
C3 = C2 + 2 = 6 + 2 = 8
C4 = C3 + 2 = 8 + 2 = 10
The solution to the transformed recurrence is:
Ck = 2k + 2
By inverse substitution:
2^k = n
k = lg n
T(n) = 2 lg n + 2
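The recursive algorithm whose multiplications the next recurrence counts is not
shown in the notes; factorial, assumed here as the example, fits the description
(one multiplication at the top level, none when n = 0):

#include <iostream>

// t(n) = t(n-1) + 1 multiplications, with t(0) = 0, so t(n) = n.
long long fact(int n) {
    if (n == 0)
        return 1;                  // base case: no multiplication done
    return n * fact(n - 1);        // 1 multiplication plus t(n-1) more
}

int main() {
    std::cout << fact(5) << std::endl;   // prints 120, using t(5) = 5 multiplications
}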
t(n) = t(n-1) + 1
The number of multiplications done at the top level is 1 plus the number of
multiplications done in the recursive call.
The recursion stops when n = 0 and no multiplication is done in that step.
Therefore
t(0) = 0
t(1) = t(0) + 1 = 0 + 1 = 1
t(2) = t(1) + 1 = 1 + 1 = 2
t(3) = t(2) + 1 = 2 + 1 = 3
t(4) = t(3) + 1 = 3 + 1 = 4
The solution to the recurrence is:
t(n) = n
[Figure: merge sort on the list 3 2 8 1 — divide into 3 2 and 8 1, sort each half
to get 2 3 and 1 8, then merge into 1 2 3 8]
-Analysis of extra space usage: Merge sort is not an in-place algorithm, because
the extra space required increases with the input size. Inside mergeSort() we
create two arrays, L1 and L2. Summed over all levels of the recursion, this extra
space is about two times the space required for the original elements, because in
each recursive call the combined size of the two arrays is half the size at the
previous level (n + n/2 + n/4 + … < 2n). The following algorithm does much of the
manipulation on the original array and reduces the temporary array size to n.
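That reduced-space algorithm is not reproduced in these notes; the following is a
sketch of one common variant, which merges through a single temporary array of
size n:

#include <iostream>
#include <vector>

// Merge sort variant discussed above (a sketch): all merging goes through
// one shared temporary array of size n, so the extra space is n rather
// than roughly 2n.
void mergeSort2(std::vector<int>& A, std::vector<int>& tmp, int low, int high) {
    if (low >= high) return;
    int mid = (low + high) / 2;
    mergeSort2(A, tmp, low, mid);                  // sort each half
    mergeSort2(A, tmp, mid + 1, high);
    int i = low, j = mid + 1;
    for (int k = low; k <= high; k++)              // merge into tmp
        if (j > high || (i <= mid && A[i] <= A[j]))
            tmp[k] = A[i++];
        else
            tmp[k] = A[j++];
    for (int k = low; k <= high; k++)              // copy back into A
        A[k] = tmp[k];
}

int main() {
    std::vector<int> A = {3, 2, 8, 1};
    std::vector<int> tmp(A.size());
    mergeSort2(A, tmp, 0, (int)A.size() - 1);
    for (int x : A) std::cout << x << ' ';         // prints 1 2 3 8
}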
-Best case of Quick sort: If the partitioning produces two subarrays of sizes
⌊n/2⌋ and ⌈n/2⌉ - 1, the recurrence is
T(n) = 2T(n/2) + O(n)
A complete binary tree is a binary tree in which all internal nodes have exactly
two children and all leaves have the same depth d.
An essentially complete binary tree is a binary tree that is complete down to depth
d - 1, with the nodes at depth d as far to the left as possible.
-Heap sort:
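No details follow this heading in the notes; a compact C++ sketch of heap sort (the
array is viewed as an essentially complete binary tree stored in contiguous slots;
siftDown restores the max-heap property):

#include <iostream>
#include <utility>

// Restore the max-heap property for the subtree rooted at i in A[0..n-1].
void siftDown(int A[], int n, int i) {
    while (2 * i + 1 < n) {
        int child = 2 * i + 1;                      // left child
        if (child + 1 < n && A[child + 1] > A[child])
            child++;                                // right child is larger
        if (A[i] >= A[child]) break;                // heap property holds
        std::swap(A[i], A[child]);
        i = child;
    }
}

// Heap sort: build a max-heap, then repeatedly move the root to the end.
void heapSort(int A[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)   // build the heap bottom-up
        siftDown(A, n, i);
    for (int last = n - 1; last > 0; last--) {
        std::swap(A[0], A[last]);          // largest goes to its final slot
        siftDown(A, last, 0);              // re-heapify the remainder
    }
}

int main() {
    int A[] = {5, 3, 7, 2};
    heapSort(A, 4);
    for (int x : A) std::cout << x << ' ';   // prints 2 3 5 7
}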
-Decision trees for sorting algorithms: A binary tree can be constructed to represent
all the possible permutations of keys in a list in order to sort them. Such trees are
called decision trees because at each node a decision is taken as to which node to visit
next. Below is an example of a sorting algorithm that sorts three keys:
void sortthree(int n1, int n2, int n3)
{
    if (n1 < n2)
    {
        if (n2 < n3)
            cout << n1 << " " << n2 << " " << n3;
        else if (n1 < n3)
            cout << n1 << " " << n3 << " " << n2;
        else
            cout << n3 << " " << n1 << " " << n2;
    }
    else if (n2 < n3)
    {
        if (n1 < n3)
            cout << n2 << " " << n1 << " " << n3;
        else
            cout << n2 << " " << n3 << " " << n1;
    }
    else
        cout << n3 << " " << n2 << " " << n1;
}
[Figure: decision tree for sortthree — the root compares n1 < n2; each internal
node compares two keys (n2 < n3, n1 < n3) and each of the six leaves is one of the
possible sorted orders]
Week-12.13
-Dynamic Programming: It is similar to the divide-and-conquer approach, i.e. a big
problem is divided into smaller problems. The difference is that in dynamic
programming the subproblems overlap, while in divide-and-conquer the subproblems
are solved independently.
In dynamic programming, once a subproblem is solved its result is saved and can
later be used to solve other subproblems instead of being recomputed. To use
dynamic programming, the problem must observe the principle of optimality: whatever
the initial state is, the remaining decisions must be optimal with regard to the
state following from the first decision. In other words, this principle applies if
an optimal solution to an instance of a problem always contains optimal solutions
to all subinstances.
Suppose there is an edge between every pair of vertices in the graph; then the
number of possible paths from each vertex to every other vertex grows factorially.
A subset of all these paths is the set of paths from one vertex to another that
pass through all the other vertices: there are (n-2)(n-3)…1 = (n-2)! of them.
Suppose we put the shortest paths between any two vertices in a matrix D like this:

D      1    2    3    4    5
1      0    1    3    1    4
2      9    0    3    2    5
3     10   11    0    4    7
4      6    7    2    0    3
5      3    4    6    4    0
Problem: Compute the shortest paths from each vertex in a weighted graph to each of
the other vertices. The weights are nonnegative numbers.
Inputs: A weighted, directed graph and n, the number of vertices in the graph. The
graph is represented by a two-dimensional array W which has both its rows and
columns indexed from 1 to n, where W[i][j] is the weight on the edge from the ith
vertex to the jth vertex.
Outputs: A two-dimensional array D, which has both its rows and columns indexed
from 1 to n, where D[i] [j] is the length of a shortest path from the ith vertex to the jth
vertex.
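The algorithm itself (Floyd's algorithm) is not reproduced in the notes; a C++
sketch matching the problem statement above, with 1-based indexing and a P array
recording intermediate vertices (the example edge weights in main are assumptions
for illustration):

#include <iostream>

const int INF = 1000000;   // stands in for "no edge" (infinity)

// Floyd's algorithm: D starts as a copy of W; after considering each vertex k
// as an intermediate point, D[i][j] is the shortest-path length from vi to vj.
// P[i][j] records an intermediate vertex on that path (0 if none).
void floyd(int n, int W[][6], int D[][6], int P[][6]) {
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++) {
            D[i][j] = W[i][j];
            P[i][j] = 0;
        }
    for (int k = 1; k <= n; k++)            // three nested loops: O(n^3)
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                if (D[i][k] + D[k][j] < D[i][j]) {
                    D[i][j] = D[i][k] + D[k][j];
                    P[i][j] = k;            // going through vk is shorter
                }
}

int main() {
    int W[6][6], D[6][6], P[6][6];
    for (int i = 1; i <= 5; i++)            // assumed 5-vertex example
        for (int j = 1; j <= 5; j++)
            W[i][j] = (i == j) ? 0 : INF;
    W[1][2] = 1; W[1][4] = 1; W[2][3] = 3; W[4][3] = 2; W[5][1] = 3;
    floyd(5, W, D, P);
    std::cout << D[1][3] << std::endl;      // prints 3 (path v1 -> v4 -> v3)
}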
It is O(n^3), since each of the three nested loops runs n times.
Once we have P, we can use the following recursive function to print the
intermediate vertices on the shortest path between vertex q and vertex r.
If P is global then calling this function as path(5, 3) will print v1, v4.
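That print routine is not reproduced either; a sketch consistent with the stated
behavior (P is global, and the assumed P entries in main make path(5, 3) print
v1 v4):

#include <iostream>

int P[6][6];   // assumed to have been filled in by Floyd's algorithm

// Print the intermediate vertices on a shortest path from vq to vr.
void path(int q, int r) {
    if (P[q][r] != 0) {
        path(q, P[q][r]);                   // vertices before the midpoint
        std::cout << "v" << P[q][r] << " ";
        path(P[q][r], r);                   // vertices after the midpoint
    }
}

int main() {
    P[5][3] = 4;   // assumed: the shortest 5 -> 3 path passes through v4
    P[5][4] = 1;   // assumed: the 5 -> 4 leg passes through v1
    path(5, 3);    // prints v1 v4
    std::cout << std::endl;
}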
If in a graph of n nodes there is an edge from every vertex to every other vertex
and we consider all possible tours, as we did above, then the second vertex on the
tour can be any of (n-1) vertices, the third vertex can be any of (n-2), and so on;
the nth vertex is the only one remaining. This means that the total number of tours
is (n-1)! because (n-1)(n-2)…1 = (n-1)!
So any algorithm that considers all the possible tours is worse than exponential.
The length of an optimal tour = minimum(W[1][j] + D[vj][V - {v1, vj}]) for
2 <= j <= n
W[1][j] is the weight of the edge from vertex v1 to vertex vj.
D[vj][V - {v1, vj}] is the minimum length of the paths that start at vj and reach
v1 passing through all the vertices in V except v1 and vj.
This can be generalized to:
D[vi][A] = minimum(W[i][j] + D[vj][A - {vj}]) where i != 1, vi is not in A, and vj
ranges over A, a nonempty set.
And D[vi][Ø] = W[i][1] (a path that starts at vi and reaches v1 without passing
through any other vertex).
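The notes give the recurrence but no code; a bitmask dynamic-programming sketch
(vertex vi for i >= 2 is encoded as bit i-2; the 4-vertex weight matrix in main is
an assumed example, whose optimal tour length of 21 agrees with the entry in
Table D below):

#include <iostream>
#include <vector>
#include <algorithm>

const int INF = 1000000;

// D[i][A] = length of a shortest path from vi to v1 passing through
// exactly the vertices in the subset A, per the recurrence above.
int tspLength(int n, const std::vector<std::vector<int>>& W) {
    int subsets = 1 << (n - 1);                   // bitmasks over {v2..vn}
    std::vector<std::vector<int>> D(n + 1, std::vector<int>(subsets, INF));
    for (int i = 2; i <= n; i++)
        D[i][0] = W[i][1];                        // D[vi][empty] = W[i][1]
    for (int A = 1; A < subsets; A++)
        for (int i = 2; i <= n; i++) {
            if (A & (1 << (i - 2))) continue;     // vi must not be in A
            for (int j = 2; j <= n; j++)
                if (A & (1 << (j - 2)))           // vj ranges over A
                    D[i][A] = std::min(D[i][A],
                                       W[i][j] + D[j][A & ~(1 << (j - 2))]);
        }
    int full = subsets - 1;                       // A = {v2, ..., vn}
    int best = INF;
    for (int j = 2; j <= n; j++)                  // min(W[1][j] + D[vj][V - {v1, vj}])
        best = std::min(best, W[1][j] + D[j][full & ~(1 << (j - 2))]);
    return best;
}

int main() {
    std::vector<std::vector<int>> W = {
        {},                        // row 0 unused (1-based indexing)
        {0, 0, 2, 9, INF},         // assumed example weights
        {0, 1, 0, 6, 4},
        {0, INF, 7, 0, 8},
        {0, 6, 3, INF, 0}};
    std::cout << tspLength(4, W) << std::endl;   // prints 21
}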
Table D
Ø v1 v2 v3 v4 v3, v4 v2, v4 v2, v3 v2, v3, v4
1 21
2 1 ∞ 20
3 ∞ 8 14 12
4 6 4 ∞ ∞
Table P
Ø v1 v2 v3 v4 v3, v4 v2, v4 v2, v3 v2, v3, v4
1 3
2 3 2 3
3 2 4 4
4 2 2 2
-Loop invariant
-Strategies of algorithms:
1. Greedy
2. Divide & conquer
3. Prune & search
4. Dynamic programming
5. Branch and bound
6. Approximation
7. Heuristics
Recommended Readings:
1. Foundations of Algorithms Using C++ Pseudocode, Second Edition
By: Richard E. Neapolitan, Kumarss Naimipour
2. Introduction to Algorithms
By: Thomas H. Cormen
3. Websites that contain algorithms and research papers