CSE703 Module1

The document discusses algorithms and algorithm analysis. It covers types of algorithms such as divide and conquer, greedy methods, dynamic programming, and backtracking, as well as complexity analysis using asymptotic notations (Big O, Omega, and Theta). Key aspects such as time complexity, space complexity, and analysis of loops are explained, including loops with time complexities of O(1), O(n), O(n^2), and O(log n).

Amity School of Engineering & Technology

CSE703
• Algorithms, Analyzing Algorithms, Complexity of Algorithms, Growth
of Functions, Performance Measurements, Asymptotic Notations.
Recurrences- substitution method, recursion tree method, master
method
• Types of algorithm: Divide and Conquer with Examples Such as
Sorting, Matrix Multiplication, Convex Hull and Searching. Greedy
Methods with Examples Such as Optimal Reliability Allocation,
Knapsack, Minimum Spanning Trees – Prim’s and Kruskal’s
Algorithms, Single Source Shortest Paths – Dijkstra’s and Bellman
Ford Algorithms. Dynamic Programming with Examples Such as
Knapsack. All Pair Shortest Paths – Warshall’s and Floyd’s
Algorithms, Resource Allocation Problem. Backtracking, Branch and
Bound with Examples Such as Travelling Salesman Problem, Graph
Coloring, n-Queen Problem, Hamiltonian Cycles and Sum of
Subsets.

What is Algorithm Analysis?


• How to estimate the time required for an
algorithm
• Techniques that drastically reduce the
running time of an algorithm
• A mathematical framework that more rigorously describes the running time of an algorithm

• Many criteria affect the running time of an algorithm, including
– speed of CPU, bus and peripheral hardware
– design think-time, programming time and debugging time
– language used and coding efficiency of the programmer
– quality of input (good, bad or average)

Algorithm
Algorithm is a step-by-step procedure, which
defines a set of instructions to be executed in
a certain order to get the desired output.
Algorithms are generally created
independent of underlying languages, i.e. an
algorithm can be implemented in more than
one programming language.

Characteristics of an Algorithm
• Unambiguous − Algorithm should be clear and
unambiguous. Each of its steps (or phases), and their
inputs/outputs should be clear and must lead to only
one meaning.
• Input − An algorithm should have 0 or more well-
defined inputs.
• Output − An algorithm should have 1 or more well-
defined outputs, and should match the desired output.
• Finiteness − Algorithms must terminate after a finite
number of steps.
• Feasibility − Should be feasible with the available
resources.
• Independent − An algorithm should have step-by-step
directions, which should be independent of any
programming code.
Algorithm Complexity
Suppose X is an algorithm and n is the size of its input data. The time and space used by the algorithm X are the two main factors that decide the efficiency of X.
• Time Factor − Time is measured by counting
the number of key operations such as
comparisons in the sorting algorithm.
• Space Factor − Space is measured by
counting the maximum memory space
required by the algorithm.
The complexity of an algorithm f(n) gives the
running time and/or the storage space required
by the algorithm in terms of n as the size of
input data.

Analysis of Algorithm
Analysis of algorithms is the process of analyzing the problem-solving capability of an algorithm in terms of the time and space required (the amount of memory used during execution). However, the main concern of analysis of algorithms is the required time, i.e. performance. Generally, we perform the following types of analysis −

• Worst-case − The maximum number of steps taken on any instance of size n.
• Best-case − The minimum number of
steps taken on any instance of size n.
• Average case − An average number
of steps taken on any instance of
size n.

Asymptotic Notations
• Execution time of an algorithm depends on
the instruction set, processor speed, disk
I/O speed, etc. Hence, we estimate the
efficiency of an algorithm asymptotically.
• Time function of an algorithm is
represented by T(n), where n is the input
size.

Different types of asymptotic notations are used to represent the complexity of an algorithm. The following asymptotic notations are used to express the running-time complexity of an algorithm.
• O − Big Oh
• Ω − Big omega
• θ − Theta
• o − Little Oh
• ω − Little omega
• Big O Notation: The Big O notation defines an upper bound of an algorithm; it bounds a function only from above.

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0 }
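As a concrete sanity check of this definition (an illustration added here, not from the slides): f(n) = 3n + 2 is O(n), witnessed by the hypothetical constants c = 4 and n0 = 2.

```c
#include <assert.h>

/* Witness the Big-O definition for f(n) = 3n + 2 and g(n) = n using
   the (hypothetical) constants c = 4, n0 = 2. Returns 1 if
   0 <= f(n) <= c*g(n) holds for every n in [n0, limit]. */
int holds_big_o(int limit) {
    const int c = 4, n0 = 2;
    for (int n = n0; n <= limit; n++) {
        int f = 3 * n + 2;
        if (!(0 <= f && f <= c * n))
            return 0;
    }
    return 1;
}
```

Note that the bound fails at n = 1 (f(1) = 5 > 4), which is exactly why the definition only requires it for n >= n0.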
• Ω Notation: Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an asymptotic lower bound.

Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0 }
• Θ Notation: The theta notation bounds a function from above and below, so it defines exact asymptotic behavior.

Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 }

Little ο asymptotic notation

• Big-Ο is used as a tight upper bound on the growth of an algorithm’s effort (this effort is described by the function f(n)), even though, as written, it can also be a loose upper bound. “Little-ο” (ο()) notation is used to describe an upper bound that cannot be tight.

• Intuitively, in the o-notation, the function f(n) becomes insignificant relative to g(n) as n approaches infinity; that is, lim (n→∞) f(n)/g(n) = 0.

Little ω asymptotic notation

• Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that 0 <= c*g(n) < f(n) for every integer n ≥ n0.

• That is, f(n) becomes arbitrarily large relative to g(n) as n approaches infinity.

Control Structures
1. Sequence logic or Sequential flow

2. Selection logic or Conditional flow

3. Iteration logic or Repetitive flow



Space Complexity

• S(P) = C + S_P

where S(P) is the space required by program P, C is a constant (the fixed part), and S_P is an instance characteristic (the variable part).

Time Complexity

• T(P) = Compile Time + Run Time (tp)

Run time is denoted by tp. We count the number of program steps to estimate execution time.

The number of steps of any program depends on the kind of statement. Different statements have different counts. For example:
1. Comments count as zero steps.
2. Assignment statements are counted as one step, as they do not call any other algorithm.
3. For iterative statements such as for, while etc. we count the steps only for the control part.

Analysis of Loops
1) O(1): The time complexity of a function (or set of statements) is considered O(1) if it doesn’t contain a loop, recursion, or a call to any other non-constant-time function.

• A loop or recursion that runs a constant number of times is also considered O(1). For example, the following loop is O(1):

// Here c is a constant
for (int i = 1; i <= c; i++) {
    // some O(1) expressions
}

2) O(n): The time complexity of a loop is considered O(n) if the loop variable is incremented/decremented by a constant amount. For example, the following loops have O(n) time complexity:

// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}
for (int i = n; i > 0; i -= c) {
    // some O(1) expressions
}

• 3) O(n^c): The time complexity of nested loops is equal to the number of times the innermost statement is executed. For example, the following sample loops have O(n^2) time complexity:

for (int i = 1; i <= n; i += c) {
    for (int j = 1; j <= n; j += c) {
        // some O(1) expressions
    }
}
for (int i = n; i > 0; i -= c) {
    for (int j = n; j > 0; j -= c) {
        // some O(1) expressions
    }
}
• 4) O(log n): The time complexity of a loop is considered O(log n) if the loop variable is divided/multiplied by a constant amount.

// Here c is a constant greater than 1
for (int i = 1; i <= n; i *= c) {
    // some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}
• 5) O(log log n): The time complexity of a loop is considered O(log log n) if the loop variable is reduced/increased exponentially by a constant amount.

// Here c is a constant greater than 1
for (int i = 2; i <= n; i = pow(i, c)) {
    // some O(1) expressions
}
// Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 1; i = fun(i)) {
    // some O(1) expressions
}

• How to combine time complexities of consecutive loops?

• When there are consecutive loops, we calculate time complexity as the sum of the time complexities of the individual loops.
Example
for (int i = 1; i <= m; i += c) {
    // some O(1) expressions
}
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}
The time complexity of the above code is O(m) + O(n), which is O(m + n). If m == n, the time complexity becomes O(2n), which is O(n).

• How to calculate time complexity when there are many if, else statements inside loops?
• As we know, worst-case time complexity is the most useful among best, average and worst, so we need to consider the worst case: we evaluate the situation where the values in the if-else conditions cause the maximum number of statements to be executed. When the code is too complex to consider all if-else cases, we can get an upper bound by ignoring if-else and other complex control statements.
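A small invented example makes this concrete: one branch of the if-else costs O(1), the other O(n), so worst-case analysis charges the O(n) branch on every iteration of the outer loop, giving an O(n^2) upper bound. The function and flag names below are ours, for illustration only.

```c
#include <assert.h>

/* Hypothetical illustration for worst-case analysis of if-else inside
   a loop: the expensive branch costs O(n), the cheap branch O(1).
   Worst-case analysis assumes the expensive branch runs every time,
   giving the O(n^2) upper bound. */
long count_steps(int n, int take_expensive_branch) {
    long steps = 0;
    for (int i = 0; i < n; i++) {
        if (take_expensive_branch) {
            for (int j = 0; j < n; j++)   /* O(n) branch */
                steps++;
        } else {
            steps++;                      /* O(1) branch */
        }
    }
    return steps;
}
```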

What is the time complexity of following function fun()?

int fun(int n)
{
for (int i = 1; i <= n; i++)
{
for (int j = 1; j < n; j += i)
{
// Some O(1) task
}
}
}
Answer

• O(n log n)

What is the time complexity of the following code?
int a = 0, b = 0;
for (i = 0; i < N; i++)
{
a = a + rand();
}
for (j = 0; j < M; j++)
{
b = b + rand();
}
Answer

• O(max(N, M)) or O(N + M)



What is the time complexity of fun()?
int fun(int n)
{
int count = 0;
for (int i = n; i > 0; i /= 2)
for (int j = 0; j < i; j++)
count += 1;
return count;
}
Answer

• O(n)
What is the time complexity of fun()?

int fun(int n)
{
int count = 0;
for (int i = 0; i < n; i++)
for (int j = i; j > 0; j--)
count = count + 1;
return count;
}
Answer

• O(n^2)
What is the time complexity of fun()?

void fun(int n, int arr[])
{
int i = 0, j = 0;
for(; i < n; ++i)
while(j < n && arr[i] < arr[j])
j++;
}
Answer

• O(n)
What is the time complexity of the following code?

int a = 0, i = N;
while (i > 0)
{
a += i;
i /= 2;
}
Answer

• O(log N)
Recurrences

• Many algorithms are recursive in nature. When we analyze them, we get a recurrence relation for the time complexity: the running time on an input of size n is expressed as a function of n and of the running time on inputs of smaller sizes.

• A recurrence is an equation or
inequality that describes a
function in terms of its value on
smaller inputs. Recurrences are
generally used in the divide-and-conquer paradigm.

• Let us consider T(n) to be the running


time on a problem of size n.
• If the problem size is small enough,
say n < c where c is a constant, the
straightforward solution takes constant
time, which is written as θ(1). If the
division of the problem yields a
number of sub-problems with size n/b.

• To solve the sub-problems, the required time is a·T(n/b). If we consider the time required for division to be D(n) and the time required for combining the results of the sub-problems to be C(n), the recurrence relation can be represented as
T(n) = θ(1) if n < c, and T(n) = a·T(n/b) + D(n) + C(n) otherwise.

There are mainly four ways of solving recurrences:
1. Substitution Method
2. Iteration Method
3. Recursion Tree Method
4. Master Method
Substitution Method

The Substitution Method consists of two main steps:
(i) Guess the solution.
(ii) Use mathematical induction to find the boundary condition and show that the guess is correct.
Consider the recurrence T(n) = 2T(n/2) + n.

• We guess the solution is O(n log n). Thus for a constant c,
T(n) ≤ c n log n.
Substituting this into the given recurrence:
T(n) ≤ 2c(n/2) log(n/2) + n = cn log n − cn log 2 + n = cn log n − n(c log 2 − 1) ≤ cn log n for c ≥ 1.
Thus T(n) = O(n log n).
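Before doing the induction, the guess can also be sanity-checked numerically (a sketch added here; the helper names are ours): compute T(n) exactly from T(1) = 1, T(n) = 2T(⌊n/2⌋) + n and compare it against 2·n·⌊log2 n⌋ for all n up to a limit.

```c
#include <assert.h>

/* Substitution-method sanity check: compute T(n) exactly from
   T(1) = 1, T(n) = 2*T(n/2) + n (integer division) and verify the
   guessed bound T(n) <= 2 * n * floor(log2 n) for all 2 <= n <= limit. */
static int ilog2(int n) {                /* floor(log2 n) */
    int k = 0;
    while (n > 1) { n /= 2; k++; }
    return k;
}

int guess_holds(int limit) {             /* limit <= 4096 */
    static long T[4097];
    T[1] = 1;
    for (int n = 2; n <= limit; n++) {
        T[n] = 2 * T[n / 2] + n;
        if (T[n] > 2L * n * ilog2(n))
            return 0;                    /* bound violated */
    }
    return 1;                            /* bound holds everywhere */
}
```

A numeric check like this cannot replace the induction, but it catches a wrong guess quickly.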
Iteration Method

• Consider the recurrence
T(n) = 1        if n = 1
     = 2T(n-1)  if n > 1

• T(n) = 2T(n-1)
• = 2[2T(n-2)] = 2^2 T(n-2)
• = 4[2T(n-3)] = 2^3 T(n-3)
• = 8[2T(n-4)] = 2^4 T(n-4) …………….(Eq. 1)

• Repeating the procedure i times gives
T(n) = 2^i T(n-i)

• Put n-i = 1, i.e. i = n-1, in (Eq. 1):
• T(n) = 2^(n-1) T(1) = 2^(n-1) · 1   {T(1) = 1, given} = 2^(n-1)
= O(2^n)
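The unwinding can be confirmed directly (a small sketch added here): implement the recurrence as a recursive function and compare it with the closed form 2^(n-1).

```c
#include <assert.h>

/* Iteration-method check: T(1) = 1, T(n) = 2*T(n-1) unwinds to
   T(n) = 2^(n-1). Evaluate the recurrence directly by recursion. */
long rec_T(int n) {
    if (n == 1)
        return 1;
    return 2 * rec_T(n - 1);
}
```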
Consider the recurrence
T(n) = T(n-1) + 1 and T(1) = θ(1).

• T(n) = T(n-1) + 1
• = (T(n-2) + 1) + 1 = T(n-2) + 2
• = (T(n-3) + 1) + 2 = T(n-3) + 3
• = T(n-4) + 4
• = T(n-5) + 5
• = T(n-k) + k
• where k = n-1, so T(n-k) = T(1) = θ(1)
• T(n) = θ(1) + (n-1) = 1 + n - 1 = n = θ(n).
Recursion Tree Method
1. The recursion tree method is a pictorial representation of the iteration method, in the form of a tree whose nodes are expanded level by level.
2. In general, we take the second term of the recurrence as the root.
3. It is useful when the divide-and-conquer paradigm is used.

4. It is sometimes difficult to come up with a good guess. In a recursion tree, each root and child represents the cost of a single subproblem.
5. We sum the costs within each level of the tree to obtain a set of per-level costs, and then sum all per-level costs to determine the total cost of all levels of the recursion.
6. A recursion tree is best used to generate a good guess, which can then be verified by the substitution method.

• Consider T(n) = 2T(n/2) + n^2
Answer

• O(n^2)
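The per-level costs behind this answer can be tabulated (a sketch added here, for n a power of two): level i of the tree has 2^i nodes, each of cost (n/2^i)^2, so the level sum is n^2/2^i and the total is the geometric series n^2(1 + 1/2 + 1/4 + …) ≤ 2n^2.

```c
#include <assert.h>

/* Per-level costs of the recursion tree for T(n) = 2T(n/2) + n^2,
   n a power of two: level i has 2^i nodes, each of cost (n/2^i)^2,
   so the level sum is n^2 / 2^i. Summing the levels stays below
   2*n^2, which is why the total is O(n^2). */
double tree_total(int n) {
    double total = 0.0;
    for (int size = n; size >= 1; size /= 2) {
        double nodes = (double)n / size;          /* 2^i nodes at level i */
        total += nodes * (double)size * size;     /* level sum n^2 / 2^i  */
    }
    return total;
}
```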
Consider the following recurrence
T(n) = 4T(n/2) + n
Obtain the asymptotic bound using the recursion tree method.
Answer

• O(n^2)
Consider the following
recurrence
Answer

• O(n log_{3/2} n), i.e. O(n log n)
Consider the following recurrence
• T(n) = 3T(n/4) + n^2
Answer

• O(n^2)
Consider the following
recurrence
• T(n) = 2T(n/2) + n
Answer

• O(n log n)
Consider the following
recurrence
• T(n) = T(n/5) + T(4n/5) + n

• O(n log_{5/4} n), i.e. O(n log n)
Master Method

• The Master Method is used for solving recurrences of the form
T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function.
When applied to the analysis of a recursive algorithm, the constants and function take on the following significance:

• n is the size of the problem.
• a is the number of subproblems in the recursion.
• n/b is the size of each subproblem.

• f(n) is the sum of the work done outside the recursive calls, which includes the cost of dividing the problem and the cost of combining the solutions to the subproblems.
• It is not always possible to bound the function as required, so we distinguish three cases that tell us what kind of bound we can apply to the function.
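For the common special case f(n) = n^k, the three cases reduce to comparing a with b^k; the small helper below (a sketch, names ours, ignoring the polylog refinements) encodes that comparison and reproduces the answers on the following slides.

```c
#include <assert.h>

/* Classify T(n) = a*T(n/b) + n^k by the master theorem when f(n) is a
   plain polynomial n^k (polylog factors in f(n) are not handled here).
   Returns the case number:
     1 if a >  b^k : T(n) = Theta(n^(log_b a))
     2 if a == b^k : T(n) = Theta(n^k * log n)
     3 if a <  b^k : T(n) = Theta(n^k)                                  */
int master_case(int a, int b, int k) {
    long bk = 1;
    for (int i = 0; i < k; i++)
        bk *= b;                 /* b^k */
    if (a > bk) return 1;
    if (a == bk) return 2;
    return 3;
}
```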

• T(n)=9 T(n/3)+n
Answer

Θ(n^2)

• T(n)=T(2n/3) +1
Answer

• Θ(log2 n)

• T(n)=3T(n/4)+n log2 n
Answer

• Θ(n log2 n)
Solve the following

• T(n) = 8T(n/2) + 1000n^2
• T(n) = 2T(n/2) + 10n
• T(n) = 2T(n/2) + n^2
Answer

• Θ(n^3)
• Θ(n log n)
• Θ(n^2)
Solve the following using the Master Method
Answer

Divide and Conquer

1. Merge sort and quick sort
2. Binary search
3. Multiplication of large integers
4. Strassen’s Matrix Multiplication
5. Closest pair problem
Expected Outcomes

• Students should be able to
– Explain the idea and steps of divide and conquer
– Explain the ideas of mergesort and quicksort
– Analyze the time complexity of the mergesort and quicksort algorithms in the best, worst and average cases
Divide and Conquer

• A successful military strategy long before it became an algorithm design strategy
– Coalition uses divide-conquer plan in Fallujah, Iraq (Rowan Scarborough and Bill Gertz, THE WASHINGTON TIMES)
• Example: Your instructor gives you a 500-question assignment today and asks you to turn it in tomorrow. What should you do?
Small And Large Instance

• Small instance.
 Sort a list that has n <= 10 elements.
 Find the minimum of n <= 2 elements.
• Large instance.
 Sort a list that has n > 10 elements.
 Find the minimum of n > 2 elements.

A typical case of the divide and conquer technique
Recursion In Divide And Conquer
• Often the smaller instances that result from the
divide step are instances of the original problem
(true for our sort and min problems). In this case,
 If the new instance is a small instance, it is solved using
the method for small instances.
 If the new instance is a large instance, it is solved using
the divide-and-conquer method recursively.
• Generally, performance is best when the smaller
instances that result from the divide step are of
approximately the same size.

Recursive Binary Search
The Binary Search Algorithm

• The binary search algorithm is naturally recursive.
• That is, the action that we perform on the original list is the very same as the action that we perform on the sublists.
• As a consequence, it is very easy to write a recursive binary search function.
// Recursive binary search in a[i..l] (1-based), a sorted in non-decreasing order.
// Returns the index of x, or 0 if x is not present.
int BinSearch(int a[], int i, int l, int x)
{
    if (i > l)                       // empty range: not found
        return 0;
    if (i == l)                      // small instance: one element
        return (x == a[i]) ? i : 0;
    int mid = (i + l) / 2;           // reduce to smaller subproblems
    if (x == a[mid])
        return mid;
    else if (x < a[mid])
        return BinSearch(a, i, mid - 1, x);
    else
        return BinSearch(a, mid + 1, l, x);
}

-15,-6,0,7,9,23,54,82,101,112,125,131,142,151
Toal 14 elements a[1 : 14}
Low,high,mid we want to search x=151, x=- -14,x=9

Low High mid

1 14 7
8 14 11
12 14 13
14 14 14
// Iterative binary search
int BinSearchIter(int a[], int n, int x)
{
    int low = 1, high = n;
    while (low <= high)
    {
        int mid = (low + high) / 2;
        if (x < a[mid]) high = mid - 1;
        else if (x > a[mid]) low = mid + 1;
        else return mid;
    }
    return 0;
}
The BinarySearch() Function

int BinarySearch(int a[], int value, int left, int right)
{
    // See if the search has failed
    if (left > right)
        return -1;
    int middle = (left + right) / 2;
    // See if the value is in the first half
    if (value < a[middle])
        return BinarySearch(a, value, left, middle - 1);
    // See if the value is in the second half
    else if (value > a[middle])
        return BinarySearch(a, value, middle + 1, right);
    // The value has been found
    else
        return middle;
}
The BinarySearch() Function

• The signature of the preceding function is
(int a[], int value, int left, int right)
• This means that the initial function call would have to be
BinarySearch(a, value, 0, size - 1);
• However, that is not the standard interface for a search function.
The BinarySearch() Function

• Normally, the function call would be written as
BinarySearch(a, size, value);
• Therefore, we should write an additional BinarySearch() function with prototype
int BinarySearch(int a[], int size, int value);
• This function will call the other one and then report back the result.
The BinarySearch() Function

int BinarySearch(int a[], int size, int value)
{
    return BinarySearch(a, value, 0, size - 1);
}
• Theorem: If n is in the range [2^(k-1), 2^k), then binary search makes at most k element comparisons for a successful search, and either k-1 or k comparisons for an unsuccessful search. In other words, a successful search is O(log n) and an unsuccessful search is Θ(log n).
• Exercise: Design a binary search algorithm to find x in an ordered list. Analyze its worst-case and average-case behaviour. How would you modify the algorithm to eliminate unnecessary work if you are sure that x is in the list?

Practical Fast Matrix Multiplication
Strassen’s Matrix Multiplication

Basic Matrix Multiplication

Suppose we want to multiply two matrices of size N x N: for example A x B = C. For 2 x 2 matrices:

C11 = a11 b11 + a12 b21
C12 = a11 b12 + a12 b22
C21 = a21 b11 + a22 b21
C22 = a21 b12 + a22 b22

2 x 2 matrix multiplication can be accomplished in 8 multiplications (2^(log2 8) = 2^3).

Basic Matrix Multiplication algorithm

void matrix_mult() {
    for (i = 1; i <= N; i++) {
        for (j = 1; j <= N; j++) {
            // compute C[i][j] = sum over k = 1..N of a[i][k] * b[k][j]
            compute Ci,j;
        }
    }
}

Time analysis: three nested loops of N iterations each, so
T(N) = Σ_{i=1..N} Σ_{j=1..N} Σ_{k=1..N} c = cN^3 = O(N^3)
“Normal” Matrix Multiplication

[A11 A12] [B11 B12]   [C11 C12]
[A21 A22] [B21 B22] = [C21 C22]

C11 = A11 B11 + A12 B21
C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21
C22 = A21 B12 + A22 B22

8 multiplications: O(n^3)
Strassen’s Algorithm

[A11 A12] [B11 B12]   [C11 C12]
[A21 A22] [B21 B22] = [C21 C22]

P1 = (A11 + A22)(B11 + B22)    C11 = P1 + P4 - P5 + P7
P2 = (A21 + A22) B11           C12 = P3 + P5
P3 = A11 (B12 - B22)           C21 = P2 + P4
P4 = A22 (B21 - B11)           C22 = P1 + P3 - P2 + P6
P5 = (A11 + A12) B22
P6 = (A21 - A11)(B11 + B12)
P7 = (A12 - A22)(B21 + B22)

Fast Matrix Multiplication

• Strassen: 7 multiplies, 18 additions, O(n^2.81)
• Strassen-Winograd: 7 multiplies, 15 additions
• Coppersmith-Winograd: O(n^2.376)
– But this is not (easily) implementable
– “Previous authors in this field have exhibited their algorithms directly, but we will have to rely on hashing and counting arguments to show the existence of a suitable algorithm.”

– For the simple A*B product, with 8 multiplications of n/2 x n/2 matrices and 4 additions of n/2 x n/2 matrices, the overall computation time T(n) is given by the recurrence
– T(n) = b                for n <= 2
–        8T(n/2) + cn^2   for n > 2, where b and c are constants
– T(n) = O(n^3)
– -----------------------------------------------------
– By Strassen’s recurrence relation
– T(n) = b                for n <= 2
–        7T(n/2) + cn^2   for n > 2, where b and c are constants
– T(n) = O(n^(log2 7)) ≈ O(n^2.81)

Strassen’s Matrix Multiplication

• Strassen showed that 2 x 2 matrix multiplication can be accomplished in 7 multiplications and 18 additions or subtractions (2^(log2 7) = 2^2.807).

• This reduction is achieved by the Divide and Conquer approach.

Divide-and-Conquer
• Divide-and conquer is a general algorithm
design paradigm:
– Divide: divide the input data S in two or more disjoint
subsets S1, S2, …
– Recur: solve the subproblems recursively
– Conquer: combine the solutions for S1, S2, …, into a
solution for S
• The base case for the recursion are
subproblems of constant size
• Analysis can be done using recurrence
equations
Divide and Conquer Matrix Multiply
A  B = R
A0 A1 B0 B1 A0B0+A1B2 A0B1+A1B3
 =
A2 A3 B2 B3 A2B0+A3B2 A2B1+A3B3

•Divide matrices into sub-matrices: A0 , A1, A2 etc


•Use blocked matrix multiply equations
•Recursively multiply sub-matrices
Divide and Conquer Matrix Multiply

A x B = R
a0 x b0 = a0b0

• Terminate the recursion with a simple base case: a 1 x 1 product.

Strassen’s Matrix Multiplication

P1 = (A11+ A22)(B11+B22) C11 = P1 + P4 - P5 + P7
P2 = (A21 + A22) * B11 C12 = P3 + P5
P3 = A11 * (B12 - B22) C21 = P2 + P4
P4 = A22 * (B21 - B11) C22 = P1 + P3 - P2 + P6
P5 = (A11 + A12) * B22
P6 = (A21 - A11) * (B11 + B12)
P7 = (A12 - A22) * (B21 + B22)
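These identities can be verified mechanically. The sketch below (the function name is ours, not from the slides) computes a 2 x 2 product using only the seven scalar products P1..P7, so the result can be checked against the direct formula.

```c
#include <assert.h>

/* Compute a 2x2 product using only Strassen's seven scalar products
   P1..P7, exactly as in the formulas above. */
void strassen2x2(int A[2][2], int B[2][2], int C[2][2]) {
    int P1 = (A[0][0] + A[1][1]) * (B[0][0] + B[1][1]);
    int P2 = (A[1][0] + A[1][1]) * B[0][0];
    int P3 = A[0][0] * (B[0][1] - B[1][1]);
    int P4 = A[1][1] * (B[1][0] - B[0][0]);
    int P5 = (A[0][0] + A[0][1]) * B[1][1];
    int P6 = (A[1][0] - A[0][0]) * (B[0][0] + B[0][1]);
    int P7 = (A[0][1] - A[1][1]) * (B[1][0] + B[1][1]);
    C[0][0] = P1 + P4 - P5 + P7;
    C[0][1] = P3 + P5;
    C[1][0] = P2 + P4;
    C[1][1] = P1 + P3 - P2 + P6;
}
```

Only 7 multiplications are performed, at the price of more additions and subtractions.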

Comparison

C11 = P1 + P4 - P5 + P7
    = (A11 + A22)(B11 + B22) + A22 (B21 - B11) - (A11 + A12) B22 + (A12 - A22)(B21 + B22)
    = A11 B11 + A11 B22 + A22 B11 + A22 B22 + A22 B21 - A22 B11 - A11 B22 - A12 B22 + A12 B21 + A12 B22 - A22 B21 - A22 B22
    = A11 B11 + A12 B21

Strassen Algorithm (divide step)
// Divide matrices into sub-matrices and recursively multiply the
// sub-matrices. Note: as written this makes 8 recursive products, i.e.
// the plain divide-and-conquer scheme; Strassen replaces them with the
// 7 products P1..P7. Quadrants are assumed stored contiguously
// (block/Z-order layout), each holding n/4 of the n elements.
void matmul(int *A, int *B, int *R, int n) {
    if (n == 1) {
        (*R) += (*A) * (*B);
    } else {
        matmul(A,         B,         R,         n/4);  // R0 += A0*B0
        matmul(A,         B+(n/4),   R+(n/4),   n/4);  // R1 += A0*B1
        matmul(A+2*(n/4), B,         R+2*(n/4), n/4);  // R2 += A2*B0
        matmul(A+2*(n/4), B+(n/4),   R+3*(n/4), n/4);  // R3 += A2*B1
        matmul(A+(n/4),   B+2*(n/4), R,         n/4);  // R0 += A1*B2
        matmul(A+(n/4),   B+3*(n/4), R+(n/4),   n/4);  // R1 += A1*B3
        matmul(A+3*(n/4), B+2*(n/4), R+2*(n/4), n/4);  // R2 += A3*B2
        matmul(A+3*(n/4), B+3*(n/4), R+3*(n/4), n/4);  // R3 += A3*B3
    }
}

Time Analysis
Conclusions
• Strassen and Strassen-Winograd are too unstable for some matrices
• However, they are faster than naïve multiplication even for relatively small n
• Ques. Write Strassen’s algorithm for matrix multiplication and prove that it does 6n^2.81 - 6n^2 multiplication operations on matrix entries, where n is a power of 2.

Mergesort
Algorithm:
• Split array A[1..n] in two and make copies of each half in arrays B[1..⌈n/2⌉] and C[1..⌊n/2⌋]
 Sort arrays B and C
 Merge sorted arrays B and C into array A
Using Divide and Conquer: Mergesort
• Mergesort Strategy: split [first..last] at mid = ⌊(first + last)/2⌋; sort the two halves recursively by Mergesort; once both halves are sorted, merge them into a single sorted array.
Merge sort (Divide and conquer)
• Array a[1..10]
• = (310, 285, 179, 652, 351, 423, 861, 254, 450, 520)
• Splitting into two subarrays a[1..5] and a[6..10]
• Split a[1..5] into a[1..3] and a[4..5]
• Split a[1..3] into a[1..2] and a[3..3]
• (310 | 285 | 179 | 652, 351 | 423, 861, 254, 450, 520)
• a[1] and a[2] are compared and merged:
• 285, 310 | 179 | 652, 351 | 423, 861, 254, 450, 520
• 179, 285, 310 | 652, 351 | 423, 861, 254, 450, 520
• a[4] and a[5] are compared and merged:
• 179, 285, 310 | 351, 652 | 423, 861, 254, 450, 520
• Merging a[1..3] with a[4..5]:
• 179, 285, 310, 351, 652 | 423, 861, 254, 450, 520
• 179, 285, 310, 351, 652 | 423 | 861 | 254 | 450, 520
• a[6] and a[7] are merged, then a[8] is merged with a[6..7]:
• 179, 285, 310, 351, 652 | 254, 423, 861 | 450, 520
• Next a[9] and a[10], then a[6..8] and a[9..10] are merged:
• 179, 285, 310, 351, 652 | 254, 423, 450, 520, 861
• Finally, merging the two halves gives the sorted array:
• 179, 254, 285, 310, 351, 423, 450, 520, 652, 861
Tree of calls (index ranges):
Level 0: (1,10)
Level 1: (1,5) (6,10)
Level 2: (1,3) (4,5) (6,8) (9,10)
Level 3: (1,2) (3,3) (4,4) (5,5) (6,7) (8,8) (9,9) (10,10)
Level 4: (1,1) (2,2) (6,6) (7,7)
Recurrences
• The expression
T(n) = c               if n = 1
T(n) = 2T(n/2) + cn    if n > 1
is a recurrence, and its solution is T(n) = O(n log n).
– Recurrence: an equation that describes a function in terms of its value on smaller inputs
Idea of Mergesort

 Divide: divide array A[0..n-1] in two about equal halves and make copies of each half in arrays B and C
 Conquer:
 If the number of elements in B and C is 1, directly solve it
 Sort arrays B and C recursively
 Combine: Merge sorted arrays B and C into array A
 Repeat the following until no elements remain in one of the arrays:
 compare the first elements in the remaining unprocessed portions of arrays B and C
 copy the smaller of the two into A, while incrementing the index indicating the unprocessed portion of that array
 Once all elements in one of the arrays are processed, copy the remaining unprocessed elements from the other array into A.
The Mergesort Algorithm
The Merge Algorithm
Mergesort Examples

• 8 3 2 9 7 2 5 4
• 7 2 1 6 4
• Would you like to play a game?
  – Real-people animation of mergesort
  – Volunteers are needed!
  – Try to answer the following questions:
    • How many key comparisons are needed?
    • How much extra memory is needed?
    • Is the algorithm an in-place algorithm?
    • Is the algorithm stable?
8 3 2 9 7 1 5 4

8 3 2 9 | 7 1 5 4

8 3 | 2 9 | 7 1 | 5 4

8 | 3 | 2 | 9 | 7 | 1 | 5 | 4

3 8 | 2 9 | 1 7 | 4 5

2 3 8 9 | 1 4 5 7

1 2 3 4 5 7 8 9
Analysis of Mergesort

• Number of basic operations (key comparisons):
  – T(n) = 2T(n/2) + Θ(n)
• Worst case and best case
  – T(n) = 2T(n/2) + n - 1
  – T(n) = 2T(n/2) + n/2
• All cases: Θ(n log n)
• Space complexity: Θ(n), not in-place
• Stable
Merge Sort
MergeSort(A, left, right) {
if (left < right) {
mid = floor((left + right) / 2);
MergeSort(A, left, mid);
MergeSort(A, mid+1, right);
Merge(A, left, mid, right);
}
}

// Merge() takes two sorted subarrays of A and


// merges them into a single sorted subarray of A
// (how long should this take?)
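The Merge step takes Θ(n) time. A runnable Python sketch of the pseudocode above (function names are illustrative), checked against the slides' 10-element example:

```python
def merge(A, left, mid, right):
    """Merge the sorted runs A[left..mid] and A[mid+1..right] in Theta(n) time."""
    B, C = A[left:mid + 1], A[mid + 1:right + 1]   # copies of the two runs
    i = j = 0
    k = left
    while i < len(B) and j < len(C):
        if B[i] <= C[j]:                 # <= keeps the sort stable
            A[k] = B[i]
            i += 1
        else:
            A[k] = C[j]
            j += 1
        k += 1
    for x in B[i:] + C[j:]:              # copy whichever run still has elements
        A[k] = x
        k += 1

def merge_sort(A, left=0, right=None):
    if right is None:
        right = len(A) - 1
    if left < right:
        mid = (left + right) // 2        # floor((left + right) / 2)
        merge_sort(A, left, mid)
        merge_sort(A, mid + 1, right)
        merge(A, left, mid, right)

data = [310, 285, 179, 652, 351, 423, 861, 254, 450, 520]
merge_sort(data)
print(data)  # [179, 254, 285, 310, 351, 423, 450, 520, 652, 861]
```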
Merge Sort: Example

• Show MergeSort() running on the array

A = {10, 5, 7, 6, 1, 4, 8, 3, 2, 9};
Analysis of Merge Sort

Statement                             Effort
MergeSort(A, left, right) {           T(n)
  if (left < right) {                 Θ(1)
    mid = floor((left + right) / 2);  Θ(1)
    MergeSort(A, left, mid);          T(n/2)
    MergeSort(A, mid+1, right);       T(n/2)
    Merge(A, left, mid, right);       Θ(n)
  }
}

• So T(n) = Θ(1) when n = 1, and
  2T(n/2) + Θ(n) when n > 1
• So what (more succinctly) is T(n)?
Recurrence Examples

      s(n) = 0              if n = 0        s(n) = 0              if n = 0
      s(n) = c + s(n-1)     if n > 0        s(n) = n + s(n-1)     if n > 0

      T(n) = c              if n = 1        T(n) = c              if n = 1
      T(n) = 2T(n/2) + c    if n > 1        T(n) = aT(n/b) + cn   if n > 1
The Divide, Conquer and Combine
Steps in Quicksort
• Divide: Partition array A[l..r] into 2 subarrays, A[l..s-1]
and A[s+1..r] such that each element of the first array is
≤A[s] and each element of the second array is ≥ A[s].
(computing the index of s is part of partition.)
– Implication: A[s] will be in its final position in the sorted array.
• Conquer: Sort the two subarrays A[l..s-1] and A[s+1..r]
by recursive calls to quicksort
• Combine: No work is needed, because A[s] is already in
its correct place after the partition is done, and the two
subarrays have been sorted.
Quicksort
• Select a pivot w.r.t. whose value we are going to
  divide the list (typically, p = A[l])
• Rearrange the list so that all the elements in the first
  s positions are smaller than or equal to the pivot and
  all the elements in the remaining n-s positions are
  larger than or equal to the pivot

      A[i] ≤ p  |  p  |  A[i] ≥ p

• Exchange the pivot with the last element in the first
  sublist (i.e., the ≤ sublist) – the pivot is now in its final
  position
The Quicksort Algorithm

ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: A subarray A[l..r] of A[0..n-1], defined by its
//       left and right indices l and r
//Output: The subarray A[l..r] sorted in
//        nondecreasing order
if l < r
    s ← Partition(A[l..r]) // s is a split position
    Quicksort(A[l..s-1])
    Quicksort(A[s+1..r])
Partitioning Algorithm

Can ‘=’ be removed from ‘≤’ and ‘≥’?

Number of comparisons: n or n + 1
Quicksort Example

15 22 13 27 22 10 20 25

• Try to answer the following questions:
  – How much extra memory is needed?
  – Is the algorithm an in-place algorithm?
  – Is the algorithm stable?
  – How many key comparisons are needed?

Should introduce a sentinel element with a large enough value
beyond A[0..n-1], that is, A[n] = ∞. Consider why?
Another Partitioning Algorithm

Algorithm Partition(A[l..r])

p = A[l]
PivotLoc = l
for i = l+1 to r do
    if A[i] < p then
        PivotLoc = PivotLoc + 1
        if PivotLoc <> i then
            Swap(A[PivotLoc], A[i])
        end if
    end if
end for
Swap(A[PivotLoc], A[l])  // move pivot into correct place
return PivotLoc

Number of comparisons: n-1 in all cases
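A runnable Python sketch of Quicksort using this second partitioning scheme, run on the example array from the previous slide (names are illustrative):

```python
def partition(A, l, r):
    """Second partitioning scheme from the slides: pivot p = A[l];
    exactly n-1 key comparisons in every case."""
    p = A[l]
    pivot_loc = l
    for i in range(l + 1, r + 1):
        if A[i] < p:
            pivot_loc += 1
            if pivot_loc != i:
                A[pivot_loc], A[i] = A[i], A[pivot_loc]
    A[pivot_loc], A[l] = A[l], A[pivot_loc]  # move pivot into its final place
    return pivot_loc

def quicksort(A, l=0, r=None):
    if r is None:
        r = len(A) - 1
    if l < r:
        s = partition(A, l, r)    # A[s] is now in its final position
        quicksort(A, l, s - 1)
        quicksort(A, s + 1, r)

data = [15, 22, 13, 27, 22, 10, 20, 25]
quicksort(data)
print(data)  # [10, 13, 15, 20, 22, 22, 25, 27]
```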
Efficiency of Quicksort
Based on whether the partitioning is balanced:

• Best case: split in the middle — Θ(n log n)
  – C(n) = 2C(n/2) + Θ(n)   // 2 subproblems of size n/2 each
• Worst case: sorted array! — Θ(n²)
  – C(n) = C(n-1) + Θ(n)    // 2 subproblems of size 0 and n-1 respectively
  – Is there any other possible array with worst-case comparisons?
• Average case: random arrays — Θ(n log n)

      A(N) = (N-1) + (1/N) Σ_{i=1..N} [A(i-1) + A(N-i)],   A(0) = A(1) = 0

  (by the second partition algorithm)
Improvements

• Improvements:
– better pivot selection: median-of-three
partitioning
– switch to insertion sort on small subfiles
– elimination of recursion
These combine to 20-25% improvement
Analyzing Quicksort

• In the worst case:
  T(1) = Θ(1)
  T(n) = T(n - 1) + Θ(n)
• Works out to
  T(n) = Θ(n²)
Analyzing Quicksort

• In the best case:
  T(n) = 2T(n/2) + Θ(n)
• What does this work out to?
  T(n) = Θ(n lg n)
Improving Quicksort

• The real liability of quicksort is that it runs


in O(n2) on already-sorted input
• Book discusses two solutions:
– Randomize the input array, OR
– Pick a random pivot element
• How will these solve the problem?
– By ensuring that no particular input can be
  chosen to make quicksort run in O(n2) time

2 Knapsack Problems
1. 0-1 Knapsack Problem:
A thief robbing a store finds n items.
ith item: worth vi dollars
wi pounds
W, wi, vi are integers.
He can carry at most W pounds.

Which items
should I take?
2 Knapsack Problems

2. Fractional Knapsack Problem:


A thief robbing a store finds n items.
ith item: worth vi dollars
wi pounds
W, wi, vi are integers.
He can carry at most W pounds.
He can take fractions of items.
2 Knapsack Problems
Dynamic Programming Solution

Both problems exhibit the optimal-substructure property:

Consider the most valuable load that weighs at most W pounds.

If the jth item is removed from his load, the remaining load must be
the most valuable load weighing at most W - wj that he can take from
the n-1 original items excluding j.

=> Can be solved by dynamic programming
2 Knapsack Problems

Dynamic Programming Solution

Example: 0-1 Knapsack Problem


Suppose there are n=100 ingots:
30 Gold ingots: each $10000, 8 pounds (most expensive)
20 Silver ingots: each $2000, 3 pound per piece
50 Copper ingots: each $500, 5 pound per piece

Then, the most valuable load to fill W pounds
= the most valuable way among the following:

(1) take 1 gold ingot + the most valuable way to fill W-8 pounds from 29
gold ingots, 20 silver ingots and 50 copper ingots

(2) take 1 silver ingot + the most valuable way to fill W-3 pounds from 30
gold ingots, 19 silver ingots and 50 copper ingots

(3) take 1 copper ingot + the most valuable way to fill W-5 pounds from 30
gold ingots, 20 silver ingots and 49 copper ingots
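This recurrence can be tabulated bottom-up. A minimal Python sketch with the ingot data from the slides (the capacity W = 20 lb below is an illustrative assumption):

```python
def knapsack_01(values, weights, W):
    """Bottom-up 0-1 knapsack: dp[w] = best value achievable with capacity w."""
    dp = [0] * (W + 1)
    for v, wt in zip(values, weights):
        for w in range(W, wt - 1, -1):   # downward: each item used at most once
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[W]

# ingots from the slides: gold $10000 / 8 lb, silver $2000 / 3 lb,
# copper $500 / 5 lb; the capacity W = 20 lb is an illustrative assumption
values = [10000] * 30 + [2000] * 20 + [500] * 50
weights = [8] * 30 + [3] * 20 + [5] * 50
print(knapsack_01(values, weights, 20))  # 22000 (2 gold + 1 silver)
```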
2 Knapsack Problems

Dynamic Programming Solution

Example: Fractional Knapsack Problem

Suppose there are totally n = 100 pounds of metal dust:


30 pounds Gold dust: each pound $10000 (most expensive)
20 pounds Silver dust: each pound $2000
50 pounds Copper dust: each pound $500

Then, the most valuable way to fill a capacity of W pounds


= The most valuable way among the followings:

(1) take 1 pound of gold + the most valuable way to fill W-1 pounds from 29
pounds of gold, 20 pounds of silver, 50 pounds of copper

(2) take 1 pound of silver + the most valuable way to fill W-1 pounds from 30
pounds of gold, 19 pounds of silver, 50 pounds of copper

(3) take 1 pound copper + the most valuable way to fill W-1 pounds from 30
pounds of gold, 20 pounds of silver, 49 pounds of copper
2 Knapsack Problems

By Greedy Strategy
Both problems are similar. But Fractional Knapsack Problem
can be solved in a greedy strategy.

Step 1. Compute the value per pound for each item


Eg. gold dust: $10000 per pound (most expensive)
Silver dust: $2000 per pound
Copper dust: $500 per pound

Step 2. Take as much as possible of the most expensive


(ie. Gold dust)

Step 3. If the supply of that item is exhausted (ie. no more


gold) and he can still carry more, he takes as
much as possible of the item that is next most
expensive and so forth until he can’t carry any
more.
Knapsack Problems

By Greedy Strategy

We can solve the Fractional Knapsack


Problem by a greedy algorithm:
Always makes the choice that looks
best at the moment.
ie. A locally optimal Choice

To see why we can’t solve


0-1 Knapsack Problem by
greedy strategy, read Chp
16.2.

Greedy Algorithms
2 techniques for solving optimization problems:
1. Dynamic Programming
2. Greedy Algorithms (“Greedy Strategy”)

For optimization problems (Venn diagram): the problems a Greedy
Approach can solve are a subset of those Dynamic Programming
can solve.

For some optimization problems,


Dynamic Programming is “overkill”
Greedy Strategy is simpler and more efficient.

Greedy Algorithm Design

Steps of Greedy Algorithm Design:
1. Formulate the optimization problem in the form: we make a
   choice and we are left with one subproblem to solve.
   (Optimal Substructure Property)
2. Show that the greedy choice can lead to an optimal solution,
   so that the greedy choice is always safe.
   (Greedy-Choice Property)
3. Demonstrate that
   an optimal solution to the original problem
   = greedy choice + an optimal solution to the subproblem.
   (A good clue that a greedy strategy will solve the problem.)

Greedy Algorithm Design

Comparison:

Dynamic Programming                       Greedy Algorithms
-------------------                       -----------------
At each step, the choice is determined    At each step, we quickly make a
based on solutions of subproblems.        choice that currently looks best
                                          (a locally optimal, greedy choice).
Sub-problems are solved first.            Greedy choice can be made first,
                                          before solving further sub-problems.
Bottom-up approach                        Top-down approach
Can be slower, more complex               Usually faster, simpler

Huffman Codes

Huffman Codes
• For compressing data (sequence of characters)
• Widely used
• Very efficient (saving 20-90%)
• Use a table to keep frequencies of occurrence of
characters.
• Output binary string.

“Today’s weather is nice”  →  “001 0110 0 0 100 1000 1110”
Huffman Codes

Example: A file of 100,000 characters, containing only ‘a’ to ‘f’.

Char  Frequency  Fixed-length codeword  Variable-length codeword
‘a’   45000      000                    0
‘b’   13000      001                    101
‘c’   12000      010                    100
‘d’   16000      011                    111
‘e’   9000       100                    1101
‘f’   5000       101                    1100

eg. “abc” = “000001010”                  eg. “abc” = “0101100”

Fixed-length total: 300,000 bits
Variable-length total: 1*45000 + 3*13000 + 3*12000 +
3*16000 + 4*9000 + 4*5000 = 224,000 bits
Huffman Codes

A file of 100,000 characters. The coding schemes can be represented by trees
(frequencies in thousands, codewords as in the table above).

The fixed-length tree is not a full binary tree; the variable-length tree is
a full binary tree (every nonleaf node has 2 children).

[Tree diagrams: fixed-length tree with leaves a:45, b:13, c:12, d:16, e:9, f:5;
variable-length tree: root 100 with children a:45 and 55; 55 splits into
25 (b:13, c:12) and 30 (14 (e:9, f:5), d:16)]
Huffman Codes

To find an optimal code for a file:

1. The coding must be unambiguous.
   Consider codes in which no codeword is also a prefix of
   another codeword => Prefix Codes.
   Prefix Codes are unambiguous.
   Once the codewords are decided, it is easy to compress
   (encode) and decompress (decode).
2. File size must be smallest.
   => Can be represented by a full binary tree.
   => Usually less frequent characters are at the bottom.

Let C be the alphabet (eg. C = {‘a’,’b’,’c’,’d’,’e’,’f’}).
For each character c, the number of bits to encode all of c’s
occurrences = freq_c * depth_c.
File size B(T) = Σ_{c∈C} freq_c * depth_c.

Eg. “abc” is coded as “0101100”.

[Huffman tree and codeword table as on the previous slide]
Huffman Codes

How do we find the optimal prefix code?
Huffman code (1952) was invented to solve it. A Greedy Approach.

Q: a min-priority queue, initially  f:5  e:9  c:12  b:13  d:16  a:45

[Trace of the greedy merges:
 (f:5, e:9) → 14;  (c:12, b:13) → 25;  (14, d:16) → 30;
 (25, 30) → 55;  (a:45, 55) → 100]
Huffman Codes

HUFFMAN(C)
1  Build Q from C
2  for i = 1 to |C|-1
3      Allocate a new node z
4      z.left = x = EXTRACT_MIN(Q)
5      z.right = y = EXTRACT_MIN(Q)
6      z.freq = x.freq + y.freq
7      Insert z into Q in correct position.
8  return EXTRACT_MIN(Q)

If Q is implemented as a binary min-heap:
  “Build Q from C” is O(n)
  “EXTRACT_MIN(Q)” is O(lg n)
  “Insert z into Q” is O(lg n)
  Huffman(C) is O(n lg n)

How is it “greedy”?
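A compact Python sketch of HUFFMAN(C), using the standard-library heapq as the min-priority queue and checked against the a-f frequency table above (names are illustrative):

```python
import heapq
from itertools import count

def huffman(freqs):
    """Greedy Huffman: repeatedly EXTRACT_MIN twice and merge the two
    least-frequent trees.  freqs: dict mapping character -> frequency."""
    tick = count()                       # tie-breaker so heapq never compares trees
    heap = [(f, next(tick), c) for c, f in freqs.items()]
    heapq.heapify(heap)                  # "Build Q from C": O(n)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # EXTRACT_MIN: O(lg n)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tick), (left, right)))
    codes = {}
    def walk(node, code):
        if isinstance(node, tuple):      # internal node: (left, right)
            walk(node[0], code + "0")
            walk(node[1], code + "1")
        else:                            # leaf: a character
            codes[node] = code or "0"
    walk(heap[0][2], "")
    return codes

freqs = {'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}
codes = huffman(freqs)
print(sum(freqs[c] * len(codes[c]) for c in freqs))  # 224 (thousand bits)
```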

Greedy Algorithms
Summary
Casual Introduction: Two Knapsack Problems
An Activity-Selection Problem
Greedy Algorithm Design
Steps of Greedy Algorithm Design
Optimal Substructure Property
Greedy-Choice Property
Comparison with Dynamic Programming
Huffman Codes

Text Compression (§ 11.4)


• Given a string X, efficiently encode X into a
smaller string Y
– Saves memory and/or bandwidth
• A good approach: Huffman encoding
– Compute frequency f(c) for each character c.
– Encode high-frequency characters with short code
words
– No code word is a prefix for another code
– Use an optimal encoding tree to determine the code
words

Greedy Method and Compression 168


Encoding Tree Example

• A code is a mapping of each character of an alphabet to a binary
  code-word
• A prefix code is a binary code such that no code-word is the prefix of
  another code-word
• An encoding tree represents a prefix code
  – Each external node stores a character
  – The code word of a character is given by the path from the root to the
    external node storing the character (0 for a left child and 1 for a right
    child)

  Codewords:  a = 00, b = 010, c = 011, d = 10, e = 11

[Encoding tree with external nodes a, b, c, d, e]
Encoding Tree Optimization

• Given a text string X, we want to find a prefix code for the characters
  of X that yields a small encoding for X
  – Frequent characters should have short code-words
  – Rare characters should have long code-words
• Example
  – X = abracadabra
  – T1 encodes X into 29 bits
  – T2 encodes X into 24 bits

[Encoding trees T1 (leaves c, d, b, a, r) and T2 (leaves a, b, r, c, d)]
Huffman’s Algorithm

• Given a string X, Huffman’s algorithm constructs a prefix code that
  minimizes the size of the encoding of X
• It runs in time O(n + d log d), where n is the size of X and d is the
  number of distinct characters of X
• A heap-based priority queue is used as an auxiliary structure

Algorithm HuffmanEncoding(X)
  Input: string X of size n
  Output: optimal encoding trie for X
  C ← distinctCharacters(X)
  computeFrequencies(C, X)
  Q ← new empty heap
  for all c ∈ C
      T ← new single-node tree storing c
      Q.insert(getFrequency(c), T)
  while Q.size() > 1
      f1 ← Q.minKey()
      T1 ← Q.removeMin()
      f2 ← Q.minKey()
      T2 ← Q.removeMin()
      T ← join(T1, T2)
      Q.insert(f1 + f2, T)
  return Q.removeMin()
Example

X = abracadabra

Frequencies:  a:5  b:2  c:1  d:1  r:2

Greedy merges:  (c:1, d:1) → 2;  (b:2, r:2) → 4;  (2, 4) → 6;  (a:5, 6) → 11

[Final tree: root 11 with children a:5 and 6; 6 splits into 2 (c, d) and 4 (b, r)]

Extended Huffman Tree Example

The Fractional Knapsack Problem (not in book)


• Given: A set S of n items, with each item i having
– bi - a positive benefit
– wi - a positive weight
• Goal: Choose items with maximum total benefit but with
weight at most W.
• If we are allowed to take fractional amounts, then this is
the fractional knapsack problem.
– In this case, we let xi denote the amount we take of item i

  – Objective: maximize Σi∈S bi(xi/wi)
  – Constraint: Σi∈S xi ≤ W

Example

• Given: A set S of n items, with each item i having
  – bi - a positive benefit
  – wi - a positive weight
• Goal: Choose items with maximum total benefit but with
  weight at most W (the “knapsack”, 10 ml here)

Items:             1     2     3     4     5
Weight:            4 ml  8 ml  2 ml  6 ml  1 ml
Benefit:           $12   $32   $40   $30   $50
Value ($ per ml):  3     4     20    5     50

Solution:
• 1 ml of item 5
• 2 ml of item 3
• 6 ml of item 4
• 1 ml of item 2
The Fractional Knapsack Algorithm
• Greedy choice: keep taking the item with highest value
  (benefit-to-weight ratio)
  – Since Σi∈S bi(xi/wi) = Σi∈S (bi/wi) xi
  – Run time: O(n log n). Why?
• Correctness: suppose there is a better solution
  – Then there is an item i with higher value than a chosen
    item j (vi > vj), but xi < wi and xj > 0
  – If we substitute some of j with i, we get a better solution
  – How much of i: min{wi - xi, xj}
  – Thus, there is no better solution than the greedy one

Algorithm fractionalKnapsack(S, W)
  Input: set S of items w/ benefit bi and weight wi; max. weight W
  Output: amount xi of each item i to maximize benefit
          w/ weight at most W
  for each item i in S
      xi ← 0
      vi ← bi / wi        {value}
  w ← 0                   {total weight}
  while w < W
      remove item i w/ highest vi
      xi ← min{wi, W - w}
      w ← w + min{wi, W - w}
Task Scheduling (not in book)
• Given: a set T of n tasks, each having:
  – A start time, si
  – A finish time, fi (where si < fi)
• Goal: Perform all the tasks using a minimum number of
  “machines.”

[Gantt chart: tasks assigned to Machines 1-3 over time slots 1-9]


Task Scheduling Algorithm
• Greedy choice: consider tasks by their start time and use as
  few machines as possible with this order.
  – Run time: O(n log n). Why?
• Correctness: suppose there is a better schedule.
  – We can use k-1 machines; the algorithm uses k
  – Let i be the first task scheduled on machine k
  – Task i must conflict with k-1 other tasks
  – But that means there is no non-conflicting schedule using
    k-1 machines

Algorithm taskSchedule(T)
  Input: set T of tasks w/ start time si and finish time fi
  Output: non-conflicting schedule with minimum number of
          machines
  m ← 0                  {no. of machines}
  while T is not empty
      remove task i w/ smallest si
      if there’s a machine j for i then
          schedule i on machine j
      else
          m ← m + 1
          schedule i on machine m
Example

• Given: a set T of n tasks, each having:


– A start time, si
– A finish time, fi (where si < fi)
– [1,4], [1,3], [2,5], [3,7], [4,7], [6,9], [7,8] (ordered by start)
• Goal: Perform all tasks on min. number of machines

Machine 3
Machine 2
Machine 1

1 2 3 4 5 6 7 8 9

Greedy Method and Compression 179



Minimum-Cost Spanning Tree

• weighted connected undirected graph


• spanning tree
• cost of spanning tree is sum of edge
costs
• find spanning tree that has minimum
cost
Example

[Weighted undirected graph on vertices 1-8 with 10 edges;
edge costs: 2, 3, 4, 6, 7, 8, 9, 10, 12, 14]

• Network has 10 edges.
• Spanning tree has only n - 1 = 7 edges.
• Need to either select 7 edges or discard 3.
Edge Selection Greedy Strategies

• Start with an n-vertex, 0-edge forest. Consider edges in
  ascending order of cost. Select an edge if it does not form a
  cycle together with already selected edges.
  => Kruskal’s method.
• Start with a 1-vertex tree and grow it into an n-vertex tree by
  repeatedly adding a vertex and an edge. When there is a
  choice, add a least-cost edge.
  => Prim’s method.
Edge Selection Greedy Strategies

• Start with an n-vertex forest. Each component/tree selects a
  least-cost edge to connect to another component/tree.
  Eliminate duplicate selections and possible cycles. Repeat
  until only 1 component/tree is left.
  => Sollin’s method.
Edge Rejection Greedy Strategies

• Start with the connected graph. Repeatedly find a cycle and
  eliminate the highest-cost edge on this cycle. Stop when no
  cycles remain.
• Consider edges in descending order of cost. Eliminate an
  edge provided this leaves behind a connected graph.
Kruskal’s Method

[Graph and partial-forest diagrams omitted]

• Start with a forest that has no edges.
• Consider edges in ascending order of cost.
• Edge (1,2) is considered first and added to the forest.
• Edge (7,8) is considered next and added.
• Edge (3,4) is considered next and added.
• Edge (5,6) is considered next and added.
• Edge (2,3) is considered next and added.
• Edge (1,3) is considered next and rejected because it creates a cycle.
• Edge (2,4) is considered next and rejected because it creates a cycle.
• Edge (3,5) is considered next and added.
• Edge (3,6) is considered next and rejected.
• Edge (5,7) is considered next and added.
• n - 1 edges have been selected and no cycle formed.
• So we must have a spanning tree.
• Cost is 46.
• The min-cost spanning tree is unique when all edge costs
  are different.
Prim’s Method

• Start with any single-vertex tree.
• Get a 2-vertex tree by adding a cheapest edge.
• Get a 3-vertex tree by adding a cheapest edge.
• Grow the tree one edge at a time until the tree has n - 1 edges
  (and hence has all n vertices).
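A heap-based Python sketch of Prim's method (the 4-vertex graph below is a hypothetical example, not the slides' 8-vertex network):

```python
import heapq

def prim_mst_cost(graph, start=1):
    """graph: dict vertex -> list of (cost, neighbor).  Grows the tree one
    cheapest crossing edge at a time; returns the total MST cost."""
    in_tree = {start}
    frontier = list(graph[start])        # edges leaving the current tree
    heapq.heapify(frontier)
    cost = 0
    while frontier and len(in_tree) < len(graph):
        c, v = heapq.heappop(frontier)   # cheapest edge leaving the tree
        if v in in_tree:
            continue                     # would form a cycle; skip it
        in_tree.add(v)
        cost += c
        for edge in graph[v]:
            if edge[1] not in in_tree:
                heapq.heappush(frontier, edge)
    return cost

# hypothetical graph: edges 1-2 (cost 2), 2-3 (cost 1), 3-4 (cost 3), 1-4 (cost 4)
graph = {
    1: [(2, 2), (4, 4)],
    2: [(2, 1), (1, 3)],
    3: [(1, 2), (3, 4)],
    4: [(4, 1), (3, 3)],
}
print(prim_mst_cost(graph))  # 6
```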
Sollin’s Method

• Start with a forest that has no edges.
• Each component selects a least-cost edge with which to
  connect to another component.
• Duplicate selections are eliminated.
• Cycles are possible when the graph has some edges that
  have the same cost.
• Each component that remains selects a least-cost edge with
  which to connect to another component.
• Beware of duplicate selections and cycles.
Greedy Minimum-Cost Spanning Tree Methods

• Can prove that all result in a minimum-cost spanning tree.
• Prim’s method is fastest:
  – O(n2) using an implementation similar to that of
    Dijkstra’s shortest-path algorithm.
  – O(e + n log n) using a Fibonacci heap.
• Kruskal’s uses union-find trees to run in
  O(n + e log e) time.
Pseudocode For Kruskal’s Method
Start with an empty set T of edges.


while (E is not empty && |T| != n-1)
{
Let (u,v) be a least-cost edge in E.
E = E - {(u,v)}. // delete edge from E
if ((u,v) does not create a cycle in T)
Add edge (u,v) to T.
}
if (| T | == n-1) T is a min-cost spanning tree.
else Network has no spanning tree.
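The pseudocode above can be sketched in Python, with a union-find structure for the cycle test (the 4-vertex edge list is a hypothetical example):

```python
def kruskal(n, edges):
    """edges: list of (cost, u, v) over vertices 0..n-1.
    Returns (mst_edges, total_cost), or None if the graph is disconnected."""
    parent = list(range(n))
    def find(x):                     # union-find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    T, cost = [], 0
    for c, u, v in sorted(edges):    # ascending order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                 # (u,v) does not create a cycle in T
            parent[ru] = rv
            T.append((u, v))
            cost += c
            if len(T) == n - 1:
                break
    return (T, cost) if len(T) == n - 1 else None

# hypothetical 4-vertex network: (cost, u, v)
edges = [(2, 0, 1), (1, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]
result = kruskal(4, edges)
print(result[1])  # 6
```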

Backtracking
General Concepts

• Algorithm strategy
– Approach to solving a problem
– May combine several approaches
• Algorithm structure
  – Iterative → execute action in a loop
  – Recursive → reapply action to subproblem(s)
• Problem type
  – Satisfying → find any satisfactory solution
  – Optimization → find best solution (w.r.t. a cost metric)
A short list of categories

• Many algorithm types are to be considered:
– Simple recursive algorithms
– Backtracking algorithms
– Divide and conquer algorithms
– Dynamic programming algorithms
– Greedy algorithms
– Branch and bound algorithms
– Brute force algorithms
– Randomized algorithms
Backtracking
• Suppose you have to make a series of
decisions, among various choices, where
– You don’t have enough information to know what
to choose
– Each decision leads to a new set of choices
– Some sequence of choices (possibly more than
one) may be a solution to your problem
• Backtracking is a methodical way of trying out
various sequences of decisions, until you find
one that “works”
Backtracking Algorithm

• Based on depth-first recursive search


• Approach
1. Tests whether solution has been found
2. If found solution, return it
3. Else for each choice that can be made
a) Make that choice
b) Recur
c) If recursion returns a solution, return it
4. If no choices remain, return failure
• Sometimes called a “search tree”
Backtracking Algorithm –
Example
• Find path through maze
– Start at beginning of maze
– If at exit, return true
– Else for each step from current location
• Recursively find path
• Return with first successful step
• Return false if all steps fail
Backtracking Algorithm –
Example
• Color a map with no more than four colors
– If all countries have been colored return
success
– Else for each color c of four colors and
country n
• If country n is not adjacent to a country that has
been colored c
– Color country n with color c
– Recursively color country n+1
– If successful, return success
– Return failure
Backtracking

[State-space diagram: from the Start node some paths lead to
Success, others to Failure]

Problem space consists of states (nodes) and actions (paths that
lead to new states). When in a node one can only see paths to
connected nodes.

If a node only leads to failure, go back to its "parent" node and
try other alternatives. If these all lead to failure then more
backtracking may be necessary.
Recursive Backtracking

Pseudo code for recursive backtracking


algorithms

If at a solution, return success


for( every possible choice from current
state / node)
Make that choice and take one step along path
Use recursion to solve the problem for the new node /
state
If the recursive call succeeds, report the success to
the next high level
Back out of the current choice to restore the state at
the beginning of the loop.
Report failure
Backtracking

• Construct the state space tree:


– Root represents an initial state
– Nodes reflect specific choices made for a
solution’s components.
• Promising and nonpromising nodes
• leaves

• Explore the state space tree using depth-first search

• “Prune” non-promising nodes


– dfs stops exploring subtree rooted at nodes
leading to no solutions and...
– “backtracks” to its parent node

Example: The n-Queen problem

• Place n queens on an n by n chess


board so that no two of them are on
the same row, column, or diagonal
State Space Tree of the Four-queens Problem
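The n-queens search can be sketched as a recursive backtracking function over this state-space tree (names are illustrative):

```python
def solve_queens(n, placed=()):
    """Place queens row by row; placed[r] is the column of the queen in row r.
    Returns a full placement tuple or None; non-promising partial placements
    (two queens attacking) are pruned before they are expanded."""
    row = len(placed)
    if row == n:
        return placed                      # goal node: all queens placed
    for col in range(n):                   # children of the current node
        safe = all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(placed))   # column / diagonal check
        if safe:
            result = solve_queens(n, placed + (col,))
            if result is not None:
                return result              # success propagates upward
    return None                            # dead end: backtrack

print(solve_queens(4))  # (1, 3, 0, 2)
```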
The backtracking algorithm

• Backtracking is really quite simple--we


“explore” each node, as follows:
• To “explore” node N:
1. If N is a goal node, return “success”
2. If N is a leaf node, return “failure”
3. For each child C of N,
3.1. Explore C
3.1.1. If C was successful, return “success”
4. Return “failure”

The m-Coloring Problem and Hamiltonian Problem

• 2-color
• 3-color
• Hamiltonian Circuit (use alphabet order to break the ties)

[Example graph with vertices a, b, c, d, e]
Coloring a map

• You wish to color a map with


not more than four colors
– red, yellow, green, blue
• Adjacent countries must be in
different colors
• You don’t have enough information to choose
colors
• Each choice leads to another set of choices
• One or more sequences of choices may (or
may not) lead to a solution
• Many coloring problems can be solved with
backtracking
Other Backtracking Problems
• 8 Queens *
• Knight's Tour
• Knapsack problem / Exhaustive Search
  – Filling a knapsack: given a choice of items with various
    weights and a limited carrying capacity, find the optimal
    load-out. 50 lb. knapsack; items are one 40 lb, one 32 lb,
    two 22 lb, one 15 lb, one 5 lb. A greedy algorithm would
    choose the 40 lb item first, then the 5 lb: load-out = 45 lb.
    Exhaustive search: 22 + 22 + 5 = 49.
Graph Coloring

• Graph coloring is the problem of coloring


each vertex in a graph such that no two
adjacent vertices are the same color
• Some direct examples:
– Map coloring
– Register assignment
Graph Coloring

• The same issues apply as in N-Queens


– We don’t want to simply pick all subsets
• Way too many
– We want to prune the state-space tree as soon as we
find something that won’t work
• This implies that we need a sequence of vertices to color
• As we color the next vertex we need to make sure it doesn’t
conflict with any of its previously colored neighbors
– We may need to backtrack
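A backtracking m-coloring sketch in Python: vertices are colored in sequence, each new color is checked only against previously colored neighbors, and a failed choice is undone. The 6-vertex graph below is a hypothetical example (the slides' A-F graph is not fully legible):

```python
def color_graph(adj, colors, order, assignment=None):
    """adj: dict vertex -> set of neighbors; colors tried in the given order;
    vertices colored in the sequence 'order'.  Returns a valid assignment
    dict, or None if no coloring exists with these colors."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(order):
        return assignment                    # every vertex colored: success
    v = order[len(assignment)]               # next vertex in the sequence
    for c in colors:
        # prune: c must not conflict with previously colored neighbors
        if all(assignment.get(u) != c for u in adj[v]):
            assignment[v] = c
            if color_graph(adj, colors, order, assignment) is not None:
                return assignment
            del assignment[v]                # backtrack: undo the choice
    return None                              # no color works: fail upward

# hypothetical 6-cycle; vertices enumerated A-F, colors in order R, G, B
adj = {'A': {'B', 'F'}, 'B': {'A', 'C'}, 'C': {'B', 'D'},
       'D': {'C', 'E'}, 'E': {'D', 'F'}, 'F': {'E', 'A'}}
result = color_graph(adj, "RGB", "ABCDEF")
print(result)
```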
Graph Coloring

• As an example:
  – The vertices are enumerated in order A-F
  – The colors are given in order: R, G, B

[Example graph with vertices A, B, C, D, E, F]
Depth-First Search

• One can view backtracking as a Depth-First


Search through the State-Space
– We start our search at the root
– We end our search when we find a valid leaf node
• A Depth-First Search expands the deepest
node next
– A child children will be expanded before the child’s
siblings are expanded
• This type of search get you to the leaf nodes as
fast as you can
• How do you implement a Depth-First Search?
Backtracking Framework

• We will try to build a generic backtracking framework
  – All it should need to work with is States
• State should be generic and not married to any specific problem
• States should be able to:
  – Tell us if it has more children
  – Produce the next child
  – Tell us if it is solved
  – Tell us if it is feasible
• (In-class design of the State, the NQueens-specific, and the
  Backtracking framework classes)
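A sketch of what such a framework might look like, with N-Queens as the concrete State. Class and method names here are our own illustration of the interface described above, not the in-class design.

```python
# Generic backtracking driver plus one concrete State implementation.
class NQueensState:
    def __init__(self, n, queens=()):
        self.n, self.queens = n, queens        # queens[i] = column of row i
    def is_feasible(self):
        if not self.queens:
            return True
        r, c = len(self.queens) - 1, self.queens[-1]
        # last queen must not share a column or diagonal with earlier queens
        return all(c != c2 and abs(c - c2) != r - r2
                   for r2, c2 in enumerate(self.queens[:-1]))
    def is_solved(self):
        return len(self.queens) == self.n
    def children(self):
        return [NQueensState(self.n, self.queens + (c,))
                for c in range(self.n)]

def backtrack(state):
    if not state.is_feasible():
        return None                            # prune this subtree
    if state.is_solved():
        return state
    for child in state.children():
        found = backtrack(child)
        if found:
            return found
    return None

print(backtrack(NQueensState(4)).queens)       # (1, 3, 0, 2)
```

Note that `backtrack` knows nothing about queens; swapping in a graph-coloring State requires no change to the driver.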
Depth vs. Breadth First Search

• A Breadth-first search expands all the children at a particular level
  before advancing to the next level
  – How can a breadth-first search be implemented?
  – Which is better: a depth- or breadth-first search?
    • Worst case Big-O?
Branch and Bound
Bounding

• A bound on a node is a guarantee that any solution obtained from
  expanding the node will be:
  – Greater than some number (lower bound)
  – Or less than some number (upper bound)
• If we are looking for a minimal optimal solution, as we are in weighted
  graph coloring, then we need a lower bound
  – For example, if the best solution we have found so far has a cost of
    12 and the lower bound on a node is 15, then there is no point in
    expanding the node
    • The node cannot lead to anything better than 15
Bounding

• We can compute a lower bound for weighted graph coloring in the
  following way:
  – The actual cost of getting to the node
  – Plus a bound on the future cost
    • Min color weight * number of nodes still to color
  – That is, the future cost cannot be any better than this
• (Interactively draw the state-space graph on the board, complete with
  bounds as given above)
Bounding

• Recall that we could either perform a depth-first or a breadth-first
  search
  – Without bounding, it didn’t matter which one we used because we had
    to expand the entire tree to find the optimal solution
  – Does it matter with bounding?
    • Hint: think about when you can prune via bounding
• (Interactively draw the breadth-first bounding case on the board and
  compare it with depth-first)
Bounding

• We prune (via bounding) when:
    (currentBestSolutionCost <= nodeBound)
• This tells us that we get more pruning if:
  – The currentBestSolution is low
  – And the nodeBound is high
• So we want to find a low solution quickly, and we want the highest
  possible lower bound
  – One has to factor in the extra computation cost of computing higher
    lower bounds vs. the expected pruning savings
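The pruning rule can be sketched generically. This is our own minimal illustration of a minimizing branch-and-bound driver (all callback names are assumptions); the toy instance picks one number from each list to minimize the total, with the bound built exactly as described above: actual cost so far plus the best possible future cost.

```python
# Minimizing branch and bound: expand a node only if its lower bound
# beats the best solution found so far.
def branch_and_bound(root, children, bound, cost, is_leaf):
    best_cost, best = float("inf"), None
    stack = [root]
    while stack:
        node = stack.pop()
        if bound(node) >= best_cost:
            continue                        # prune: cannot beat current best
        if is_leaf(node):
            if cost(node) < best_cost:
                best_cost, best = cost(node), node
        else:
            stack.extend(children(node))
    return best, best_cost

lists = [[4, 2], [5, 1], [3, 6]]
mins = [min(l) for l in lists]              # cheapest choice per list

def children(node):
    return [node + (x,) for x in lists[len(node)]]

def bound(node):                            # cost so far + best possible future
    return sum(node) + sum(mins[len(node):])

best, c = branch_and_bound((), children, bound, sum, lambda n: len(n) == 3)
print(best, c)                              # (2, 1, 3) 6
```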
Best-first Search

• Depth-first search found a solution quickly
• Breadth-first search found all solutions at about the same time (but
  slowly)
• Best-first search finds a good solution fairly quickly
  – Not as quickly as depth-first
  – But the solution is often better, which will increase the pruning
    via bounding
Best-first Search

• Best-first search expands the node with the best bound next
• (Interactively show a best-first state-space tree on weighted graph
  coloring)
• How would you implement a best-first search?
  – Depth-first is a stack
  – Breadth-first is a queue
  – Best-first is a ???
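The answer to the "???" is a priority queue keyed on each node's bound. A sketch (callback names are our own; the toy instance again picks one number from each list to minimize the sum):

```python
import heapq

# Best-first search: a heap ordered by bound replaces the stack/queue.
def best_first(root, children, bound, is_goal):
    heap = [(bound(root), root)]
    while heap:
        _, node = heapq.heappop(heap)      # node with the best bound next
        if is_goal(node):
            return node
        for child in children(node):
            heapq.heappush(heap, (bound(child), child))
    return None

lists = [[4, 2], [5, 1], [3, 6]]
mins = [min(l) for l in lists]

def children(node):
    return [node + (x,) for x in lists[len(node)]]

def bound(node):
    return sum(node) + sum(mins[len(node):])

print(best_first((), children, bound, lambda n: len(n) == 3))   # (2, 1, 3)
```

Because this bound never overestimates, the first goal node popped is optimal, so best-first reaches the minimum-sum leaf without expanding the whole tree.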
Traveling Salesperson Problem

• This is a classic CS problem
• Given a graph (cities) and weights on the edges (distances), find a
  minimum-weight tour of the cities
  – Start in a particular city
  – Visit all other cities (exactly once each)
  – Return to the starting city
• Cannot be done by brute force, as this has worst-case exponential (or
  worse) running time
  – So we will look to backtracking with pruning to make it run in a
    reasonable amount of time in most cases
Traveling Salesperson Problem

• We will build our state space by:
  – Having our children be all the potential cities we can go to next
  – Having the depth of the tree be equal to the number of cities in the
    graph
    • We need to visit each city exactly once
• So given a fully connected set of 5 nodes, we have the following state
  space (only partially completed)
Traveling Salesperson Problem

(Figure: partial state-space tree; each level branches over the cities
A, B, C, D, E)
Brute Force Approaches

• Techniques for finding optimal solutions to hard problems “relatively”
  quickly
• Backtracking
• Branch and Bound
• Combining efficient solutions with brute-force approaches
• Other issues
The Knapsack Problem

• Input
  – Capacity K
  – n items with weights wi and values vi
• Goal
  – Output a set of items S such that
    • the sum of weights of items in S is at most K
    • and the sum of values of items in S is maximized
Greedy Does Not Work

• What are possible greedy strategies?
  – Highest Density First
  – Highest Value First
  – Lowest Weight First
• Prove these are not optimal for the 0-1 knapsack problem.
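A quick way to see a counterexample is the 50 lb knapsack instance from the earlier slide, with value = weight (so Highest Value First and Highest Density First coincide). A sketch:

```python
from itertools import combinations

# 50 lb knapsack, items of 40, 32, 22, 22, 15, 5 lb; value = weight.
K, items = 50, [40, 32, 22, 22, 15, 5]

def greedy_highest_value(items, K):
    load = 0
    for w in sorted(items, reverse=True):   # biggest value (= weight) first
        if load + w <= K:
            load += w
    return load

def exhaustive(items, K):
    # try every subset and keep the best feasible one
    return max(sum(s) for r in range(len(items) + 1)
               for s in combinations(items, r) if sum(s) <= K)

print(greedy_highest_value(items, K))       # 45  (40 + 5)
print(exhaustive(items, K))                 # 49  (22 + 22 + 5)
```

The greedy load of 45 lb is strictly worse than the optimal 49 lb, which proves Highest Value First is not optimal for 0-1 knapsack.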
Branch and Bound

• The idea: set up a bounding function, which is used to compute a bound
  (for the value of the objective function) at a node on a state-space
  tree and determine whether the node is promising
  – Promising (the bound is better than the value of the best solution
    so far): expand beyond the node.
  – Nonpromising (the bound is no better than the value of the best
    solution so far): do not expand beyond the node (pruning the
    state-space tree).
Traveling Salesman Problem

• Construct the state-space tree:
  – A node stores a path of vertices in the graph.
  – A node that is not a leaf represents all the tours that start with
    the path stored at that node; each leaf represents a tour (or a
    nonpromising node).
  – Branch-and-bound: we need to determine a lower bound for each node
    • For example, to determine a lower bound for node [1, 2] means to
      determine a lower bound on the length of any tour that starts with
      edge 1–2.
  – Expand each promising node, and stop when all the promising nodes
    have been expanded. During this procedure, prune all the
    nonpromising nodes.
    • Promising node: the node’s lower bound is less than the current
      minimum tour length.
    • Nonpromising node: the node’s lower bound is NO less than the
      current minimum tour length.
Traveling Salesman Problem — Bounding Function 1

• Because a tour must leave every vertex exactly once, a lower bound b on
  the length of a tour is the sum of the minimum cost of leaving every
  vertex.
  – The lower bound on the cost of leaving vertex v1 is given by the
    minimum of all the nonzero entries in row 1 of the adjacency matrix.
  – …
  – The lower bound on the cost of leaving vertex vn is given by the
    minimum of all the nonzero entries in row n of the adjacency matrix.
• Note: this is not to say that there is a tour with this length. Rather,
  it says that there can be no shorter tour.
• Assume that the tour starts with v1.
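Bounding function 1 can be sketched directly off an adjacency matrix (the example weights below are our own, not from the slides; 0 marks the diagonal / no edge):

```python
# Lower bound 1: sum over vertices of the cheapest outgoing edge.
def leaving_bound(W):
    return sum(min(w for w in row if w != 0) for row in W)

W = [[0, 14, 4, 10],
     [14, 0, 7, 8],
     [4, 7, 0, 7],
     [10, 8, 7, 0]]
print(leaving_bound(W))   # 4 + 7 + 4 + 7 = 22
```

Every tour in this graph has length at least 22, though no tour need actually achieve it.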
Traveling Salesman Problem — Bounding Function 2

• Because every vertex must be entered and exited exactly once, a lower
  bound on the length of a tour is the sum of the minimum cost of
  entering and leaving every vertex.
  – For a given edge (u, v), think of half of its weight as the exiting
    cost of u, and half of its weight as the entering cost of v.
  – The total length of a tour = the total cost of visiting (entering
    and exiting) every vertex exactly once.
  – The lower bound on the length of a tour = the lower bound on the
    total cost of visiting (entering and exiting) every vertex exactly
    once.
• Calculation:
  – For each vertex, pick the two shortest adjacent edges (their sum
    divided by 2 is the lower bound on the total cost of entering and
    exiting the vertex);
  – add up these sums over all the vertices.
• Assume that the tour starts with vertex a and that b is visited before
  c.
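The calculation above can be sketched on the same illustrative matrix used for bounding function 1 (weights are our own example data):

```python
# Lower bound 2: for each vertex, half the sum of its two cheapest
# incident edges; add these up over all vertices.
def enter_leave_bound(W):
    total = 0.0
    for i, row in enumerate(W):
        edges = sorted(w for j, w in enumerate(row) if j != i)
        total += (edges[0] + edges[1]) / 2
    return total

W = [[0, 14, 4, 10],
     [14, 0, 7, 8],
     [4, 7, 0, 7],
     [10, 8, 7, 0]]
print(enter_leave_bound(W))   # 7 + 7.5 + 5.5 + 7.5 = 27.5
```

On this instance bound 2 (27.5) is tighter than bound 1 (22), so it prunes more; the shortest tour here has length 29, consistent with both bounds.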
Traveling salesman example
Details to fill in

• Important “details” in a backtracking algorithm:
  – What is a configuration (choices and subproblems)?
  – How do you select the most “promising” configuration from F?
    (ordering the search)
    • Traditional backtracking uses LIFO (stack), i.e. depth-first
      search; one could use FIFO (queue), i.e. breadth-first search, or
      some more clever heuristic
  – How do you extend a configuration into subproblem configurations?
  – How do you recognize a dead end or a solution?
Exercises: Backtracking

• Continue the backtracking search for a solution to the four-queens
  problem to find the second solution to the problem.
  – A trick to use: the board is symmetric, so you can obtain another
    solution by reflection.
• Get a solution to the 5-queens problem found by the backtracking
  algorithm.
• Can you (quickly) find at least 3 other solutions?
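The exercises can be checked with a small backtracking enumerator (a sketch in our own notation, where `queens[i]` is the column of the queen in row i):

```python
# Enumerate all n-queens solutions by backtracking, row by row.
def queens(n):
    def place(cols):
        row = len(cols)
        if row == n:
            yield tuple(cols)
            return
        for c in range(n):
            # prune: new queen must not share a column or diagonal
            if all(c != c2 and abs(c - c2) != row - r2
                   for r2, c2 in enumerate(cols)):
                yield from place(cols + [c])
    return list(place([]))

print(queens(4))        # [(1, 3, 0, 2), (2, 0, 3, 1)]
print(len(queens(5)))   # 10
```

As the first exercise predicts, the second 4-queens solution (2, 0, 3, 1) is the mirror reflection of the first; the 5-queens board has 10 solutions in total.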
Dynamic Programming

• Steps:
  – View the problem solution as the result of a sequence of decisions.
  – Obtain a formulation for the problem state.
  – Verify that the principle of optimality holds.
  – Set up the dynamic programming recurrence equations.
  – Solve these equations for the value of the optimal solution.
  – Perform a traceback to determine the optimal solution.
Dynamic Programming

• When solving the dynamic programming recurrence recursively, be sure
  to avoid recomputing the optimal value for the same problem state.
• To minimize run-time overheads, and hence to reduce actual run time,
  dynamic programming recurrences are almost always solved iteratively
  (no recursion).
0/1 Knapsack Recurrence

• If wn <= y, f(n, y) = pn.
• If wn > y, f(n, y) = 0.
• When i < n:
  – f(i, y) = f(i+1, y) whenever y < wi.
  – f(i, y) = max{f(i+1, y), f(i+1, y - wi) + pi} when y >= wi.
• Assume the weights and capacity are integers.
• Only the f(i, y) with 1 <= i <= n and 0 <= y <= c are of interest.
Iterative Solution Example

• n = 5, c = 8, w = [4, 3, 5, 6, 2], p = [9, 7, 10, 9, 3]
• (Figure: the f[i][y] table, rows i = 5 down to 1, columns y = 0..8)
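The iterative table fill for this instance can be sketched as below; the loop mirrors the recurrence on the previous slide, filling rows from i = n down to 1 with f[n+1][y] = 0 as the seed (which reproduces the base case f(n, y)).

```python
# Iterative 0/1 knapsack: f[i][y] = best profit using items i..n
# with remaining capacity y.
def knapsack(w, p, c):
    n = len(w)
    f = [[0] * (c + 1) for _ in range(n + 2)]   # row n+1 is all zeros
    for i in range(n, 0, -1):
        for y in range(c + 1):
            f[i][y] = f[i + 1][y]               # skip item i
            if w[i - 1] <= y:                   # or take item i
                f[i][y] = max(f[i][y], f[i + 1][y - w[i - 1]] + p[i - 1])
    return f[1][c]

print(knapsack([4, 3, 5, 6, 2], [9, 7, 10, 9, 3], 8))   # 17
```

For this instance the optimum is 17, achieved by items 2 and 3 (weights 3 + 5 = 8, profits 7 + 10 = 17).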
Find Best Multiplication Order

• The number of ways to compute the product of q matrices is
  O(4^q / q^1.5).
• Evaluating all ways to compute the product takes O(4^q / q^0.5) time.
An Application

• Registration of pre- and post-operative 3D brain MRI images to
  determine the volume of removed tumor.
3D Registration

• Each image has 256 x 256 x 256 voxels.
• In each iteration of the registration algorithm, the product of three
  matrices is computed at each voxel: (12 x 3) * (3 x 3) * (3 x 1).
• Left-to-right computation => 12 * 3 * 3 + 12 * 3 * 1 = 144
  multiplications per voxel per iteration.
• 100 iterations to converge.
3D Registration

• Total number of multiplications is about 2.4 * 10^11.
• Right-to-left computation => 3 * 3 * 1 + 12 * 3 * 1 = 45
  multiplications per voxel per iteration.
• Total number of multiplications is about 7.5 * 10^10.
• At 10^8 multiplications per second, the time is 40 min vs. 12.5 min.
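The arithmetic on these two slides checks out directly (assuming 256^3 voxels, 100 iterations, and 10^8 multiplications per second, as stated):

```python
# Verify the multiplication counts and run times for both parenthesizations.
voxels, iters, rate = 256 ** 3, 100, 10 ** 8

left_to_right = 12 * 3 * 3 + 12 * 3 * 1    # ((12x3)(3x3))(3x1): 144 per voxel
right_to_left = 3 * 3 * 1 + 12 * 3 * 1     # (12x3)((3x3)(3x1)): 45 per voxel

for per_voxel in (left_to_right, right_to_left):
    total = per_voxel * voxels * iters
    print(per_voxel, total, round(total / rate / 60, 1), "minutes")
```

The totals come out to about 2.4 * 10^11 and 7.5 * 10^10 multiplications, i.e. roughly 40 vs. 12.6 minutes, matching the slide.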
