Unit III

An algorithm is a well-defined computational procedure that transforms input values into output values through a sequence of steps. It must be unambiguous, have defined inputs and outputs, and guarantee termination with correct results. Algorithms are essential in computer science for solving various problems efficiently, and their analysis involves evaluating time and space complexity.

Algorithm

• An Algorithm is any well-defined computational procedure that


takes some value, or set of values, as input and produces some
value, or set of values as output.
• An algorithm is thus a sequence of computational steps that
transform the input into the output.
• We can also view an algorithm as a tool for solving a well-
specified computational problem.
• The word "algorithm" derives from al-Khwarizmi, whose "algorism"
referred to the rules of performing arithmetic using Hindu–Arabic
numerals and the systematic solution of linear and quadratic equations.
• An algorithm is said to be correct if, for every input instance, it
halts with the correct output.
• Algorithms are the threads that tie together most of the subfields
of computer science.
• Calling something an algorithm means that the following
properties are all true:
– An algorithm is an unambiguous description
– An algorithm expects a defined set of inputs.
– An algorithm produces a defined set of outputs.
– An algorithm is guaranteed to terminate and produce a result, always
stopping after a finite time.
– If an algorithm imposes a requirement on its inputs (called
a precondition), that requirement must be met.
Kinds of problems solved by
algorithms
• Finding good routes on which the data will travel in internet
enabled world.
• Search engine to quickly find pages on which particular
information resides.
• We are given two ordered sequences of symbols, X =
(x1, x2, …) and Y = (y1, y2, …), and we wish to find a longest
common subsequence of X and Y.
• Finding route with minimum distance from one node to
other node in a weighted graph.
• Manufacturing and other commercial enterprises often need
to allocate resources in the most beneficial way. An oil
company may wish to know where to place its wells in
order to maximize its expected profit.
Examples of Algorithm
• Simple algorithm for finding the maximum in a list
of positive numbers.
– Problem: Given a list of positive numbers, return the
largest number.
– Input: A list L of positive numbers. This list must
contain at least one number.
– Output: A number n, which will be the largest number
of the list.
– Algorithm
1. Start
2. Declare max
3. Set max to 0.
4. For each number x in the list L perform following
1. compare x to max.
2. If x is larger, set max to x.
5. max is now set to the largest number in the list.
6. Stop
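The steps above carry over directly to code. Below is a minimal C sketch of the algorithm (the function name and 0-based indexing are our illustrative choices, not part of the slides):

/* Returns the largest number in L[0..n-1]; assumes n >= 1 and all values positive. */
double find_max(const double L[], int n) {
    double max = 0.0;                /* step 3: set max to 0 */
    for (int i = 0; i < n; i++) {    /* step 4: for each number x in the list */
        if (L[i] > max)              /* step 4.1-4.2: compare x to max and     */
            max = L[i];              /*              update max if x is larger */
    }
    return max;                      /* step 5: max is the largest number */
}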
1: Problem Definition
• What is the task to be accomplished?
– Calculate the average of marks of given student
– Find the shortest path between two cities
• What are the time / space / speed /
performance requirements ?
2: Algorithm Design /
Specifications
• Algorithm: Finite set of instructions that, if followed,
accomplishes a particular task.
• Describe: in natural language / pseudo-code / diagrams / etc.
• Criteria to follow:
– Input: One or more quantities (externally produced)
– Output: One or more quantities
– Unambiguous: Clarity, precision of each instruction
– Finiteness: The algorithm has to stop after a finite (may be
very large) number of steps
– Effectiveness: Each instruction has to be basic enough and
feasible
• The computational complexity and efficient implementation of
the algorithm are important in computing, and this depends on
efficient logic and suitable data structures.
Examples of Algorithm(cont…)
• Does this meet the criteria for being an
algorithm?
– Is it unambiguous? Yes.
• Each step of the algorithm consists of primitive operations, and
translating each step into any language is very easy.
– Does it have defined inputs and outputs? Yes.
– Is it guaranteed to terminate? Yes.
• The list L is of finite length, so after looking at every element of
the list the algorithm will stop.
– Does it produce the correct result? Yes.
• In a formal setting you would provide careful proof of
correctness.
4, 5, 6: Implementation, Testing,
Maintenance
• Implementation
– Decide on the programming language to use
• C, C++, Lisp, Java, Perl, Prolog, assembly, Python, etc.
– Write clean, well documented code

• Test:
– Ensure the implementation is as per the requirements.

• Maintenance
– Integrate feedback from users, fix bugs, ensure
compatibility across different versions.
3: Algorithm Analysis

• When is the running time noticeable/important?


– web search
– database search
– real-time systems with time constraints
• Space complexity
– How much space is required
• Time complexity
– How much time does it take to run the algorithm
• Often, we deal with estimates!
Analyzing Algorithms
• Analyzing an algorithm helps in predicting the resources such
as memory, processing time that the algorithm requires.
– Most often it is computational time that we want to
measure.
• A generic one-processor, random-access machine (RAM) model is
assumed as the implementation technology for most algorithms.
• The RAM model contains instructions commonly found in
real computers:
– arithmetic (such as add, subtract, multiply, divide,
remainder, floor, ceiling)
– data movement (load, store, copy) and
– control (conditional and unconditional branch, subroutine
call and return).
– Each such instruction takes a constant amount of time
Space Complexity
• Space complexity = The amount of memory required by an
algorithm to run to completion
– (Core dumps: the most often encountered cause is dangling
pointers)
• Some algorithms may be more efficient if data completely loaded
into memory
– Need to look also at system limitations
• Fixed part: The size required to store certain data/variables
– - e.g. name of the data collection
• Variable part: Space needed by variables, whose size is
dependent on the size of the problem:
– - e.g. actual text
– - load 2GB of text VS. load 1MB of text
Time Complexity
• Often more important than space complexity
– space available (for computer programs!) tends to be larger and larger
– time is still a problem for all of us

• 4-5GHz processors on the market


– still …
• An algorithm's running time is an important issue
Time Complexity…

• Suppose an algorithm includes a conditional statement that may
execute or not ⇒ variable running time
• Typically, algorithms are measured by their worst-case running time
Time Complexity…
• The running time of an algorithm varies with the inputs, and
typically grows with the size of the inputs.
• To evaluate an algorithm or to compare two algorithms, we focus on
their relative rates of growth with respect to the increase of the input
size.
• The average running time is difficult to determine.
• We focus on the worst case running time
– Easier to analyze
– Crucial to applications such as robotics, games and others.
Expressing Algorithms
• An algorithm may be expressed in a number of ways,
including:
– natural language: usually verbose and ambiguous
– flow charts: avoid most issues of ambiguity; difficult to
modify without specialized tools; largely standardized.
– pseudo-code: also avoids most issues of ambiguity; vaguely
resembles common elements of programming languages;
no particular agreement on syntax
– programming language: tends to require expressing low-
level details.
Problems vs Algorithms vs Programs
• For each problem or class of problems,
there may be many different algorithms.
• For each algorithm, there may be many
different implementations (programs).
Factors that determine
running time of a program
• Problem size: n
• Basic algorithm / actual processing
• memory access speed
• CPU/processor speed
• # of processors?
• compiler/linker optimization?
Estimation of Time Complexity
• Experimental Approach
• Theoretical Approach
Experimental Approach
• Write a program to implement the algorithm.
• Run this program with inputs of varying size and composition.
• Get an accurate measure of the actual running time (e.g., via a
system call such as date).
• Limitations of Experimental Studies
– The algorithm has to be implemented, which may take a long
time and could be very difficult.
– In order to compare two algorithms, the same hardware and
software must be used.
– Results may not be indicative of the running time on other
inputs that were not included in the experiments.
Theoretical Approach
• Based on high-level description of the
algorithms (Pseudocode), rather than language
dependent implementations.
• Makes possible an evaluation of the algorithms
that is independent of the hardware and
software environments.
Primitive Operations
• The basic computations performed by an algorithm
• Identifiable in pseudocode
• Largely independent from the programming language
• Exact definition not important
• Instructions have to be basic enough and feasible!

• Examples:
– Evaluating an expression
– Assigning a value to a variable
– Calling a method
– Returning from a method
Pseudocode
• High-level description of an algorithm.
• More structured than plain English.
• Less detailed than a program.
• Preferred notation for describing algorithms.
• Hides program design issues.
Pseudocode
• Control flow
– if … then … [else …]
– while … do …
– repeat … until …
– for … do …
• Method declaration
Algorithm method (arg [, arg…])
Input …
Output
• Method call
method (arg [, arg…])
• Return value
return expression
• Expressions
← assignment (the pseudocode equivalent of = in most languages)
= equality testing (the equivalent of ==)
n^2 superscripts and other mathematical formatting allowed
Low Level Algorithm Analysis
• Based on primitive operations (low-level
computations independent from the
programming language)
• E.g.:
– Make an addition = 1 operation
– Calling a method or returning from a method = 1 operation
– Index in an array = 1 operation
– Comparison = 1 operation etc.
• Method: Inspect the pseudo-code and count
the number of primitive operations executed by
the algorithm
– or count the no. of times a statement is executed
Example: find the max element of an array

Algorithm arrayMax(A, n)
    Input array A of n integers
    Output maximum element of A
    Max ← A[0]
    for i ← 1 to n − 1 do
        if A[i] > Max then
            Max ← A[i]
    return Max
Analysis of insertion sort
 The running time of an algorithm on a particular
input is the number of primitive operations or
“steps” executed.
 Let us adopt the following view:
 A constant amount of time is required to execute
each line of our pseudocode.
 The running time of the algorithm is the sum of
running times for each statement executed.
 Worst-case
 longest running time for an input of size n.
 Best-case
 smallest running time for an input of size n.
Insertion Sort Algorithm
INSERTION-SORT(A)
1  for j = 2 to A.length
2      key = A[j]
3      // Insert A[j] into the sorted sequence A[1…j-1]
4      i = j - 1
5      while i > 0 and A[i] > key
6          A[i+1] = A[i]
7          i = i - 1
8      A[i + 1] = key
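For reference, here is the same procedure as runnable C (an illustrative sketch; C arrays are 0-based, whereas the pseudocode above is 1-based):

/* Sorts A[0..n-1] in place in ascending order. */
void insertion_sort(int A[], int n) {
    for (int j = 1; j < n; j++) {        /* pseudocode line 1 */
        int key = A[j];                  /* line 2 */
        int i = j - 1;                   /* line 4 */
        while (i >= 0 && A[i] > key) {   /* line 5 (i > 0 becomes i >= 0 when 0-based) */
            A[i + 1] = A[i];             /* line 6: shift larger elements right */
            i = i - 1;                   /* line 7 */
        }
        A[i + 1] = key;                  /* line 8: drop key into its place */
    }
}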
Problem: Suppose there are 60 students in the class. How will
you calculate the number of absentees in the class?

Algorithmic Approach:
1. Count <- 0, absent <- 0, total <- 60
2. REPEAT till all present students are counted:
       Count <- Count + 1
3. absent <- total - Count
4. Print "Number absent is:", absent
Need of Algorithm

1. To understand the basic idea of the problem.
2. To find an approach to solve the problem.
3. To improve the efficiency of existing techniques.
4. To understand the basic principles of designing algorithms.
5. To compare the performance of the algorithm with respect to other
techniques.
6. It is the best method of description without describing the
implementation detail.
7. The algorithm gives a clear description of the requirements and goal of
the problem to the designer.
8. A good design can produce a good solution.
9. To understand the flow of the problem.
Algorithms as a technology
• Suppose computers are infinitely fast and computer memory is
inexpensive.
Do we still need to study Algorithm and its analysis ?
• Yes, at least to check that the algorithm terminates and
produces the correct result.
• Of course, computers may be fast, but they are not
infinitely fast.
• And memory may be inexpensive, but it is not free.
• Computing time and memory are therefore bounded
resources.
• The order of growth of the running time of an
algorithm gives a simple characterization of the
algorithm's efficiency and allows the relative
performance of alternative algorithms to be compared.
Analysis of Algorithms

• When we analyze algorithms, we should employ


mathematical techniques that analyze algorithms
independently of specific implementations,
computers, or data.

• To analyze algorithms:
– First, we start to count the number of significant
operations in a particular solution to assess its
efficiency.
– Then, we will express the efficiency of algorithms using
growth functions.

PERFORMANCE ANALYSIS
Performance Analysis: an algorithm is said to be efficient and fast if it
takes less time to execute and consumes less memory space at run time;
evaluating these two quantities is called performance analysis.
1. SPACE COMPLEXITY:
The space complexity of an algorithm is the amount of memory
space required by the algorithm during the course of its execution.
There are three types of space:
a) Instruction space: to store the executable program.
b) Data space: required to store all the constant and variable data.
c) Environment stack space: required to store the environment information
needed to resume suspended functions.
2. TIME COMPLEXITY:
The time complexity of an algorithm is the total amount of time
required by an algorithm to complete its execution.
Statement                 Steps per execution   Frequency   Total
1  Algorithm Sum(a, n)            0                 -          0
2  {                              0                 -          0
3     s = 0;                      1                 1          1
4     for i = 1 to n do           1                n+1        n+1
5        s = s + a[i];            1                 n          n
6     return s;                   1                 1          1
7  }                              0                 -          0
   Total                                                     2n+3
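The same count can be read off a C version of the routine (a sketch; the per-line frequencies for input size n are given as comments):

/* Returns a[0] + ... + a[n-1]. */
int sum(const int a[], int n) {
    int s = 0;                     /* executed 1 time           */
    for (int i = 0; i < n; i++)    /* loop test runs n+1 times  */
        s = s + a[i];              /* executed n times          */
    return s;                      /* executed 1 time           */
}                                  /* total: 2n + 3 steps       */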
The Execution Time of Algorithms

• Each operation in an algorithm (or a program) has a cost.
  Each operation takes a certain amount of time.

      count = count + 1;  → takes a certain amount of time, but it is constant

  A sequence of operations:

      count = count + 1;    Cost: c1
      sum = sum + count;    Cost: c2

  → Total Cost = c1 + c2
The Execution Time of Algorithms (cont.)

Example: Simple If-Statement
                       Cost   Times
if (n < 0)              c1      1
    absval = -n;        c2      1
else
    absval = n;         c3      1

Total Cost <= c1 + max(c2, c3)


The Execution Time of Algorithms (cont.)
Example: Simple Loop
Cost Times
i = 1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
i = i + 1; c4 n
sum = sum + i; c5 n
}

Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5

→ The time required for this algorithm is proportional to n
The Execution Time of Algorithms (cont.)
Example: Nested Loop
Cost Times
i=1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
j=1; c4 n
while (j <= n) { c5 n*(n+1)
sum = sum + i; c6 n*n
j = j + 1; c7 n*n
}
i = i +1; c8 n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8
→ The time required for this algorithm is proportional to n^2
PROOF OF METHODS

 When analyzing an algorithm, the amount of


resources required is usually expressed as a
function of the input size.
 The amount of resources used by an algorithm can be
expressed in the form of a recursive formula.
 This mandates the need for the basic mathematical
tools that are necessary to deal with these recursive
formulas in the process of analyzing algorithms.
 A function f is a (binary) relation such that for every
element x ∈ Dom(f ) there is exactly one element y
∈ Ran(f ) with (x, y) ∈ f.
PROOF OF METHODS

 Proofs constitute an essential component in the


design and analysis of algorithms.
 Proofs help in establishing the correctness of an
algorithm and the amount of resources it needs,
such as computing time and space usage.
NOTATION
A proposition or an assertion P is simply a statement
that can be either true or false, but not both.
 The symbols → and ↔ are used extensively in proofs.
 "→" is read "implies" and "↔" is read "if and only if".
 If P and Q are two propositions, then the statement P
→ Q stands for "P implies Q" or "if P then Q", and the
statement P ↔ Q stands for "P if and only if Q", that
is, "P is true if and only if Q is true".
 P ↔ Q can be broken into P → Q and Q → P.
TYPES OF PROOFS
1. Direct proof
 To prove that P → Q a direct proof works by assuming
that P is true and then deducing the truth of Q from the
truth of P.
 Example:
 If n is an even integer, then n^2 is an even integer.

2. Indirect proof
 The implication P → Q is logically equivalent to the
contrapositive implication ¬Q → ¬P.
 Example: If n^2 is an even integer, then n is an even integer.
Example - direct proof

• In general, to prove p → q, assume p and
show that q follows.
Example 2: Direct Proof
Theorem:
If n is an odd integer, then n^2 is odd.

Proof:
Let p be "n is an odd integer" and q be "n^2 is odd"; we want to show that
p → q.

Assume p, i.e., n is odd.

By definition, n = 2k + 1, where k is some integer.

Therefore n^2 = (2k + 1)^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) + 1, which is by
definition an odd number (k' = 2k^2 + 2k).
Example 1: Proof by Contraposition
• Again, p → q ≡ ¬q → ¬p (the contrapositive)
• So, we can prove the implication p → q by first assuming ¬q,
and showing that ¬p follows.
• Example: Prove that if a and b are integers, and a + b ≥ 15,
then a ≥ 8 or b ≥ 8.

Proof strategy:
(a + b ≥ 15) → (a ≥ 8) ∨ (b ≥ 8)
(Assume ¬q) Suppose (a < 8) ∧ (b < 8).
(Show ¬p) Then (a ≤ 7) ∧ (b ≤ 7),
and (a + b) ≤ 14,
and (a + b) < 15.
(Note that the negation of the conclusion is easier to start with here.)
QED
3. Proof by contradiction
 To prove that the statement P → Q is true using this
method, we start by assuming that P is true but Q is
false. If this assumption leads to a contradiction, it means
that our assumption that "Q is false" must be wrong, and
hence Q must follow from P.
 Example: Suppose a ∈ Z. If a^2 is even, then a is even.
Proof by Contradiction
• A – We want to prove p.
• We show that:
(1) ¬p → F (i.e., a false statement, say r ∧ ¬r)
(2) We conclude that ¬p is false since (1) is true, and
therefore p is true.
• B – We want to show p → q
(1) Assume the negation of the conclusion, i.e., ¬q
(2) Show that (p ∧ ¬q) → F
(3) Since ((p ∧ ¬q) → F) ≡ (p → q) (why?)
4. Proof by counterexample
 This method provides quick evidence that a
postulated statement is false.
 When faced with a problem that requires proving or
disproving a given assertion, we may start by trying
to disprove the assertion with a counterexample.

 Example: Let f(n) = n^2 + n + 41 be a function defined on the set of
nonnegative integers. Consider the assertion that f(n) is
always a prime number. The assertion is false: n = 40 is a
counterexample, since f(40) = 1681 = 41^2 is not prime.
5. Mathematical Induction
 It is a mathematical proof technique, for proving that
a property holds for a sequence of natural numbers n0,
n0+1, n0+2,…. a form usually done in two steps.
 The first step, called the base case or basis, proves that
the theorem is true for the base case i.e n0.
 The second step, called the inductive step, proves that, if
the theorem is true for a given number, then it is also true
for the next number
 Example:
1. Prove that 1 + 2 + 3 + 4 + 5 + ... + n = (1+n)n/2
2. Show that for all integers n greater than zero: 2^n >= n + 1.
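As a quick illustration of the two steps, consider the first identity, 1 + 2 + ... + n = (1 + n)n/2. Base case (n = 1): the left side is 1 and the right side is (1 + 1)·1/2 = 1, so the claim holds. Inductive step: assume 1 + 2 + ... + k = (1 + k)k/2 for some k ≥ 1; then 1 + 2 + ... + k + (k + 1) = (1 + k)k/2 + (k + 1) = (k + 1)(k + 2)/2, which is the formula with n = k + 1. By induction, the identity holds for all natural numbers n ≥ 1.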
1. Give a Proof by contradiction of the theorem “If 3n+2
is odd, then n is odd”.

2. Prove by the principle of mathematical induction


that 1×1!+2×2!+3×3!+...+n×n!=(n+1)!−1 for all natural numbers n.

3. Prove the following by using the principle of mathematical


induction for all n ∈ N:
1·2 + 2·3 + 3·4 + ⋯ + n(n+1) = n(n+1)(n+2)/3
Finding Better algorithm
• One way would be to count the number of primitive
operations at different input sizes.
– Though this is a valid solution, the amount of work this
takes for even simple algorithms does not justify its use
• Other way is to implement both the algorithms and
run the two programs on your computer for
different inputs and see which one takes less time.
– Problems with this approach:
• It might be possible that for some inputs, first algorithm performs better
than the second. And for some inputs second performs better.
• It might also be possible that for some inputs, the first algorithm
performs better on one machine, while for some other inputs the
second works better on another machine.
Asymptotic Analysis
• Asymptotic analysis of an algorithm talks about the order of
growth of its running time when the input size is large enough.
• In Asymptotic Analysis, we evaluate the performance of an
algorithm in terms of input size (we don’t measure the actual
running time).
• In mathematical analysis, asymptotic analysis, also known
as asymptotics, is a method of describing limiting behavior.
• Asymptotic analysis is useful to
– estimate how long a program will run or/and how much space it would
take.
– compare the efficiency of different algorithms.
– choose an algorithm for an application.
• The main idea of asymptotic analysis is to have a measure of
efficiency of algorithms that don’t depend on machine
specific constants, and don’t require algorithms to be
implemented.
• Asymptotic notations are mathematical tools to represent
the time complexity of algorithms for asymptotic analysis.
• The following asymptotic notations are used to represent
the time complexity of algorithms:
– Big O Notation
– Big Omega - Ω Notation
– Big Θ Notation
– Little o Notation
– Little –Omega: ω Notation
Growth Rate of functions
Consider simple functions: f1(n) = c1·n (linear), f2(n) = c2·n^2
(quadratic), f3(n) = c3·n^3 (cubic), f4(n) = c4·log n
(logarithmic), f5(n) = c5·2^n (exponential), and so on. Note:
the slopes of the functions can easily be seen by
differentiating w.r.t. n (considering n as a
continuous variable, of course).
No matter what the constants are, for sufficiently
large values of n, the growth rate of quadratic
functions is greater than that of linear functions,
and so on up the hierarchy. Informally
speaking, this fact is expressed as "quadratic
functions are asymptotically bigger than linear
functions", and so on ["asymptotically" means
when the argument of the function is very large].
Constant factors and lower-order terms do not
affect the growth rates; e.g., 4n^2 and n^2 + 2n are
both quadratic functions of n.
Useful Algebra (Logarithms)
Logarithms and exponents: log_b a = c if a = b^c. As is the custom in the computing
literature, we omit writing the base b of the logarithm when b = 2. For example,
log 1024 = 10 (or, lg 1024 = 10, the binary logarithm). We need to remember the
following simple rules (and remember how to prove them):
    log_b(ac) = log_b a + log_b c;    log_b(a/c) = log_b a − log_b c;
    log_b(a^c) = c·log_b a;           log^k n = (log n)^k;
    log_b a = (log_c a)/(log_c b);    b^(log_c a) = a^(log_c b);
    (b^a)^c = b^(ac);                 b^a · b^c = b^(a+c);    b^a / b^c = b^(a−c);
    log_b(1/a) = −log_b a;            log_b a = 1/(log_a b);  a^(log_b c) = c^(log_b a)
For all n and a ≥ 1, a^n is monotonically increasing in n (we normally use 0^0 = 1).
We relate the rates of growth of polynomials and exponentials by the following fact:
for all real constants a and b such that a > 1, lim_{n→∞}(n^b / a^n) = 0 [use
L'Hospital's rule to prove it if you want]; we conclude n^b = o(a^n); thus, any
exponential function with a base strictly greater than 1 grows faster than any
polynomial function.
Some interesting cases in algorithm analysis (all easily shown to be true using the
above rules):
    log(2n log n) = 1 + log n + log log n;    log(n/2) = log n − 1;
    log √n = (log n)/2;                       log log √n = log log n − 1;
    log_4 n = (log n)/2;                      2^(log n) = n;
    2^(2 log n) = n^2;                        4^n = 2^(2n);
    n^2 · 2^(3 log n) = n^2 · n^3 = n^5
Ceiling and Floor
⌈x⌉ = the smallest integer greater than or equal to x ("ceiling") and ⌊x⌋ = the
largest integer less than or equal to x ("floor"). Note, for any real x,
    x − 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1
For any integer n,
    ⌈n/2⌉ + ⌊n/2⌋ = n.
Also note that for any real number x ≥ 0 and integers m, n > 0,
    ⌈⌈x/n⌉/m⌉ = ⌈x/(nm)⌉,      ⌊⌊x/n⌋/m⌋ = ⌊x/(nm)⌋,
    ⌈n/m⌉ ≤ (n + (m − 1))/m,   ⌊n/m⌋ ≥ (n − (m − 1))/m
Big O Notation
• The Big O notation defines an upper bound of an
algorithm. It bounds a function only from above.
• Suppose for a given function f(n) , g(n) is the Big-O
order when there exist positive constants c and n0
such that 0 <= f(n) <= cg(n) for all n >= n0.
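For example, f(n) = 3n + 2 is O(n): taking c = 4 and n0 = 2 gives 0 <= 3n + 2 <= 4n for all n >= 2. By contrast, n^2 is not O(n), since no constant c can make n^2 <= c·n hold for all sufficiently large n.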
Big Ω Notation
• Just as Big O notation provides an asymptotic upper bound on a
function, Ω notation provides an asymptotic lower bound. It
can be useful when we have lower bound on time complexity of
an algorithm.
• Since the best-case performance of an algorithm is generally not useful,
the Omega notation is the least used notation among all.
• Suppose for a given function f(n) , g(n) is the Big- Ω order when
there exist positive constants c and n0 such that 0 <= cg(n) <=
f(n) for all n >= n0.
Big Θ Notation
• The theta notation bounds a function from above
and below, so it defines exact asymptotic behavior.
• Suppose for a given function f(n), g(n) is the Big-Θ
order when there exist positive constants c1, c2,
and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for
all n >= n0.
• When we use big-Θ notation, we are saying that we
have an asymptotically tight bound on the running
time.
Common time complexities
(from better to worse)
• O(1)        constant time
• O(log n)    logarithmic time
• O(n)        linear time
• O(n log n)  log-linear time
• O(n^2)      quadratic time
• O(n^3)      cubic time
• O(2^n)      exponential time
LITTLE O NOTATION
• Suppose for a given function f(n) , g(n) is the Little- o
order when for any positive constants c there exists
a positive constant n0 such that 0 ≤ f(n) < cg(n) for all
n >= n0.
• Little- o is an upper bound, but is not
an asymptotically tight bound.
• The main difference between O- notation and o-
notation is that in f (n) = O(g(n)), the bound 0<=f
(n)<= cg(n) holds for some constant c > 0, but in f(n)
=o(g(n)), the bound 0 < = f(n) < cg(n) holds for all
constants c > 0
• Example:
– 2n = o(n^2), but 2n^2 ≠ o(n^2).
• Intuitively, in o-notation, the function f(n) becomes
insignificant relative to g(n) as n approaches infinity;
that is, lim_{n→∞} f(n)/g(n) = 0.
Little –Omega: ω Notation

• ω -notation is used to denote a lower bound that is


not asymptotically tight.
• Suppose for a given function f(n) , g(n) is the Little
omega ω order when for any positive constants c
there exists a positive constant n0 > 0 such that 0 ≤
cg(n) < f(n) for all n >= n0.
• The relation f(n) = ω(g(n)) implies lim_{n→∞} f(n)/g(n) = ∞.

• The function g(n) becomes insignificant relative to
f(n) as n approaches infinity.
Points to be Noted
• Big-O notation gives an asymptotic upper bound:
– For example, it is absolutely correct to say that binary
search runs in O(n) time.
– Other, imprecise, upper bounds on binary search
would be O(n^2), O(n^3), and O(2^n). But none of Θ(n),
Θ(n^2), Θ(n^3), and Θ(2^n) would be correct to describe
the running time of binary search in any case.
• Also big-Ω notation gives an asymptotic lower
bound:
– you can also say that the running time of insertion sort
is Ω(1).
• Which of the following statements is/are
valid?
1. Time Complexity of QuickSort is Θ(n^2)
2. Time Complexity of QuickSort is O(n^2)
3. For any two functions f(n) and g(n), we have
f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n)
= Ω(g(n)).
4. Time complexity of all computer algorithms
can be written as Ω(1)
• For the functions n^k and c^n, what is the
asymptotic relationship between them?
Assume that k >= 1 and c > 1 are constants.
Choose all answers that apply:
• For the functions lg n and log_8 n, what is the
asymptotic relationship between these
functions?
– lg n is O(log_8 n)
– lg n is Ω(log_8 n)
– lg n is Θ(log_8 n)
• List the following from slowest to fastest
growing:
1. Θ(1)
2. Θ(n^2)
3. Θ(2^n)
4. Θ(lg n)
5. Θ(n)
6. Θ(n^2 lg n)
7. Θ(n lg n)
Big-Oh Rules
• If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.,
1. Drop lower-order terms
2. Drop constant factors
• Use the smallest possible class of functions
– Say "2n is O(n)" instead of "2n is O(n^2)"
• Use the simplest expression of the class
– Say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)"
Computing Prefix Averages
• We further illustrate asymptotic analysis with
two algorithms for prefix averages
• The i-th prefix average of an array X is
average of the first (i + 1) elements of X:
A[i] = (X[0] + X[1] + … + X[i])/(i+1)
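The contrast between the two algorithms can be made concrete with the following C sketch (function names are our illustrative choices): the first version recomputes each prefix sum from scratch, the second carries a running sum.

/* Quadratic prefix averages: recompute the sum X[0..i] for every i. O(n^2). */
void prefix_averages_quadratic(const double X[], double A[], int n) {
    for (int i = 0; i < n; i++) {
        double s = 0.0;
        for (int j = 0; j <= i; j++)   /* inner loop runs i+1 times */
            s += X[j];
        A[i] = s / (i + 1);
    }
}

/* Linear prefix averages: maintain a running sum. O(n). */
void prefix_averages_linear(const double X[], double A[], int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++) {
        s += X[i];                     /* s now holds X[0] + ... + X[i] */
        A[i] = s / (i + 1);
    }
}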
Exercise: Give a big-Oh characterization

Algorithm Ex1(A, n)
    Input an array A of n integers
    Output the sum of the elements in A

    s ← A[0]
    for i ← 0 to n − 1 do
        s ← s + A[i]
    return s
Exercise: Give a big-Oh characterization

Algorithm Ex2(A, n)
    Input an array A of n integers
    Output the sum of the elements at even cells in A

    s ← A[0]
    for i ← 2 to n − 1 by increments of 2 do
        s ← s + A[i]
    return s
Exercise: Give a big-Oh characterization

Algorithm Ex3(A, n)
    Input an array A of n integers
    Output the sum of the prefix sums of A

    s ← 0
    for i ← 0 to n − 1 do
        s ← s + A[0]
        for j ← 1 to i do
            s ← s + A[j]
    return s
Summary
• Time complexity is a measure of algorithm
efficiency.
• Efficient algorithm plays the major role in
determining the running time.
• Minor tweaks in the code can cut down the
running time by a factor too.
• Other items like CPU speed, memory speed,
device I/O speed can help as well.
• For certain problems, it is possible to allocate
additional space & improve time complexity.
ALGORITHM DESIGN TECHNIQUES

 Techniques that provide the construction
of efficient solutions to problems.
 They provide templates suited to solving a
broad range of diverse problems
 Brute Force
 Divide-and-conquer
 Dynamic Programming
 Greedy Techniques
 Brute force
Selection sort, Brute-force string matching, Convex hull problem
Exhaustive search: Traveling salesman, Knapsack, and
Assignment problems
 Divide-and-conquer
Mergesort, Quicksort, Binary Search, Strassen’s Matrix
Multiplication.
 Dynamic programming
Warshall’s algorithm for transitive closure
Floyd’s algorithms for all-pairs shortest paths
 Greedy techniques
MST problem: Prim's algorithm, Kruskal's algorithm
Dijkstra's algorithm for the single-source shortest path
problem, Huffman trees and codes
Brute Force
 Brute force is a straightforward approach to solve a problem
based on the problem’s statement and definitions of the
concepts involved.
 It is considered as one of the easiest approaches to apply and
is useful for solving small–size instances of a problem.
 Brute force is important due to its wide applicability and
simplicity.
Divide-and-conquer
 Given an instance of the problem to be solved, split this into
several smaller sub-instances (of the same problem).
 independently solve each of the sub-instances.
 and then combine the sub-instance solutions so as to yield a
solution for the original instance.
Dynamic programming
 Dynamic programming, like the divide-and-conquer method,
solves problems by combining the solutions to sub problems.
 A dynamic-programming algorithm solves each subsubproblem
just once and then saves its answer in a table, thereby avoiding
the work of recomputing the answer every time it solves each
subsubproblem.

Greedy Algorithm
 A greedy algorithm is an algorithmic paradigm that follows the
problem solving heuristic of making the locally optimal choice at
each stage with the hope of finding a global optimum.

DIVIDE-AND-CONQUER
In this paradigm a problem is solved recursively, applying three steps at each level of the
recursion:
 Divide: The problem is divided into a number of subproblems that are smaller instances
of the same problem.
 Conquer: The subproblems are conquered by solving them recursively.
 Combine: The solutions to the subproblems are combined into the solution for the
original problem.
ANALYSIS OF DIVIDE-AND-CONQUER
 The running time of a divide-and-conquer algorithm is calculated from the
three steps of the basic paradigm.
 Let T(n) be the running time on a problem of size n. If the problem
size is small enough, say n <= c for some constant c, the straightforward
solution takes constant time, which we write as Θ(1).
 Suppose that our division of the problem yields a subproblems, each of
which is 1/b the size of the original. It then takes a·T(n/b) time to
solve the a subproblems.
 If we take D(n) time to divide the problem into subproblems and C(n)
time to combine the solutions to the subproblems into the solution to
the original problem, then the total time is:

    T(n) = Θ(1)                       if n <= c
           a·T(n/b) + D(n) + C(n)     otherwise
.
Binary Search
 Binary search applies the divide-and-conquer paradigm.
 It follows the three-step divide-and-conquer process for
searching in a sorted array A[p…r].
 Divide: Partition the array into one subarray, either
A1[p…q-1] or A2[q+1…r], depending on how the search
element compares with the middle element A[q].
 Conquer: recursive calls to binary search on either A1
or A2.
 Combine: Because the given element is already
searched or not present in the array, no work is needed
to combine them.
Binary Search Algorithm
BinarySearch(array,low,high,key)//where key is the element to be searched
{
if(low==high)
if(a[low]==key)
return low;
else
return -1;
}
else{
mid=(low+high)/2
if(key==array[mid])
return mid;
else if(key>array[mid])
return BinarySearch(array,mid+1,high,key)
else
return BinarySearch(array,low,mid-1,key)
}
}
Binary Search Algorithm Analysis
(running time of a divide-and-conquer algorithm)

The recurrence relation for the running time of the method is:
    T(1) = a                if n = 1 (one-element array)
    T(n) = T(n/2) + b       if n > 1
Analysis Of Recursive Binary Search (Cont’d)
Without loss of generality, assume n, the problem size, is a power of 2, i.e., n = 2^k
Expanding:
T(1) = a                                                     (1)
T(n) = T(n/2) + b                                            (2)
     = [T(n/2^2) + b] + b = T(n/2^2) + 2b    by substituting T(n/2) in (2)
     = [T(n/2^3) + b] + 2b = T(n/2^3) + 3b   by substituting T(n/2^2) in (2)
     = ……
     = T(n/2^k) + kb
The base case is reached when n/2^k = 1 ⇒ n = 2^k ⇒ k = log2 n; we
then have:
T(n) = T(1) + b log2 n
     = a + b log2 n
Therefore, Recursive Binary Search is O(log n)
QUICK SORT
 Quicksort applies the divide-and-conquer paradigm.
 It follows the three-step divide-and-conquer process for
sorting a typical subarray A[p…r].
 Divide: Partition (rearrange) the array A[p…r] into two
(possibly empty) subarrays A[p…q-1] and A[q+1…r] such
that each element of A[p…q-1] is less than or equal to A[q]
which is, in turn, less than or equal to each element of
A[q+1…r]. Compute the index q as part of this partitioning
procedure.
 Conquer: Sort the two subarrays and A[p…q-1] and
A[q+1…r] by recursive calls to quicksort.
 Combine: Because the subarrays are already sorted, no
work is needed to combine them: the entire array A[p…r]
is now sorted.
Partitioning the Array
Alg. PARTITION(A, p, r)
1.  x ← A[p]
2.  i ← p − 1
3.  j ← r + 1
4.  while TRUE
5.      do repeat j ← j − 1
6.         until A[j] ≤ x
7.         repeat i ← i + 1
8.         until A[i] ≥ x
9.         if i < j
10.            then exchange A[i] ↔ A[j]
11.            else return j

[Figure: example array A = 5 3 2 6 4 1 3 7, with indices i and j scanning
toward each other from the two ends; when they cross at j = q, the array is
rearranged so that A[p…q] ≤ x ≤ A[q+1…r]. Each element is visited once!]

Running time: Θ(n), where n = r − p + 1
Recurrence
Alg.: QUICKSORT(A, p, r)          Initially: p = 1, r = n

    if p < r
        then q ← PARTITION(A, p, r)
             QUICKSORT(A, p, q)
             QUICKSORT(A, q+1, r)

Recurrence:
    T(n) = T(q) + T(n – q) + n
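Putting the two pieces together, a runnable C sketch of quicksort with this Hoare-style partition might look as follows (0-based indices; the names are our illustrative choices):

/* Hoare partition: returns q such that A[lo..q] <= pivot <= A[q+1..hi]. */
int partition(int A[], int lo, int hi) {
    int x = A[lo];                       /* pivot: first element */
    int i = lo - 1, j = hi + 1;
    while (1) {
        do { j--; } while (A[j] > x);    /* scan from the right for A[j] <= x */
        do { i++; } while (A[i] < x);    /* scan from the left for A[i] >= x */
        if (i < j) {
            int tmp = A[i]; A[i] = A[j]; A[j] = tmp;   /* exchange A[i] and A[j] */
        } else {
            return j;
        }
    }
}

void quicksort(int A[], int lo, int hi) {
    if (lo < hi) {
        int q = partition(A, lo, hi);
        quicksort(A, lo, q);             /* note: q, not q-1, with Hoare partition */
        quicksort(A, q + 1, hi);
    }
}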
Performance of Quicksort
• Average case
– All permutations of the input numbers are equally likely
– On a random input array, we will have a mix of well balanced and
unbalanced splits
– Good and bad splits are randomly distributed throughout the tree

[Figure: a bad split (1, n−1) followed by a good split has combined
partitioning cost n + (n−1) = 2n−1 = Θ(n), the same order as the
partitioning cost n = Θ(n) of a single nearly well balanced split into
(n−1)/2 + 1 and (n−1)/2.]

• Running time of Quicksort when levels alternate
between good and bad splits is O(n lg n)
Merge Sort
 Merge sort applies the divide-and-conquer paradigm.
 It follows the three-step divide-and-conquer process for sorting an
array.
 Divide: Divide the n-element sequence to be sorted into two
subsequences of n/2 elements each. The divide step just
computes the middle of the subarray, which takes constant
time. Thus, D(n)=Ɵ(1).
 Conquer: We recursively solve two subproblems, each of size
n/2.
 Combine: Merge the two sorted subsequences to produce the
sorted answer. MERGE procedure on an n-element subarray
takes time Ɵ(n), and so C(n)=Ɵ(n).
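A compact C sketch of the three steps (an illustrative version; it assumes a caller-supplied scratch buffer tmp of the same length as A):

#include <string.h>

/* Combine: merge the sorted runs A[lo..mid] and A[mid+1..hi] via tmp. */
static void merge(int A[], int tmp[], int lo, int mid, int hi) {
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
        tmp[k++] = (A[i] <= A[j]) ? A[i++] : A[j++];
    while (i <= mid) tmp[k++] = A[i++];
    while (j <= hi)  tmp[k++] = A[j++];
    memcpy(A + lo, tmp + lo, (hi - lo + 1) * sizeof(int));  /* C(n) = Theta(n) */
}

void merge_sort(int A[], int tmp[], int lo, int hi) {
    if (lo >= hi) return;            /* subarrays of size <= 1 are already sorted */
    int mid = (lo + hi) / 2;         /* Divide: D(n) = Theta(1) */
    merge_sort(A, tmp, lo, mid);     /* Conquer: 2T(n/2) */
    merge_sort(A, tmp, mid + 1, hi);
    merge(A, tmp, lo, mid, hi);      /* Combine */
}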
Merge-Sort Analysis

[Figure: merge-sort recursion tree: a problem of size n splits into two
subproblems of size n/2, then four of size n/4, and so on, down to
subproblems of constant size.]
MERGE-SORT Running Time
• Divide:
– compute q as the average of p and r: D(n) = Θ(1)
• Conquer:
– recursively solve 2 subproblems, each of size n/2
⇒ 2T(n/2)
• Combine:
– MERGE on an n-element subarray takes Θ(n) time
⇒ C(n) = Θ(n)

    T(n) = Θ(1)              if n = 1
           2T(n/2) + Θ(n)    if n > 1
Solve the Recurrence
    T(n) = c                if n = 1
           2T(n/2) + cn     if n > 1
Use the Master Theorem:

Here a = 2 and b = 2, so n^(log_b a) = n; compare it with f(n) = cn = Θ(n)

⇒ Case 2: T(n) = Θ(n lg n)
Merge-Sort Time Complexity
If the time for the merging operation is proportional to n, then the
computing time for merge sort is described by the recurrence relation

    T(n) = c1                  n = 1, c1 is a constant
           2T(n/2) + c2·n      n > 1, c2 is a constant

Assume n = 2^k; then

T(n) = 2T(n/2) + c2·n
     = 2(2T(n/4) + c2·n/2) + c2·n
     = 4T(n/4) + 2c2·n
     = …
     = 2^k·T(1) + k·c2·n
     = c1·n + c2·n·log n = O(n log n)
Summary
• Merge-Sort
– Most of the work is done in combining the
solutions.
– Best case takes Θ(n log n) time
– Average case takes Θ(n log n) time
– Worst case takes Θ(n log n) time
• Advantages of Divide and Conquer Algorithms
• 1. Efficiency: Divide and conquer can lead to efficient algorithms for solving
complex problems. By breaking down the problem into smaller parts, each part can
be solved independently and potentially in parallel, reducing the overall time
complexity.
• 2. Simplicity: The approach simplifies complex problems by breaking them into
smaller, well-defined sub-problems. This can make the problem-solving process
more manageable and easier to understand.
• 3. Modularity: Divide and conquer promotes modular design. Each sub-problem
can be solved independently, making the codebase more organized and easier to
maintain.
• 4. Reusability: The sub-problems created during the division phase can often be
reused in different contexts or for solving similar problems, leading to code
reusability.
• 5. Optimization Opportunities: Optimization techniques can be applied to
individual sub-problems, improving the efficiency of the solution overall.
• Disadvantages of Divide and Conquer Algorithms
• 1. Overhead: The division and combination phases of the approach may introduce some
overhead due to the need for additional calculations and merging of results.
• 2. Complexity of Implementation: In some cases, implementing the divide and conquer
approach might be more complex than using simpler algorithms. This complexity can lead to
errors if not implemented correctly.
• 3. Memory Usage: Divide and conquer algorithms may require additional memory for storing
intermediate results or dividing the problem into sub-problems. This can be a concern for
problems with large input sizes.
• 4. Suboptimal Solutions: In some cases, the division of the problem may not lead to the
most optimal solution. Poorly chosen divisions or a mismatch between sub-problems can lead
to suboptimal results.
• 5. Recursion Overhead: Many divide and conquer algorithms are implemented using
recursion, which can introduce overhead and potentially lead to stack overflow issues for very
deep recursion levels.
BRUTE FORCE
 Brute force - the simplest of the design strategies
 is a straightforward approach to solving a problem,
usually directly based on the problem’s statement and
definitions of the concepts involved.
 the brute-force strategy is the easiest to apply.
 Brute force is important due to its wide applicability and
simplicity.
 Weakness is the subpar efficiency of most brute-force
algorithms.
 Important Examples:
Selection sort, Brute-force string matching, Convex hull
problem
Exhaustive search: Traveling salesman, Knapsack, and
Assignment problems
Brute Force
A straightforward approach, usually based directly on the
problem’s statement and definitions of the concepts
involved
Examples – based directly on definitions:
1. Computing a^n (a > 0, n a nonnegative integer)
2. Computing n!
3. Multiplying two matrices
4. Searching for a key of a given value in a list

EXHAUSTIVE SEARCH
Many Brute Force Algorithms use Exhaustive
Search

 Exhaustive search is a brute-force approach to combinatorial


problems.

Approach:
1. Enumerate and evaluate all solutions, and
2. Choose the solution that meets some criterion (e.g., smallest)
Exhaustive Search – More Detail
A brute force solution to a problem involving search for an element
with a special property, usually among combinatorial objects such
as permutations, combinations, or subsets of a set.

Method:
– generate a list of all potential solutions to the problem in a
systematic manner
– evaluate potential solutions one by one, disqualifying
infeasible ones and, for an optimization problem, keeping track
of the best one found so far

– when the search ends, announce the solution(s) found

EXHAUSTIVE SEARCH
 Examples:
 Traveling salesman problem
 Finding the shortest tour through a given set of n cities
that visits each city exactly once before returning to the
city where it started.
 Knapsack problem
 Finding the most valuable subset of n given items that
fits into the knapsack.
 Assignment problem
 Finding an assignment of n people to execute n jobs
with the smallest total cost.
TRAVELLING SALESMAN PROBLEM
 The traveling salesman problem (also known as TSP) is
one of the most interesting and difficult problems in
combinatorial optimization.
 It is the problem of finding the shortest path available to make
a tour of a number of cities such that each city is visited
exactly once before returning to the original
starting point.
 Some of the solution methods of TSP include:
 Brute-force method.
 Approximations
• Nearest neighbor
• Greedy approach
 Branch and bound.
AS A GRAPH PROBLEM

 TSP can be modelled as an undirected weighted graph,


such that vertices represent cities, paths are the
graph's edges, and a path's distance is the edge's weight.
 The model is often a complete graph (a graph with N
vertices and an edge between every two vertices).
 It is a minimization problem starting and finishing at same
vertex after having visited each other vertex exactly once.
 The Traveling Salesman Problem (TSP) is the problem
of finding a minimum-weight Hamilton circuit in a
complete graph.
 A Hamilton circuit is a circuit that uses every vertex of
a graph once.
 The brute-force method is to simply generate all
possible tours and compute their distances. The
shortest tour is thus the optimal tour. To solve
TSP using Brute-force method we can use the
following steps.
1. Draw and list all the possible tours
2. calculate the distance of each tour
3. choose the shortest tour, this is the optimal
solution.
OR
1. List of all possible Hamilton Circuits (Tours)
2. Calculate the distance of each circuit found in
Step1
3. Pick the circuit that has the shortest distance.
The brute force method explores all possible routes (permutations)
between cities, calculates the total distance for each route, and selects
the shortest one. It guarantees an optimal solution but is inefficient.

•Time Complexity: O(n!), where n is the number of cities.

•Advantages: Guarantees the optimal solution.

•Disadvantages: Extremely slow for large datasets due to factorial


growth in possible routes.
Example 1: Traveling Salesman Problem

• Given n cities with known distances between each pair, find the shortest
tour that passes through all the cities exactly once before returning to the
starting city
• More formally: Find shortest Hamiltonian circuit in a weighted connected
graph
• Example:

[Figure: complete graph on four cities a, b, c, d with edge weights
a–b = 2, a–c = 8, a–d = 5, b–c = 3, b–d = 4, c–d = 7.]
TSP by Exhaustive Search
Tour Cost
a→b→c→d→a 2+3+7+5 = 17
a→b→d→c→a 2+4+7+8 = 21
a→c→b→d→a 8+3+4+5 = 20
a→c→d→b→a 8+7+4+2 = 21
a→d→b→c→a 5+4+3+8 = 20
a→d→c→b→a 5+7+3+2 = 17
Have we considered all tours?
Do we need to consider more?
Any way to consider fewer?
Efficiency: Number of tours = number of …

TSP by Exhaustive Search
Tour Cost
a→b→c→d→a 2+3+7+5 = 17
a→b→d→c→a 2+4+7+8 = 21
a→c→b→d→a 8+3+4+5 = 20
a→c→d→b→a 8+7+4+2 = 21
a→d→b→c→a 5+4+3+8 = 20
a→d→c→b→a 5+7+3+2 = 17
Have we considered all tours? Start elsewhere: b-c-d-a-b
Do we need to consider more? No
Any way to consider fewer? Yes: Reverse
Efficiency: # tours = O(# permutations of b,c,d) = O(n!)

PSEUDO CODE OF TSP

1. Get an initial tour; call it T


2. best_tour ⇦ T
3. best_score ⇦ score(T)
4. while there are more permutations of T do the following
4.1. generate a new permutation of T
4.2. if score(T) < best_score then
4.2.1. best_tour ⇦ T
4.2.2. best_score ⇦ score(T)
5. print best_tour and best_score
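As an illustrative C sketch of this pseudocode, the following program enumerates all tours of the four-city example above by fixing the start city and permuting the rest (the distance matrix encodes the example graph; all names are ours):

#include <stdio.h>

#define N 4
/* Distances for the example: a-b=2, a-c=8, a-d=5, b-c=3, b-d=4, c-d=7. */
int dist[N][N] = {
    {0, 2, 8, 5},
    {2, 0, 3, 4},
    {8, 3, 0, 7},
    {5, 4, 7, 0},
};
int best_cost = 1 << 30;
int best_tour[N];

/* Cost of the closed tour city[0] -> city[1] -> ... -> city[N-1] -> city[0]. */
int tour_cost(const int city[]) {
    int c = 0;
    for (int i = 0; i < N; i++)
        c += dist[city[i]][city[(i + 1) % N]];
    return c;
}

/* Fix city[0] as the start; recursively permute city[k..N-1]. */
void search(int city[], int k) {
    if (k == N) {                        /* one complete tour: score it */
        int c = tour_cost(city);
        if (c < best_cost) {
            best_cost = c;
            for (int i = 0; i < N; i++) best_tour[i] = city[i];
        }
        return;
    }
    for (int i = k; i < N; i++) {
        int t = city[k]; city[k] = city[i]; city[i] = t;   /* choose */
        search(city, k + 1);
        t = city[k]; city[k] = city[i]; city[i] = t;       /* un-choose */
    }
}

int main(void) {
    int city[N] = {0, 1, 2, 3};
    search(city, 1);                          /* examines (n-1)! = 6 tours */
    printf("best cost = %d\n", best_cost);    /* prints 17 for this instance */
    return 0;
}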
Greedy Approach
Algorithm
• The travelling salesman problem takes a graph G(V, E) as
input and declares another graph (say G') as the output, which
will record the path the salesman is going to take from one
node to another.
• The algorithm begins by sorting all the edges in the input
graph G from the least distance to the largest distance.
• The first edge selected is the edge with least distance, with one
of its two vertices (say A and B) being the origin node (say
A).
• Then, among the adjacent edges of the node other than the
origin node (B), find the least cost edge and add it onto the
output graph.
• Continue the process with further nodes, ensuring there are
no cycles in the output graph and that the path returns to the
origin node A.
• However, if the origin is specified in the given problem, then
the solution must always start from that node.
[For the six-city graph shown in the accompanying figure:]
The shortest path that originates and
ends at A is A → B → C → D → E → F → A.
The cost of the path is: 16 + 21 + 12
+ 15 + 16 + 34 = 114.
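One simple greedy variant of this idea is the nearest-neighbor heuristic: from the current city, always move to the closest unvisited city. A C sketch, reusing the four-city distance matrix (dist, N) from the brute-force example above (names are illustrative):

/* Nearest-neighbor tour from 'start' over the global dist[N][N] matrix. */
int nearest_neighbor(int start, int tour[]) {
    int visited[N] = {0};
    int cur = start, cost = 0;
    visited[cur] = 1;
    tour[0] = cur;
    for (int step = 1; step < N; step++) {
        int next = -1;
        for (int j = 0; j < N; j++)          /* pick the cheapest unvisited city */
            if (!visited[j] && (next == -1 || dist[cur][j] < dist[cur][next]))
                next = j;
        cost += dist[cur][next];
        visited[next] = 1;
        tour[step] = next;
        cur = next;
    }
    return cost + dist[cur][start];          /* close the cycle back to the start */
}

On the four-city instance this happens to return the optimal cost 17, but in general the greedy choice gives no optimality guarantee.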
KNAPSACK PROBLEM

A thief robbing a store finds n items. The i-th item is worth
Vi dollars and weighs Wi pounds, where Vi and Wi are
integers. The thief wants to take as valuable a load as
possible, but he can carry at most W pounds in his knapsack,
for some integer W. Which items should he take?
 This problem has two versions:
 0-1 knapsack Problem: For each item, the thief must
either take an item or leave it behind; he cannot take a
fractional amount of an item or take an item more than
once
 Fractional knapsack problem: The thief can take
fractions of items, rather than having to make a binary
(0-1) choice for each item
KNAPSACK PROBLEM(AN ALTERNATE FORM)

 It is concerned with a knapsack that has positive integer


volume (or capacity) V. There are n distinct items that may
potentially be placed in the knapsack. Item i has a positive
integer volume Vi and positive integer benefit Bi . In
addition, there are Qi copies of item i available, where
quantity Qi is a positive integer satisfying 1<= Qi <=∞ .
 Let Xi determines how many copies of item i are to be
placed into the knapsack. The goal is to:
Maximize
    Σ_{i=1}^{n} Bi·Xi
subject to the constraint
    Σ_{i=1}^{n} Vi·Xi <= V
and
    0 <= Xi <= Qi.
BRUTE FORCE SOLUTION OF KNAPSACK PROBLEM

 Consider all the subsets of the set of n items given.


 Computing the total weight of each subset in order to
identify feasible subsets (the ones with the total not
exceeding the knapsack’s capacity).
 Finding a subset of the largest value among them.
Example 2: Knapsack Problem
Given n items:
– weights: w1 w2 … wn
– values: v1 v2 … vn
– a knapsack of capacity W
Find most valuable subset of the items that fit into the
knapsack
Example: Knapsack capacity W=16
item weight value
1 2 $20
2 5 $30
3 10 $50
4 5 $10
Knapsack Problem by Exhaustive Search
Subset Total weight Total value
{1} 2 $20
{2} 5 $30
{3} 10 $50
{4} 5 $10
{1,2} 7 $50
{1,3} 12 $70
{1,4} 7 $30
{2,3} 15 $80
{2,4} 10 $40
{3,4} 15 $60
{1,2,3} 17 not feasible
{1,2,4} 12 $60
{1,3,4} 17 not feasible
{2,3,4} 20 not feasible
{1,2,3,4} 22 not feasible Efficiency: Ω(2^n)
KNAPSACK PROBLEM
(ANALYSIS OF BRUTE FORCE SOLUTION)

 Since there are n items, there are 2^n possible
combinations of items.
 We go through all combinations and find the one
with the maximum value and with total weight less
than or equal to the knapsack capacity W.
 Running time will be O(2^n).
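A C sketch of this exhaustive search for the W = 16 example above (each bit pattern encodes one subset of the items; all names are our illustrative choices):

#include <stdio.h>

int main(void) {
    int w[] = {2, 5, 10, 5};             /* item weights from the example */
    int v[] = {20, 30, 50, 10};          /* item values */
    int n = 4, W = 16;
    int best_value = 0, best_set = 0;
    for (int set = 0; set < (1 << n); set++) {       /* all 2^n subsets */
        int weight = 0, value = 0;
        for (int i = 0; i < n; i++)
            if (set & (1 << i)) { weight += w[i]; value += v[i]; }
        if (weight <= W && value > best_value) {     /* feasible and better */
            best_value = value;
            best_set = set;
        }
    }
    printf("best value = %d (subset mask %d)\n", best_value, best_set);
    /* For this instance: best value 80, achieved by items {2, 3}. */
    return 0;
}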
ASSIGNMENT PROBLEM
 Given n agent to be assigned to execute n jobs, one agent
per job, find an assignment with the smallest total cost.
 C[i, j] is the cost for the ith agent assigned to the jth
job.
 Example: A company has 4 machines available for
assignment to 4 tasks. Any machine can be assigned to
any task, and each task requires processing by one
machine. The time required by the machine for the
processing of each task is given in the table below.

 The company wants to minimize the total processing time


needed of all four tasks.
ASSIGNMENT PROBLEM
 A small instance of the problem: the cost matrix below, where
C[i, j] is the cost of assigning person i to job j:

              Job 1   Job 2   Job 3   Job 4
Person 1        9       2       7       8
Person 2        6       4       3       7
Person 3        5       8       1       8
Person 4        7       6       9       4
<1, 2, 3, 4> cost = 9 + 4 + 1 + 4 = 18 <2, 1, 3, 4> cost = 2 + 6 + 1 + 4 = 13 (Min)
<1, 2, 4, 3> cost = 9 + 4 + 8 + 9 = 30 <2, 1, 4, 3> cost = 2 + 6 + 8 + 9 = 25
<1, 3, 2, 4> cost = 9 + 3 + 8 + 4 = 24 <2, 3, 1, 4> cost = 2 + 3 + 5 + 4 = 14

<1, 3, 4, 2> cost = 9 + 3 + 8 + 6 = 26 <2, 3, 4, 1> cost = 2 + 3 + 8 + 7 = 20


<1, 4, 2, 3> cost = 9 + 7 + 8 + 9 = 33 <2, 4, 1, 3> cost = 2 + 7 + 5 + 9 = 23

<1, 4, 3, 2> cost = 9 + 7 + 1 + 6 = 23 <2, 4, 3, 1> cost = 2 + 7 + 1 + 7 = 17, etc

We can describe feasible solutions to the assignment problem as n-tuples
<j1, . . . , jn> in which the i-th component, i = 1, . . . , n, indicates the column of the
element selected in the i-th row (i.e., the job number assigned to the i-th person). For
example, for the cost matrix above, <2, 3, 4, 1> indicates the assignment of Person 1
to Job 2, Person 2 to Job 3, Person 3 to Job 4, and Person 4 to
Job 1. Similarly, there are 4! = 4 · 3 · 2 · 1 = 24 such permutations in total.
ASSIGNMENT PROBLEM

 An instance of the assignment problem is completely


specified by its cost matrix C.
 Select one element in each row so that all selected
elements are in different columns and the total sum of the
selected elements is the smallest possible.
 Generate all the permutations of integers 1,2, … n,
computing the total cost of each assignment by summing
up the corresponding elements of the cost matrix, and
finally select the one with smallest sum.
Brute-Force Strengths and Weaknesses
• Advantages:
• Simple to understand and implement.
• Guaranteed to find a solution if one exists.
• Works well for small datasets.

• Disadvantages:
• Can be extremely slow for large datasets.
• Exponential time complexity can lead to impractical
runtimes.
• Not suitable for real-time applications.
• Requires large memory.
• May require significant computational resources.

