Computer Algorithms
Segment 1
References:
1. Introduction to Algorithms - Cormen
2. Computer Algorithms - Sahni
3. Algorithms - S. Dasgupta
Definition
• The algorithm is one of the fundamental concepts in computer science.
• Informally:
1) An algorithm is a set of steps that define how a task is performed.
- Examples include algorithms for constructing model airplanes, for operating washing machines, and for playing music.
2) An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.
- An example of this type is the sorting problem (i.e., sort a sequence of numbers into non-decreasing order).
What kinds of problems are solved by algorithms?
Sorting is by no means the only computational problem for which algorithms have been developed. Practical applications of algorithms are ubiquitous and include the following examples:
• The Human Genome Project has made great progress toward the goals of identifying all the 100,000 genes in human DNA, determining the sequences of the 3 billion chemical base pairs that make up human DNA, storing this information in databases, and developing tools for data analysis. Each of these steps requires sophisticated algorithms.
• The Internet enables people all around the world to quickly access and retrieve large amounts of information. With the aid of clever algorithms, sites on the Internet are able to manage and manipulate this large volume of data. Examples of problems that make essential use of algorithms include finding good routes on which the data will travel and using a search engine to quickly find pages on which particular information resides.
• Electronic commerce enables goods and services to be negotiated and exchanged electronically, and it depends on the privacy of personal information such as credit card numbers, passwords, and bank statements. The core technologies used in electronic commerce include public-key cryptography and digital signatures, which are based on numerical algorithms and number theory.
• Manufacturing and other commercial enterprises often need to allocate scarce resources in the most beneficial way.
• An oil company may wish to know where to place its wells in order to maximize its expected profit.
• A political candidate may want to determine where to spend money buying campaign advertising in order to maximize the chances of winning an election.
• An airline may wish to assign crews to flights in the least expensive way possible, making sure that each flight is covered and that government regulations regarding crew scheduling are met.
• An Internet service provider may wish to determine where to place additional resources in order to serve its customers more effectively. All of these are examples of problems that can be solved using linear programming.
What kinds of problems are solved by algorithms? (Some Specific Problems)
• We are given a road map on which the distance between each pair of adjacent intersections is marked, and we wish to determine the shortest route from one intersection to another. The number of possible routes can be huge, even if we disallow routes that cross over themselves. How do we choose which of all possible routes is the shortest?
• We are given two ordered sequences of symbols, X = <x1, x2, …, xm> and Y = <y1, y2, …, yn>, and we wish to find a longest common subsequence of X and Y. A subsequence of X is just X with some (or possibly all or none) of its elements removed.
• We are given n points in the plane, and we wish to find the convex hull of these points.
Definition
• Formally
- An algorithm is an ordered set of unambiguous, executable steps, defining a terminating process
• Question
- In what sense do the steps described by the following list of instructions fail to constitute an algorithm?
Step 1. Take a coin out of your pocket and put it on the table
Step 2. Return to Step 1
• A machine-compatible representation of an algorithm is called a program
Criteria/Properties
• All algorithms must satisfy the following criteria:
1. Input: Zero or more quantities are externally supplied.
2. Output: At least one quantity is produced.
3. Definiteness: Each instruction is clear and unambiguous.
4. Finiteness: If we trace out the instructions of an algorithm, then for all cases, the algorithm terminates after a finite number of steps.
5. Effectiveness: Every instruction must be very basic so that it can be carried out, in principle, by a person using only pencil and paper. It must be feasible.
• Algorithms that are definite and effective are also called computational procedures. One important example of a computational procedure is the operating system of a digital computer.
Areas of Study
• The study of algorithms includes many important and active areas of research. Four distinct areas of study can be identified:
1. How to devise algorithms: an understanding of algorithmic structures and their representation in the form of pseudocode or flowcharts.
- algorithm discovery - problem-solving approaches/techniques
2. How to validate algorithms: determining the correctness.
3. How to analyze algorithms: determining the time and storage an algorithm requires.
4. How to test a program: debugging.
• Algorithm specification
- Pseudocode: algorithms are represented with a precisely defined textual structure
- Flowcharts
History
• The word algorithm comes from the name of a Persian author, Abu Ja'far Mohammad ibn Musa al-Khowarizmi, who wrote a textbook on mathematics.
• The subject began as a branch of mathematics.
• The Greek mathematician Euclid invented an algorithm for finding the greatest common divisor (gcd) of two positive integers (between 400 and 300 B.C.) - arguably the first meaningful algorithm ever discovered.
• The weaving loom invented in 1801 by a Frenchman, Joseph Jacquard, was controlled by instructions encoded on punched cards.
• Charles Babbage, English mathematician, designed the difference engine and the analytical engine, machines capable of executing algorithms.
Algorithms as a technology
• Suppose computers were infinitely fast and computer memory was free. Would you have any reason to study algorithms?
• Of course, computers may be fast, but they are not infinitely fast. And memory may be inexpensive, but it is not free. Computing time is therefore a bounded resource, and so is space in memory. You should use these resources wisely, and algorithms that are efficient in terms of time or space will help you do so.
• Analysis of algorithms, or performance analysis, refers to the task of determining how much computing time and storage an algorithm requires. Generally, by analyzing several candidate algorithms for a problem, the most efficient one can be identified.
Algorithms as a technology: Efficiency
• Different algorithms devised to solve the same problem often differ dramatically in their efficiency. These differences can be much more significant than differences due to hardware and software.
We will see two algorithms for sorting. The first, insertion sort, takes time roughly equal to c1·n^2 to sort n items, where c1 is a constant that does not depend on n. The second, merge sort, takes time roughly equal to c2·n·log2 n, where c2 is another constant that also does not depend on n. Insertion sort typically has a smaller constant factor than merge sort, so that c1 < c2. Let us pit a faster computer (computer A) running insertion sort against a slower computer (computer B) running merge sort. Each computer must sort an array of 10 million numbers.
Suppose that computer A executes 10 billion instructions per second and computer B executes only 10 million instructions per second, so that computer A is 1000 times faster than computer B. Suppose insertion sort takes 2n^2 instructions on computer A, and merge sort takes 50·n·log2 n instructions on computer B.
Computer A takes 20,000 seconds (more than 5.5 hours) while computer B takes ≈1163 seconds (less than 20 minutes). Computer B runs more than 17 times faster than computer A!
The example above shows that we should consider algorithms, like computer hardware, as a technology. Total system performance depends on choosing efficient algorithms as much as on choosing fast hardware. Just as rapid advances are being made in other computer technologies, they are being made in algorithms as well.
Time needed by algorithms of different time functions for different problem sizes
(assuming 10^6 operations/sec; entries are in seconds unless noted)

Size of n   n          n^2       n^3       2^n        n!              n^n
1           0.000001   0.000001  0.000001  0.000002   0.000001        0.000001
5           0.000005   0.000025  0.000125  0.000032   0.000120        0.003125
10          0.00001    0.0001    0.001     0.001024   3.6288          2.778 hrs
20          0.00002    0.0004    0.008     1.04858    78218 yrs       3.37x10^12 yrs
50          0.00005    0.0025    0.125     36.1979 yrs  9.77x10^60 yrs  2.87x10^70 yrs
100         0.0001     0.01      1.00      4x10^17 yrs  3x10^143 yrs    3.2x10^183 yrs
Insertion Sort
We start with insertion sort, which is an efficient algorithm for sorting a small number of elements.
[The original slides here showed the INSERTION-SORT pseudocode and its step-by-step operation on an example array.]
Loop invariants and the correctness of insertion sort
Figure 2.2 shows how this algorithm works for A = <5, 2, 4, 6, 1, 3>. The index j indicates the "current card" being inserted into the hand. At the beginning of each iteration of the for loop, which is indexed by j, the subarray consisting of elements A[1…j-1] constitutes the currently sorted hand, and the remaining subarray A[j+1…n] corresponds to the pile of cards still on the table. In fact, elements A[1…j-1] are the elements originally in positions 1 through j-1, but now in sorted order. We state these properties of A[1…j-1] formally as a loop invariant.
We use loop invariants to help us understand why an algorithm is correct. We must show three things about a loop invariant:
Initialization: It is true prior to the first iteration of the loop.
Maintenance: If it is true before an iteration of the loop, it remains true before the next iteration.
Termination: When the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct.
Let us see how these properties hold for insertion sort.
Correctness of insertion sort
Initialization: We start by showing that the loop invariant holds before the first loop iteration, when j=2. The subarray A[1…j-1] therefore consists of just the single element A[1], which is in fact the original element in A[1]. Moreover, this subarray is sorted, which shows that the loop invariant holds prior to the first iteration of the loop.
Maintenance: Next we tackle the second property: showing that each iteration maintains the loop invariant. Informally, the body of the for loop works by moving A[j-1], A[j-2], A[j-3], and so on by one position to the right until it finds the proper position for A[j] (lines 4-7), at which point it inserts the value of A[j] (line 8). The subarray A[1…j] then consists of the elements originally in A[1…j], but in sorted order. Incrementing j for the next iteration of the for loop then preserves the loop invariant.
Termination: Finally, we examine what happens when the loop terminates. The condition causing the for loop to terminate is that j > A.length = n. Because each loop iteration increases j by 1, we must have j = n+1 at that time. Substituting n+1 for j in the wording of the loop invariant, we have that the subarray A[1…n] consists of the elements originally in A[1…n], but in sorted order. Observing that the subarray A[1…n] is the entire array, we conclude that the entire array is sorted. Hence, the algorithm is correct.
Pseudocode conventions
Home work: See pages 19 & 20.
Rewrite the INSERTION-SORT procedure to sort into nonincreasing instead of nondecreasing order.
Analyzing algorithms
Analyzing an algorithm has come to mean predicting the resources that the algorithm requires. Occasionally, resources such as memory, communication bandwidth, or computer hardware are of primary concern, but most often it is computational time that we want to measure.
Generally, by analyzing several candidate algorithms for a problem, we can identify the most efficient one.
Such analysis may indicate more than one viable candidate, but we can often discard several inferior algorithms in the process.
Level of analysis
• Time taken by an algorithm?
– Number of seconds a program takes to run? No…
– Too machine dependent: not measuring the algorithm so much as the program, machine, implementation
• Here, we are not interested in low-level machine details
– Want technology-independent answers
• Interested rather in measuring
– Which data take most space
– Which operations take most time, in a device-independent way
• Notion of elementary operation
– 1 assignment, 1 addition, 1 comparison (test), …
• Determine how many elementary operations an algorithm performs for inputs of different sizes (problem sizes), or for same-size inputs in different orders
• Distinction thus between performance and complexity
– Complexity affects performance, but not vice versa
– Interested in how resource requirements scale as the problem gets larger
Best, worst, average cases
• It is hardly ever true that an algorithm takes the same time on different instances of the same size
• Choice: best, worst, or average case analysis
• Best case analysis
– Time taken on the best input of size n
• Worst case analysis
– Time taken on the worst input of size n
• Average case analysis
– Time taken as the average of times taken on inputs of size n
Which to choose?
• Usually interested in the worst case scenario:
– The most operations that might be made for some problem size
• Worst case is the only safe analysis – a guaranteed upper bound (best case is too optimistic)
• Average case analysis is harder
– Usually have to assume some probability distribution of the data; a more sophisticated model
– E.g., if looking for a specific letter in a random list of letters, might expect each letter to appear 1/26 of the time (for English)
// returns the max value in array A of length n
int Max(int A[], int n)
{
    int i, max;
    max = A[0];
    for (i = 1; i < n; i++)
        if (max < A[i])
            max = A[i];
    return max;
}
• Worst case: the THEN branch (max = A[i]) is always executed, e.g. when A is in increasing order
Scale, order vs exactness
• When looking at complexity, we usually ignore the exact number of operations and the cost of small, infrequent ones, and concentrate on the size of the input
• Interested more in how the number of operations relates to problem size – especially for big problems
Analysis example
// sum the first n integers in array a
int sum(int a[], int n)
{
    int sum = 0;
    for (int j = 0; j < n; j++)
        sum = sum + a[j];
    return sum;
}
General methodology
1. Characterize the size of the input
 - Input is an array containing at least n integers
 - Thus, the size of the input is n
2. Count how many operations (steps) are taken in the algorithm for an input of size n
 - 1 step is an elementary operation
 - +, <, a[j] (indexing into an array), =, …
Detail of analysis
int sum(int a[], int n) {
    int sum = 0;                   // (1)
    for (int j = 0; j < n; j++)    // (2) j=0   (3) j<n   (4) j++
        sum = sum + a[j];          // (5) a[j]  (6) +     (7) =
    return sum;                    // (8)
}
• Operations 1, 2, 8 only happen once (so 3 such operations)
• Operations 3, 4, 5, 6, 7 happen once for each iteration of the loop (5 · n)
• Total operations: 5n + 3
• Complexity function: f(n) = 5n + 3
How does size affect running time?
• 5n + 3 is an estimate of running time for different values of n
• As n grows, the number of operations grows in linear proportion to n, for this sum function

n          Operations
10         53
100        503
1000       5003
1000000    5000003
Summary of methodology
• Count operations/steps taken by the algorithm
• Use the count to derive a formula, based on the size of the problem n
– Another formula might be, e.g.: n^2
– Or 2n, or n^2/2, or (n+1)^2, or 7n^2 + 12n + 4
• Use the formula to help understand overall efficiency
Usefulness?
• What if we kept doubling the size of n?

n       log2 n   5n      n·log2 n   n^2      2^n
8       3        40      24         64       256
16      4        80      64         256      65536
32      5        160     160        1024     ~10^9
64      6        320     384        4096     ~10^19
128     7        640     896        16384    ~10^38
256     8        1280    2048       65536    ~10^76
10000   13       50000   10^5       10^8     ~10^3010
Analyzing Insertion Sort
The running time of the algorithm is the sum of running times for each statement executed; a statement that takes c_i steps to execute and executes n times will contribute c_i·n to the total running time. To compute T(n), the running time of INSERTION-SORT on an input of n values, we sum the products of the cost and times columns.
[The original slide showed the cost/times table and the resulting expression for T(n).]
• Even for inputs of a given size, an algorithm's running time may depend on which input of that size is given.
• The best case occurs if the array is already sorted. For each j = 2, 3, …, n, we then find that A[i] <= key in the first test of the while loop, when i has its initial value of j-1. Thus t_j = 1 for j = 2, 3, …, n.
• The best-case running time T(n) can therefore be expressed as an + b for constants a and b. It is a linear function of n.
• If the array is in reverse sorted order, that is, in decreasing order, the worst case results. We must compare each element A[j] with each element in the entire sorted subarray A[1…j-1], and so t_j = j for j = 2, 3, …, n. (Note that the sum of j for j = 2..n is n(n+1)/2 − 1, and the sum of (j−1) for j = 2..n is n(n−1)/2.)
We find that in the worst case, the running time of INSERTION-SORT is quadratic in n.
• The worst case can be expressed as an^2 + bn + c for some constants a, b, c. It is a quadratic function of n.
Worst-case and average-case analysis
In our analysis of insertion sort, we looked at both the best case, in which the input array was already sorted, and the worst case, in which the input array was reverse sorted. For the remainder of this book, though, we shall usually concentrate on finding only the worst-case running time, that is, the longest running time for any input of size n. We give three reasons for this orientation.
1. The worst-case running time of an algorithm gives us an upper bound on the running time for any input. Knowing it provides a guarantee that the algorithm will never take any longer. We need not make some educated guess about the running time and hope that it never gets much worse.
2. For some algorithms, the worst case occurs fairly often. For example, in searching a database for a particular piece of information, the searching algorithm's worst case will often occur when the information is not present in the database. In some applications, searches for absent information may be frequent.
3. The "average case" is often roughly as bad as the worst case. Suppose that we randomly choose n numbers and apply insertion sort. How long does it take to determine where in subarray A[1…j-1] to insert element A[j]? On average, half the elements in A[1…j-1] are less than A[j], and half the elements are greater. On average, therefore, we check half of the subarray A[1…j-1], and so t_j is about j/2. The resulting average-case running time turns out to be a quadratic function of the input size, just like the worst-case running time.
• The scope of average-case analysis is limited, because it may not be apparent what constitutes an "average" input for a particular problem.
Asymptotic Notation
• The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers N = {0, 1, 2, …}. Such notations are convenient for describing the worst-case running-time function T(n), which is usually defined only on integer input sizes.
• Three basic asymptotic notations:
1. O-notation (Big "oh"): When we have an asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) (pronounced "big-oh of g of n" or sometimes just "oh of g of n") the set of functions
O(g(n)) = {f(n): there exist positive constants c and N such that 0 <= f(n) <= c·g(n) for all n >= N}
2. Ω-notation (Big "omega"): Ω-notation provides an asymptotic lower bound. For a given function g(n), we denote by Ω(g(n)) (pronounced "big-omega of g of n" or sometimes just "omega of g of n") the set of functions
Ω(g(n)) = {f(n): there exist positive constants c and N such that 0 <= c·g(n) <= f(n) for all n >= N}
3. Θ-notation (Big "theta"): Θ-notation asymptotically bounds a function from above and below. For a given function g(n), we denote by Θ(g(n)) (pronounced "big-theta of g of n" or sometimes just "theta of g of n") the set of functions
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and N such that 0 <= c1·g(n) <= f(n) <= c2·g(n) for all n >= N}
Asymptotic Notation (cont.)
Big-O: 3n+2 = O(n), as 3n+2 <= 4n for all n >= 2.
100n+6 = O(n), as 100n+6 <= 101n for all n >= 6.
10n^2+4n+2 = O(n^2), as 10n^2+4n+2 <= 11n^2 for all n >= 5.
1000n^2+100n−6 = O(n^2), as 1000n^2+100n−6 <= 1001n^2 for all n >= 100.
3n+2 ≠ O(1), as 3n+2 is not less than or equal to c for any constant c and all n >= N. Similarly, 10n^2+4n+2 ≠ O(n).
Big-Ω: 3n+2 = Ω(n), as 3n+2 >= 3n for all n >= 1.
100n+6 = Ω(n), as 100n+6 >= 100n for all n >= 1.
10n^2+4n+2 = Ω(n^2), as 10n^2+4n+2 >= n^2 for all n >= 1. Also observe that 3n+3 = Ω(1), 10n^2+4n+2 = Ω(n), and 10n^2+4n+2 = Ω(1).
Big-Θ: 3n+2 = Θ(n), as 3n+2 >= 3n for all n >= 2 and 3n+2 <= 4n for all n >= 2, so c1=3, c2=4 and N=2. 10n^2+4n+2 = Θ(n^2). 10·log n+4 = Θ(log n). 3n+2 ≠ Θ(1). 3n+3 ≠ Θ(n^2). 10n^2+4n+2 ≠ Θ(n). 10n^2+4n+2 ≠ Θ(1).
• For more examples, see pages 29-33 of the book by Sahni.
• Home work: (i) Is 2^(n+1) = O(2^n)? (ii) Is 2^(2n) = O(2^n)?
Some commonly found orders of growth (in order of increasing complexity)

O(1)         Constant (bounded)
O(log n)     Logarithmic
O(n)         Linear
O(n log n)   Log linear
O(n^2)       Quadratic
O(n^3)       Cubic
O(2^n)       Exponential
O(n!)        Exponential (Factorial)
O(n^n)       Exponential

The classes up through O(n^3) are polynomial: polynomial good, exponential bad.
[The original slide here showed a graph contrasting the growth of n and 2^n.]
Determining complexities
• Simple statements: O(1)
– Assignments of simple data types: int x = y;
– Arithmetic operations: x = 5 * y;
– Indexing into an array: array[i];
– (De)referencing pointers: listptr = list->next;
– Declarations of simple data types: int x, y;
– Simple conditional tests: if (i < 12)
• But not, e.g., if (SomeFunction() < 25)
If…then…else statements
• Worst case is the slowest possibility
if (cond) {
    sequence of statements 1    // say this is O(n)
}
else {
    sequence of statements 2    // say this is O(1)
}
• Overall, this if…then…else is thus O(n)
Loops
• Two parts to loop analysis
– How many iterations?
– How many steps for each iteration?
int sum = 0;
for (int j = 0; j < n; j++)
    sum = sum + j;
• Loop executes n times
• 4 steps per iteration = O(1)
• Total time here is n * O(1) = O(n * 1) = O(n)
Loops: simple
int sum = 0;
for (int j = 0; j < 100; j++)
    sum = sum + j;
• Loop executes 100 times
• 4 steps per iteration = O(1)
• Total time here is 100 * O(1) = O(100 * 1) = O(1)
• The iteration count is bounded regardless of n, so this is faster than the previous loop once n exceeds 100
Loops: while loops
done = FALSE; n = 22; result = 1;
while (!done) {
    result = result * n;
    n--;
    if (n == 1) done = TRUE;
}
• Loop terminates when done == TRUE
• This happens after n-1 iterations
• O(1) time per iteration
• O(n) total time here
Loops: nested
for (i = 0; i < n; i++) {
    for (j = 0; j < m; j++) {
        sequence of statements
    }
}
• Outer loop executes n times
• For every outer iteration, the inner loop executes m times
• Thus, the inner loop body executes n * m times in total
• Complexity is O(n*m)
• Where the inner condition is j < n (a common case), total complexity is O(n^2)
Sequences of statements
• For a sequence, determine individual times and add up
for (j = 0; j < n; j++)           // O(n^2)
    for (k = 0; k < j; k++)
        sum = sum + j*k;
for (l = 0; l < n; l++)           // O(n)
    sum = sum - 1;
printf("Sum is now %d", sum);     // O(1)
• Total is: O(n^2) + O(n) + O(1) = O(n^2)
When do constants matter?
• Normally we drop constants and lower-order terms when calculating Big-O.
• This means that two algorithms could have the same Big-O, although one may always be faster than the other
• In such cases, the constants do matter if we want to tell which algorithm is actually faster
• However, they do not matter when we want to know how algorithms scale under growth
• n^2 is always faster than 10n^2, but for both, if the problem size doubles, the time quadruples
Example 1
• Prove 7n^2 + 12n is O(n^2)
• Must show that two constants c and N exist such that
7n^2 + 12n <= c·n^2 for all n >= N
• Let N = 1 (existence proof – we can pick any N we want; we only need to show one exists)
• Now, does there exist some c such that
7n^2 + 12n <= c·n^2 for all n >= 1?
• For n >= 1 we have 12n <= 12n^2, which means
7n^2 + 12n <= 7n^2 + 12n^2
add: 7n^2 + 12n^2 = 19n^2
• Thus, let c be 19 (to be safe, let us say 20)
• Checking: take n = 2
• Is 7 * 2^2 + 12 * 2 <= 20 * 2^2 ?
– Yes: 52 <= 80
• Thus 7n^2 + 12n <= 20n^2 for all n >= 1
• Proved: 7n^2 + 12n is O(n^2)
Example 2
• Prove (n + 1)^2 is O(n^2)
• (n + 1)^2 expands to n^2 + 2n + 1
• n^2 + 2n + 1 <= n^2 + 2n^2 + n^2 for all n >= 1
• n^2 + 2n^2 + n^2 = 4n^2
• Thus, c is 4
• We can take N = 1 as before
• Arrive at: (n + 1)^2 <= 4n^2 for all n >= 1
• Thus, (n + 1)^2 is O(n^2)
More on Example 2
• Could also pick N = 3 and c = 2
• As (n + 1)^2 <= 2n^2 for all n >= 3
• Because 2n + 1 <= n^2 for all n >= 3
• However, we cannot pick N = 0
• Because (0+1)^2 > c·0^2 for any c > 0
Example 3
• Prove 3n^2 + 5 is O(n^2)
• We choose constant c = 4 and N = 3
• Why? Because for all n >= 3, 5 <= n^2, and so:
3n^2 + 5 <= 4n^2
Thus, by our definition, 3n^2 + 5 ∈ O(n^2)
• Now, prove 3n^2 + 5 is not O(n) – easy:
• Whatever constant c and value N one chooses, we can always find a value of n greater than N such that 3n^2 + 5 is greater than c·n
Example 4
• Prove 2n + 4 has order n, i.e. 2n + 4 ∈ O(n)
• Clearly, 2n <= 2n for all n
• 4 <= n if n >= 4
• 2n + 4 <= 3n for all n >= 4
• Thus, our constants are c = 3 and N = 4
• Re-express: 2n + 4 <= c·n for all n >= N
• Thus, by our definition, 2n + 4 ∈ O(n)
Example 5
• Prove 17n^2 + 45n + 46 is O(n^2)
• 17n^2 <= 17n^2 for all n
• 45n <= n^2 for all n >= 45
• 46 <= n^2 for all n >= 7
• 17n^2 + 45n + 46 <= 17n^2 + n^2 + n^2 = 19n^2
• Thus c = 19 and N = 45
• Thus 17n^2 + 45n + 46 <= 19n^2 for all n >= 45
• Proved: 17n^2 + 45n + 46 is O(n^2)

More Related Content

PPTX
Unit 1, ADA.pptx
jinkhatima
 
PPTX
Design & Analysis of Algorithm course .pptx
JeevaMCSEKIOT
 
PPTX
ANALYSIS AND DESIGN OF ALGORITHMS -M1-PPT
AIET
 
PDF
Introduction to Algorithms Complexity Analysis
Dr. Pankaj Agarwal
 
PPT
Data Structure and Algorithm chapter two, This material is for Data Structure...
bekidea
 
PDF
Design and Analysis Algorithms.pdf
HarshNagda5
 
PPTX
Chapter 1 Data structure.pptx
wondmhunegn
 
Unit 1, ADA.pptx
jinkhatima
 
Design & Analysis of Algorithm course .pptx
JeevaMCSEKIOT
 
ANALYSIS AND DESIGN OF ALGORITHMS -M1-PPT
AIET
 
Introduction to Algorithms Complexity Analysis
Dr. Pankaj Agarwal
 
Data Structure and Algorithm chapter two, This material is for Data Structure...
bekidea
 
Design and Analysis Algorithms.pdf
HarshNagda5
 
Chapter 1 Data structure.pptx
wondmhunegn
 

Similar to Segment_1_New computer algorithm for cse.pptx (20)

PPTX
Introduction to Data Structure and algorithm.pptx
esuEthopi
 
PDF
Data structures and algorithms Module-1.pdf
DukeCalvin
 
PDF
Algorithms Analysis.pdf
ShaistaRiaz4
 
PDF
Algorithm Analysis.pdf
NayanChandak1
 
PPTX
Data Structures - Lecture 1 [introduction]
Muhammad Hammad Waseem
 
PPT
chapter 1
yatheesha
 
PPTX
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
AntareepMajumder
 
PDF
introduction to analysis of algorithm in computer science
tissandavid
 
PDF
Introduction to analysis algorithm in computer Science
tissandavid
 
PPT
AOA Week 01.ppt
INAM352782
 
PPTX
design analysis of algorithmaa unit 1.pptx
rajesshs31r
 
PPTX
Introduction to Basics C Programming.pptx
Balaji Ganesh
 
PPTX
AoA Lec Design of algorithm spresentation
HamzaSadaat
 
PPT
Data Structure and Algorithms Department of Computer Science
donotreply20
 
PPTX
ADA_Module 1_MN.pptx- Analysis and design of Algorithms
madhu614742
 
PDF
Analysis of algorithm. big-oh notation.omega notation theta notation.performa...
AAGaikwad1
 
PPTX
Binary to hexadecimal algorithmic old.pptx
bulbul931579
 
PPTX
Analysis of Algorithm full version 2024.pptx
rajesshs31r
 
PPTX
Design Analysis of Alogorithm 1 ppt 2024.pptx
rajesshs31r
 
PPTX
Introduction to Algorithms Introduction to Algorithms.pptx
ArjayBalberan1
 
Introduction to Data Structure and algorithm.pptx
esuEthopi
 
Data structures and algorithms Module-1.pdf
DukeCalvin
 
Algorithms Analysis.pdf
ShaistaRiaz4
 
Algorithm Analysis.pdf
NayanChandak1
 
Data Structures - Lecture 1 [introduction]
Muhammad Hammad Waseem
 
chapter 1
yatheesha
 
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
AntareepMajumder
 
introduction to analysis of algorithm in computer science
tissandavid
 
Introduction to analysis algorithm in computer Science
tissandavid
 
AOA Week 01.ppt
INAM352782
 
design analysis of algorithmaa unit 1.pptx
rajesshs31r
 
Introduction to Basics C Programming.pptx
Balaji Ganesh
 
AoA Lec Design of algorithm spresentation
HamzaSadaat
 
Data Structure and Algorithms Department of Computer Science
donotreply20
 
ADA_Module 1_MN.pptx- Analysis and design of Algorithms
madhu614742
 
Analysis of algorithm. big-oh notation.omega notation theta notation.performa...
AAGaikwad1
 
Binary to hexadecimal algorithmic old.pptx
bulbul931579
 
Analysis of Algorithm full version 2024.pptx
rajesshs31r
 
Design Analysis of Alogorithm 1 ppt 2024.pptx
rajesshs31r
 
Introduction to Algorithms Introduction to Algorithms.pptx
ArjayBalberan1
 
Ad

Recently uploaded (20)

PPTX
NOI Hackathon - Summer Edition - GreenThumber.pptx
MartinaBurlando1
 
PPTX
Strengthening open access through collaboration: building connections with OP...
Jisc
 
PPTX
Measures_of_location_-_Averages_and__percentiles_by_DR SURYA K.pptx
Surya Ganesh
 
PPTX
Skill Development Program For Physiotherapy Students by SRY.pptx
Prof.Dr.Y.SHANTHOSHRAJA MPT Orthopedic., MSc Microbiology
 
PPTX
Congenital Hypothyroidism pptx
AneetaSharma15
 
PPTX
Five Point Someone – Chetan Bhagat | Book Summary & Analysis by Bhupesh Kushwaha
Segment_1_New computer algorithm for cse.pptx

  • 1. 1 Computer Algorithms, Segment 1. References: 1. Introduction to Algorithms - Cormen; 2. Computer Algorithms - Sahni; 3. Algorithms - S. Dasgupta
  • 2. 2 Definition
    • An algorithm is one of the fundamental concepts in computer science.
    • Informally:
      1) An algorithm is a set of steps that define how a task is performed.
         - Examples include algorithms for constructing model airplanes, for operating washing machines, and for playing music.
      2) An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.
         - An example of this type is the sorting problem (i.e., sort a sequence of numbers into nondecreasing order).
  • 3. 3 What kinds of problems are solved by algorithms?
    Sorting is by no means the only computational problem for which algorithms have been developed. Practical applications of algorithms are ubiquitous and include the following examples:
    • The Human Genome Project has made great progress toward the goals of identifying all the 100,000 genes in human DNA, determining the sequences of the 3 billion chemical base pairs that make up human DNA, storing this information in databases, and developing tools for data analysis. Each of these steps requires sophisticated algorithms.
  • 4. 4 What kinds of problems are solved by algorithms?
    • The Internet enables people all around the world to quickly access and retrieve large amounts of information. With the aid of clever algorithms, sites on the Internet are able to manage and manipulate this large volume of data. Examples of problems that make essential use of algorithms include finding good routes on which the data will travel and using a search engine to quickly find pages on which particular information resides.
    • Electronic commerce enables goods and services to be negotiated and exchanged electronically, and it depends on the privacy of personal information such as credit card numbers, passwords, and bank statements. The core technologies used in electronic commerce include public-key cryptography and digital signatures, which are based on numerical algorithms and number theory.
  • 5. 5 What kinds of problems are solved by algorithms?
    • Manufacturing and other commercial enterprises often need to allocate scarce resources in the most beneficial way.
    • An oil company may wish to know where to place its wells in order to maximize its expected profit.
    • A political candidate may want to determine where to spend money buying campaign advertising in order to maximize the chances of winning an election.
    • An airline may wish to assign crews to flights in the least expensive way possible, making sure that each flight is covered and that government regulations regarding crew scheduling are met.
    • An Internet service provider may wish to determine where to place additional resources in order to serve its customers more effectively.
    All of these are examples of problems that can be solved using linear programming.
  • 6. 6 What kinds of problems are solved by algorithms? (Some Specific Problems)
    • We are given a road map on which the distance between each pair of adjacent intersections is marked, and we wish to determine the shortest route from one intersection to another. The number of possible routes can be huge, even if we disallow routes that cross over themselves. How do we choose which of all possible routes is the shortest?
    • We are given two ordered sequences of symbols, X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn>, and we wish to find a longest common subsequence of X and Y. A subsequence of X is just X with some (or possibly all or none) of its elements removed.
    • We are given n points in the plane, and we wish to find the convex hull of these points.
  • 7. 7 Definition
    • Formally - An algorithm is an ordered set of unambiguous, executable steps, defining a terminating process.
    • Question: In what sense do the steps described by the following list of instructions fail to constitute an algorithm?
      Step 1. Take a coin out of your pocket and put it on the table
      Step 2. Return to Step 1
    • A machine-compatible representation of an algorithm is called a program.
  • 8. 8 Criteria/Properties
    • All algorithms must satisfy the following criteria:
      1. Input: Zero or more quantities are externally supplied.
      2. Output: At least one quantity is produced.
      3. Definiteness: Each instruction is clear and unambiguous.
      4. Finiteness: If we trace out the instructions of an algorithm, then for all cases, the algorithm terminates after a finite number of steps.
      5. Effectiveness: Every instruction must be very basic so that it can be carried out, in principle, by a person using only pencil and paper. It must be feasible.
    • Algorithms that are definite and effective are also called computational procedures. One important example of a computational procedure is the operating system of a digital computer.
  • 9. 9 Areas of Study
    • The study of algorithms includes many important and active areas of research. There are four distinct areas of study one can identify:
      1. How to devise algorithms: an understanding of algorithmic structures and their representation in the form of pseudocode or flowcharts.
         - algorithm discovery
         - problem-solving approaches/techniques
      2. How to validate algorithms: determining correctness.
      3. How to analyze algorithms: determining the time and storage an algorithm requires.
      4. How to test a program: debugging.
    • Algorithm specification
      - Pseudocode: algorithms are represented with a precisely defined textual structure
      - Flowcharts
  • 10. 10 History
    • The word algorithm comes from the name of a Persian author, Abu Ja'far Mohammad ibn Musa al Khowarizmi, who wrote a textbook on mathematics.
    • The subject began as a branch of mathematics.
    • The Greek mathematician Euclid invented an algorithm for finding the greatest common divisor (gcd) of two positive integers (between 400 and 300 B.C.) - often cited as the first meaningful algorithm ever discovered.
    • The weaving loom was invented in 1801 by a Frenchman, Joseph Jacquard.
    • Charles Babbage, an English mathematician, designed the difference engine and the analytical engine, capable of executing algorithms.
  • 11. 11 Algorithms as a technology
    • Suppose computers were infinitely fast and computer memory was free. Would you have any reason to study algorithms?
    • Of course, computers may be fast, but they are not infinitely fast. And memory may be inexpensive, but it is not free. Computing time is therefore a bounded resource, and so is space in memory. You should use these resources wisely, and algorithms that are efficient in terms of time or space will help you do so.
    • Analysis of algorithms, or performance analysis, refers to the task of determining how much computing time and storage an algorithm requires. Generally, by analyzing several candidate algorithms for a problem, the most efficient one can be identified.
  • 12. 12 Algorithms as a technology - Efficiency
    • Different algorithms devised to solve the same problem often differ dramatically in their efficiency. These differences can be much more significant than differences due to hardware and software. We will see two algorithms for sorting. The first, insertion sort, takes time roughly equal to c1·n^2 to sort n items, where c1 is a constant that does not depend on n. The second, merge sort, takes time roughly equal to c2·n·log2 n, where c2 is another constant that also does not depend on n. Insertion sort typically has a smaller constant factor than merge sort, so that c1 < c2. Let us pit a faster computer (computer A) running insertion sort against a slower computer (computer B) running merge sort. Each computer must sort an array of 10 million numbers.
  • 13. 13 Algorithms as a technology
    Suppose that computer A executes 10 billion instructions per second and computer B executes only 10 million instructions per second, so that computer A is 1000 times faster than computer B. Suppose insertion sort requires 2n^2 instructions and merge sort requires 50·n·log2 n instructions. Computer A takes 20,000 seconds (more than 5.5 hours) while computer B takes ≈1163 seconds (less than 20 minutes). Computer B runs more than 17 times faster than computer A!
    The example above shows that we should consider algorithms, like computer hardware, as a technology. Total system performance depends on choosing efficient algorithms as much as on choosing fast hardware. Just as rapid advances are being made in other computer technologies, they are being made in algorithms as well.
  • 14. 14 Time needed by algorithms of different time functions for different problem sizes, assuming 1 million operations/sec (times in seconds unless noted):

      n    | n        | n^2      | n^3      | 2^n         | n!             | n^n
      1    | 0.000001 | 0.000001 | 0.000001 | 0.000002    | 0.000001       | 0.000001
      5    | 0.000005 | 0.000025 | 0.000125 | 0.000032    | 0.000120       | 0.003125
      10   | 0.00001  | 0.0001   | 0.001    | 0.001024    | 3.6288         | 2.778 hrs
      20   | 0.00002  | 0.0004   | 0.008    | 1.04858     | 78218 yrs      | 3.37x10^12 yrs
      50   | 0.00005  | 0.0025   | 0.125    | 36.1979 yrs | 9.77x10^60 yrs | 2.87x10^70 yrs
      100  | 0.0001   | 0.01     | 1.00     | 4x10^17 yrs | 3x10^143 yrs   | 3.2x10^183 yrs
  • 15. 15 Insertion Sort
    We start with insertion sort, which is an efficient algorithm for sorting a small number of elements.
  • 18. 18 Loop invariants and the correctness of insertion sort
    Figure 2.2 shows how this algorithm works for A = <5, 2, 4, 6, 1, 3>. The index j indicates the "current card" being inserted into the hand. At the beginning of each iteration of the for loop, which is indexed by j, the subarray consisting of elements A[1..j-1] constitutes the currently sorted hand, and the remaining subarray A[j+1..n] corresponds to the pile of cards still on the table. In fact, elements A[1..j-1] are the elements originally in positions 1 through j-1, but now in sorted order. We state these properties of A[1..j-1] formally as a loop invariant.
  • 19. 19 Loop invariants and the correctness of insertion sort
    We use loop invariants to help us understand why an algorithm is correct. We must show three things about a loop invariant:
    Initialization: It is true prior to the first iteration of the loop.
    Maintenance: If it is true before an iteration of the loop, it remains true before the next iteration.
    Termination: When the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct.
    Let us see how these properties hold for insertion sort.
  • 20. 20 Correctness of insertion sort
    Initialization: We start by showing that the loop invariant holds before the first loop iteration, when j = 2. The subarray A[1..j-1], therefore, consists of just the single element A[1], which is in fact the original element in A[1]. Moreover, this subarray is sorted, which shows that the loop invariant holds prior to the first iteration of the loop.
    Maintenance: Next we tackle the second property: showing that each iteration maintains the loop invariant. Informally, the body of the for loop works by moving A[j-1], A[j-2], A[j-3], and so on by one position to the right until it finds the proper position for A[j] (lines 4-7), at which point it inserts the value of A[j] (line 8). The subarray A[1..j] then consists of the elements originally in A[1..j], but in sorted order. Incrementing j for the next iteration of the for loop then preserves the loop invariant.
  • 21. 21 Correctness of insertion sort
    Termination: Finally, we examine what happens when the loop terminates. The condition causing the for loop to terminate is that j > A.length = n. Because each loop iteration increases j by 1, we must have j = n + 1 at that time. Substituting n + 1 for j in the wording of the loop invariant, we have that the subarray A[1..n] consists of the elements originally in A[1..n], but in sorted order. Observing that the subarray A[1..n] is the entire array, we conclude that the entire array is sorted. Hence, the algorithm is correct.
  • 22. 22 Pseudocode conventions
    Home work: See pages 19 & 20.
    Rewrite the INSERTION-SORT procedure to sort into nonincreasing instead of nondecreasing order.
  • 23. 23 Analyzing algorithms
    Analyzing an algorithm has come to mean predicting the resources that the algorithm requires. Occasionally, resources such as memory, communication bandwidth, or computer hardware are of primary concern, but most often it is computational time that we want to measure.
    Generally, by analyzing several candidate algorithms for a problem, we can identify a most efficient one. Such analysis may indicate more than one viable candidate, but we can often discard several inferior algorithms in the process.
  • 24. 24 Level of analysis
    • Time taken by an algorithm?
      - Number of seconds a program takes to run? No...
      - Too machine dependent: not measuring the algorithm so much as the program, machine, implementation
    • Here, not interested in low-level machine details
      - Want technology-independent answers
    • Interested rather in measuring
      - Which data take most space
      - Which operations take most time, in a device-independent way
  • 25. 25 Level of analysis (cont.)
    • Notion of an elementary operation
      - 1 assignment, 1 addition, 1 comparison (test), ...
    • Determine how many elementary operations the algorithm performs for inputs of different sizes (problem sizes), or inputs of the same size in different orders
    • Difference thus between performance and complexity
      - Complexity affects performance, but not vice versa
      - Interested in how resource requirements scale as the problem gets larger
  • 26. 26 Best, worst, average cases
    • Hardly ever true that an algorithm takes the same time on different instances of the same size
    • Choice: best-, worst- or average-case analysis
    • Best case analysis - Time taken on the best input of size n
    • Worst case analysis - Time taken on the worst input of size n
    • Average case analysis - Time taken as the average of times taken on inputs of size n
  • 27. 27 Which to choose?
    • Usually interested in the worst case scenario:
      - The most operations that might be made for some problem size
    • Worst case is the only safe analysis - guaranteed upper bound (best case too optimistic)
    • Average case analysis is harder
      - Usually have to assume some probability distribution of the data, a more sophisticated model
      - E.g. if looking for a specific letter in a random list of letters, might expect the letter to appear 1/26 of the time (for English)
  • 28. 28
      // Returns the max value in array A of length n
      int Max(int A[], int n) {
          int i, max;
          max = A[0];
          for (i = 1; i < n; i++)
              if (max < A[i])
                  max = A[i];
          return max;
      }
    • Worst case: THEN always executed
  • 29. 29 Scale, order vs exactness
    • When looking at complexity, we usually ignore the exact number of operations and the cost of small, infrequent ones - we concentrate on how cost grows with the size of the input
    • Interested more in how the number of operations relates to problem size - the behavior on big problems
  • 30. 30 Analysis example
      // sum first n integers in array a
      int sum(int a[], int n) {
          int sum = 0;
          for (int j = 0; j < n; j++)
              sum = sum + a[j];
          return(sum);
      }
  • 31. 31 General methodology
    1. Characterize the size of the input
       - Input is an array containing at least n integers
       - Thus, size of input is n
    2. Count how many operations (steps) are taken in the algorithm for an input of size n
       - 1 step is an elementary operation
       - +, <, a[j] (indexing into an array), =, ...
  • 32. 32 Detail of analysis
      int sum(int a[], int n) {
          int sum = 0;                  // op 1 (assignment, once)
          for (int j = 0; j < n; j++)   // op 2 (j = 0, once); ops 3 (j < n), 7 (j++) per iteration
              sum = sum + a[j];         // ops 4 (indexing), 5 (+), 6 (=) per iteration
          return(sum);                  // op 8 (once)
      }
    • Operations 1, 2, 8: only happen once (so 3 such operations)
    • Operations 3, 4, 5, 6, 7: happen once for each iteration of the loop (5 * n)
    • Total operations: 5n + 3
    • Complexity function: f(n) = 5n + 3
  • 33. 33 How does size affect running time?
    • 5n + 3 is an estimate of running time for different values of n
    • As n grows, the number of operations grows in linear proportion to n, for this sum function

      n       | Operations
      10      | 53
      100     | 503
      1000    | 5003
      1000000 | 5000003
  • 34. 34 Summary of methodology
    • Count operations/steps taken by the algorithm
    • Use the count to derive a formula, based on the size of problem n
      - Another formula might be, e.g.: n^2
      - Or 2^n or n^2/2 or (n+1)^2 or 7n^2 + 12n + 4
    • Use the formula to help understand overall efficiency
  • 35. 35 Usefulness?
    • What if we kept doubling the size of n?

      n     | log2 n | 5n    | n log2 n | n^2   | 2^n
      8     | 3      | 40    | 24       | 64    | 256
      16    | 4      | 80    | 64       | 256   | 65536
      32    | 5      | 160   | 160      | 1024  | ~10^9
      64    | 6      | 320   | 384      | 4096  | ~10^19
      128   | 7      | 640   | 896      | 16384 | ~10^38
      256   | 8      | 1280  | 2048     | 65536 | ~10^76
      10000 | 13     | 50000 | 10^5     | 10^8  | ~10^3010
  • 37. 37 Analyzing Insertion Sort
    The running time of the algorithm is the sum of running times for each statement executed; a statement that takes ci steps to execute and executes n times will contribute ci·n to the total running time. To compute T(n), the running time of INSERTION-SORT on an input of n values, we sum the products of the cost and times columns, obtaining T(n) = Σi ci·ti.
  • 38. 38 Analyzing Insertion Sort
    • Even for inputs of a given size, an algorithm's running time may depend on which input of that size is given.
    • The best case occurs if the array is already sorted. For each i = 2, 3, 4, ..., n, we find that A[j] <= key in the while loop test when j has its initial value of i-1. Thus ti = 1 for i = 2, 3, 4, ..., n.
    • The best-case running time T(n) can be expressed as an + b for constants a and b. It is a linear function of n.
  • 39. 39 Analyzing Insertion Sort
    • If the array is in reverse sorted order - that is, in decreasing order - the worst case results. We must compare each element A[j] with each element in the entire sorted subarray A[1..j-1], and so tj = j for j = 2, 3, ..., n. Noting that Σ(j=2..n) j = n(n+1)/2 - 1 and Σ(j=2..n) (j-1) = n(n-1)/2, we can evaluate the resulting sums.
  • 40. 40 Analyzing Insertion Sort
    We find that in the worst case, the running time of INSERTION-SORT is quadratic.
    • The worst case can be expressed as an^2 + bn + c for some constants a, b, c. It is a quadratic function of n.
  • 41. 41 Worst-case and average-case analysis (Analyzing Insertion Sort)
    In our analysis of insertion sort, we looked at both the best case, in which the input array was already sorted, and the worst case, in which the input array was reverse sorted. For the remainder of this book, though, we shall usually concentrate on finding only the worst-case running time, that is, the longest running time for any input of size n. We give three reasons for this orientation.
    1. The worst-case running time of an algorithm gives us an upper bound on the running time for any input. Knowing it provides a guarantee that the algorithm will never take any longer. We need not make some educated guess about the running time and hope that it never gets much worse.
  • 42. 42 Worst-case and average-case analysis
    2. For some algorithms, the worst case occurs fairly often. For example, in searching a database for a particular piece of information, the searching algorithm's worst case will often occur when the information is not present in the database. In some applications, searches for absent information may be frequent.
    3. The "average case" is often roughly as bad as the worst case. Suppose that we randomly choose n numbers and apply insertion sort. How long does it take to determine where in subarray A[1..j-1] to insert element A[j]? On average, half the elements in A[1..j-1] are less than A[j], and half the elements are greater. On average, therefore, we check half of the subarray A[1..j-1], and so tj is about j/2. The resulting average-case running time turns out to be a quadratic function of the input size, just like the worst-case running time.
    • The scope of average-case analysis is limited, because it may not be apparent what constitutes an "average" input for a particular problem.
  • 43. 43 Asymptotic Notation
    • The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers N = {0, 1, 2, ...}. Such notations are convenient for describing the worst-case running-time function T(n), which is usually defined only on integer input sizes.
    • Three basic asymptotic notations
      1. O-notation (Big "oh"): When we have an asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) (pronounced "big-oh of g of n" or sometimes just "oh of g of n") the set of functions
         O(g(n)) = {f(n): there exist positive constants c and N such that 0 <= f(n) <= c·g(n) for all n >= N}
  • 44. 44 Asymptotic Notation (cont.)
      2. Ω-notation (Big "omega"): Ω-notation provides an asymptotic lower bound. For a given function g(n), we denote by Ω(g(n)) (pronounced "big-omega of g of n" or sometimes just "omega of g of n") the set of functions
         Ω(g(n)) = {f(n): there exist positive constants c and N such that 0 <= c·g(n) <= f(n) for all n >= N}.
      3. Θ-notation (Big "theta"): Θ-notation asymptotically bounds a function from above and below. For a given function g(n), we denote by Θ(g(n)) (pronounced "big-theta of g of n" or sometimes just "theta of g of n") the set of functions
         Θ(g(n)) = {f(n): there exist positive constants c1, c2 and N such that 0 <= c1·g(n) <= f(n) <= c2·g(n) for all n >= N}.
  • 45. 45 [Figure slide: graphical illustrations of the O, Ω, and Θ bounds]
  • 46. 46 Asymptotic Notation (cont.)
    Big-O:
    • 3n+2 = O(n) as 3n+2 <= 4n for all n >= 2.
    • 100n+6 = O(n) as 100n+6 <= 101n for all n >= 6.
    • 10n^2+4n+2 = O(n^2) as 10n^2+4n+2 <= 11n^2 for all n >= 5.
    • 1000n^2+100n-6 = O(n^2) as 1000n^2+100n-6 <= 1001n^2 for all n >= 100.
    • 3n+2 != O(1) as 3n+2 is not less than or equal to c for any constant c and all n >= N.
    • 10n^2+4n+2 != O(n).
    Big-Ω:
    • 3n+2 = Ω(n) as 3n+2 >= 3n for all n >= 1.
    • 100n+6 = Ω(n) as 100n+6 >= 100n for all n >= 1.
    • 10n^2+4n+2 = Ω(n^2) as 10n^2+4n+2 >= n^2 for all n >= 1.
    • Also observe that 3n+3 = Ω(1), 10n^2+4n+2 = Ω(n), and 10n^2+4n+2 = Ω(1).
    Big-Θ:
    • 3n+2 = Θ(n) as 3n+2 >= 3n for all n >= 2 and 3n+2 <= 4n for all n >= 2, so c1 = 3, c2 = 4 and N = 2.
    • 10n^2+4n+2 = Θ(n^2). 10 log n + 4 = Θ(log n).
    • 3n+2 != Θ(1). 3n+3 != Θ(n^2). 10n^2+4n+2 != Θ(n). 10n^2+4n+2 != Θ(1).
  • 47. 47 Asymptotic Notation (cont.)
    • For more examples see pages 29-33 of the book by Sahni.
    • Home work: (i) Is 2^(n+1) = O(2^n)? (ii) Is 2^(2n) = O(2^n)?
  • 48. 48 Some commonly found orders of growth (in increasing complexity; polynomial good, exponential bad)
      O(1)       Constant (bounded)
      O(log n)   Logarithmic
      O(n)       Linear
      O(n log n) Log linear
      O(n^2)     Quadratic
      O(n^3)     Cubic
      O(2^n)     Exponential
      O(n!)      Exponential (factorial)
      O(n^n)     Exponential
  • 50. 50 Determining complexities
    • Simple statements: O(1)
      - Assignments of simple data types: int x = y;
      - Arithmetic operations: x = 5 * y;
      - Indexing into an array: array[i];
      - (De)referencing pointers: listptr = list->next;
      - Declarations of simple data types: int x, y;
      - Simple conditional tests: if (i < 12)
    • But not, e.g., if (SomeFunction() < 25)
  • 51. 51 If…then…else statements
    • Worst case is the slowest possibility
      if (cond) {
          sequence of statements 1   // say this is O(n)
      } else {
          sequence of statements 2   // say this is O(1)
      }
    • Overall, this if…then…else is thus O(n)
  • 52. 52 Loops
    • Two parts to loop analysis
      - How many iterations?
      - How many steps for each iteration?
      int sum = 0;
      for (int j = 0; j < n; j++)
          sum = sum + j;
    • Loop executes n times
    • 4 = O(1) steps per iteration
    • Total time here is n * O(1) = O(n * 1) = O(n)
  • 53. 53 Loops: simple
      int sum = 0;
      for (int j = 0; j < 100; j++)
          sum = sum + j;
    • Loop executes 100 times
    • 4 = O(1) steps per iteration
    • Total time here is 100 * O(1) = O(100 * 1) = O(1)
    • Thus, faster than the previous loop, for values up to 100
  • 54. 54 Loops: while loops
      done = FALSE;
      n = 22;
      while (!done) {
          result = result * n;
          n--;
          if (n == 1)
              done = TRUE;
      }
    • Loop terminates when done == TRUE
    • This happens after n iterations
    • O(1) time per iteration
    • O(n) total time here
  • 55. 55 Loops: nested
      for (i = 0; i < n; i++) {
          for (j = 0; j < m; j++) {
              sequence of statements
          }
      }
    • Outer loop executes n times
    • For every outer iteration, the inner loop executes m times
    • Thus, inner loop total = n * m times
    • Complexity is O(n*m)
    • Where the inner condition is j < n (a common case), total complexity is O(n^2)
  • 56. 56 Sequences of statements
    • For a sequence, determine individual times and add up
      for (j = 0; j < n; j++)        // O(n^2)
          for (k = 0; k < j; k++)
              sum = sum + j*k;
      for (l = 0; l < n; l++)        // O(n)
          sum = sum - 1;
      printf("Sum is now %d", sum);  // O(1)
    • Total is: O(n^2) + O(n) + O(1) = O(n^2)
  • 57. 57 When do constants matter?
    • Normally we drop constants and lower-order terms when calculating Big-O.
    • This means that 2 algorithms could have the same Big-O, although one may always be faster than the other
    • In such cases, the constants do matter if we want to tell which algorithm is actually faster
    • However, they do not matter when we want to know how algorithms scale under growth
    • n^2 is always faster than 10n^2, but for both, if the problem size doubles, the time quadruples
  • 58. 58 Example 1
    • Prove 7n^2 + 12n is O(n^2)
    • Must show that 2 constants c and N exist such that 7n^2 + 12n <= c·n^2 for all n >= N
    • Let N = 1 (existence proof - we can pick any N we want to, only need to show one exists)
    • Now, does there exist some c such that 7n^2 + 12n <= c·n^2 for all n > 1?
  • 59. 59
    • As n > 1, this means 7n^2 + 12n < 7n^2 + 12n^2
      add: 7n^2 + 12n^2 = 19n^2, and observe: 19n^2 <= 19n^2
    • Thus, let c be 19 (to be sure, let us say 20)
    • Checking: check for n > 1, so take n = 2
    • Is 7 * 2^2 + 12 * 2 < 20 * 2^2? - Yes: 52 < 80
    • Thus 7n^2 + 12n <= 20n^2 for all n >= 2
    • Proved: 7n^2 + 12n is O(n^2)
  • 60. 60 Example 2
    • Prove (n + 1)^2 is O(n^2)
    • (n + 1)^2 expands to n^2 + 2n + 1
    • n^2 + 2n + 1 <= n^2 + 2n^2 + n^2
    • n^2 + 2n^2 + n^2 = 4n^2
    • Thus, c is 4
    • We can find N = 1 as before
    • Arrive at: (n + 1)^2 <= 4n^2 for all n >= 1
    • Thus, (n + 1)^2 is O(n^2)
  • 61. 61 More on Example 2
    • Could also pick N = 3 and c = 2
    • As (n + 1)^2 <= 2n^2 for all n >= 3
    • Because 2n + 1 <= n^2 for all n >= 3
    • However, cannot pick N = 0
    • Because (0+1)^2 > c·(0)^2 for any c > 0
  • 62. 62 Example 3
    • Prove 3n^2 + 5 is O(n^2)
    • We choose constants c = 4 and N = 3
    • Why? Because for all n >= 3: 3n^2 + 5 <= 4n^2 (this needs n^2 >= 5, so N = 2 would not work: 3·4 + 5 = 17 > 16)
      Thus, by our definition, 3n^2 + 5 ∈ O(n^2)
    • Now, prove 3n^2 + 5 is not O(n) - easy:
    • Whatever constant c and value N one chooses, we can always find a value of n greater than N such that 3n^2 + 5 is greater than c·n
  • 63. 63 Example 4
    • Prove 2n + 4 has order n, i.e. 2n + 4 ∈ O(n)
    • Clearly, 2n <= 2n for all n
    • 4 <= n if n >= 4
    • 2n + 4 <= 3n for all n >= 4
    • Thus, our constants are c = 3 and N = 4
    • Re-express: 2n + 4 <= c·n for all n >= N
    • Thus, by our definition, 2n + 4 ∈ O(n)
  • 64. 64 Example 5
    • Prove 17n^2 + 45n + 46 is O(n^2)
    • 17n^2 <= 17n^2 for all n
    • 45n <= n^2 for all n >= 45
    • 46 <= n^2 for all n >= 7
    • 17n^2 + 45n + 46 <= 17n^2 + n^2 + n^2
    • So, 17n^2 + 45n + 46 <= 19n^2
    • Thus c = 19 and N = 45
    • Thus 17n^2 + 45n + 46 <= 19n^2 for all n >= 45
    • Proved: 17n^2 + 45n + 46 is O(n^2)