
Advanced algorithms and complexity

Chapter 1
Complexity analysis
Basics
Complexity analysis
Complexity analysis determines the amount of time and space (memory)
resources required to execute an algorithm as a function of the input
size, independently of the machine, language and compiler.
Algorithm analysis

[Diagram: an input of size n is fed to the algorithm, which performs
basic operations to produce the results; the resources measured are
time and space.]
Time and Space Complexity

• Time complexity: evaluating the execution time of an algorithm.

• Space complexity: evaluating how much space (memory) is needed for
  the execution of an algorithm.
Algorithm analysis steps
Step 1: Let P be a problem; this is the problem statement.
Step 2: Let M be a method to solve the problem P, and develop an algorithm:
- Input
- Output
- Resolution steps
This is the description of M in an algorithmic language.
Step 3: Validation,
to prove the correctness and termination of the program.
« Program validation »
Step 4: Measure the efficiency of the algorithm independently of the
environment (machine, system, compiler, …).
« Calculating theoretical complexity »
Theoretical complexity

The theoretical complexity of an algorithm is calculated by evaluating
the number of basic operations (assignment, comparison, loop, …) as a
function of n, the size of the input data.

Notation: n: size of the data
          T(n): number of basic operations

Objectives of complexity calculation:

• Being able to predict the execution time of an algorithm;


• Being able to compare two algorithms performing the same
processing.
Evaluation of an algorithm
Given two algorithms that compute solutions to the same problem, how do
we compare them?

In other words, which is better?

Intuition:

We will prefer the one that requires the least resources.

• Execution time resource;

• Storage space resource;


Time complexity

• Theoretical approach
• Experimental approach

Time complexity
Theoretical approach:
• General behavior of the algorithm;
• The details don't matter;
• The behavior of the algorithm is characterized by the order of a function.
Time complexity
Experimental approach:
• Obtains a much finer evaluation of the algorithms
• All instructions are taken into account
• Provides fine-grained indications
• Detects sensitive areas
• Helps in program optimization
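A minimal sketch of such an experimental measurement, assuming Python
and its standard time module; the two compared functions
(sum_squares_loop, sum_squares_formula) are illustrative examples, not
taken from the course:

import time

def sum_squares_loop(n):
    # O(n): add the squares one by one
    s = 0
    for i in range(1, n + 1):
        s += i * i
    return s

def sum_squares_formula(n):
    # O(1): closed-form formula n(n+1)(2n+1)/6
    return n * (n + 1) * (2 * n + 1) // 6

for n in (10_000, 100_000, 1_000_000):
    t0 = time.perf_counter()
    sum_squares_loop(n)
    t1 = time.perf_counter()
    sum_squares_formula(n)
    t2 = time.perf_counter()
    print(n, "loop:", t1 - t0, "s   formula:", t2 - t1, "s")

Timing each candidate on the same inputs is exactly the kind of fine,
instruction-level comparison this slide refers to.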
Asymptotic complexity
Calculation of theoretical complexity

Algorithm inputs are generally large, and we are mainly concerned with
the growth of this complexity as a function of the size of the data.
We therefore consider the behavior of the complexity at infinity.
The behavior of an algorithm when the size of the data becomes large
is called:

Asymptotic complexity
Function’s order (or rate of growth): 1, log n, n, n², 2ⁿ

[Graph: T(n) for the functions log n, n, n² and 2ⁿ — 2ⁿ grows fastest,
log n slowest.]
Theoretical complexity
Asymptotic:
- n (size of data) large
- Approximate complexity
- We retain the largest term of the formula
- We ignore the multiplier coefficient (constant).

Complexity cases

•Best-case

•Average-case

•Worst-case

Example: if T(n) = 2ⁿ
We assume that the execution time of a basic instruction is 10⁻⁶ seconds.

For n = 60, T(n) ≈ 1.15 × 10¹² seconds

1.15 × 10¹² seconds ≈ 36,558 years

and if T(n) = n!, we have n! ≫ 2ⁿ
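A quick check of the figure above, as a small Python sketch (assuming,
as the slide does, 10⁻⁶ s per basic instruction and a 365-day year):

# T(n) = 2**n basic instructions, each taking 1e-6 seconds
n = 60
seconds = 2 ** n * 1e-6                  # ≈ 1.15e12 seconds
years = seconds / (365 * 24 * 3600)
print(f"{seconds:.3e} s  ≈  {years:,.0f} years")
# prints roughly 36,559 years (the slide rounds this to 36,558)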
Classification of growth: Landau notation
• Big Oh Ο(f)
• Little oh o(f)
• Big omega Ω(f)
• Little omega ω(f)
• Big theta Θ(f)
Ο (big Oh) Ω (big omega) Θ (big theta)
Landau notation
• Big Oh:
O(g(n)) = { f : ℕ → ℕ | ∃ c > 0 and ∃ n₀ ≥ 0 such that 0 ≤ f(n) ≤ c·g(n) ∀ n ≥ n₀ }

• g(n) is an asymptotic upper bound for f(n); we write: f(n) = O(g(n)).
Big Oh
• f(n) is of order O(g(n))
O(g(n)) = { f : ℕ → ℕ | ∃ c > 0 and ∃ n₀ ≥ 0 such that 0 ≤ f(n) ≤ c·g(n) ∀ n ≥ n₀ }
Example:
f(n) = 2n + 5 is of order O(n)
2n + 5 ≤ c·n
(2n + 5)/n ≤ c
2 + 5/n ≤ c, and we have 5/n ≤ 1 ∀ n ≥ 5

from which c = 3 and n₀ = 5


Notation O
Example: f(n) = 2n + 5 is of order O(n)
because 2n + 5 ≤ 3n ∀ n ≥ 5
[Graph: 2n + 5 lies below c·g(n) = 3n for all n ≥ n₀ = 5.]
Landau notation
• Big Ω
Ω(g(n)) = { f : ℕ → ℕ | ∃ c > 0 and ∃ n₀ ≥ 0 such that 0 ≤ c·g(n) ≤ f(n) ∀ n ≥ n₀ }

• g(n) is an asymptotic lower bound for f(n); we write: f(n) = Ω(g(n)).
Notation Ω
• f(n) is of order Ω(g(n))
Ω(g(n)) = { f : ℕ → ℕ | ∃ c > 0 and ∃ n₀ ≥ 0 such that 0 ≤ c·g(n) ≤ f(n) ∀ n ≥ n₀ }

Example:
f(n) = 2n + 5 is of order Ω(n)

because 2n + 5 ≥ 2n ∀ n ≥ 0

from which c = 2 and n₀ = 0


Landau notation
• Big Θ
Θ(g(n)) = { f : ℕ → ℕ | ∃ c₁ > 0, ∃ c₂ > 0 and ∃ n₀ ≥ 0 such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) ∀ n ≥ n₀ }

g(n) is an asymptotically tight bound for f(n); we write: f(n) = Θ(g(n)).
Notation Θ
• f(n) is of order Θ(g(n))
Θ(g(n)) = { f : ℕ → ℕ | ∃ c₁ > 0, ∃ c₂ > 0 and ∃ n₀ ≥ 0 such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) ∀ n ≥ n₀ }  (g(n) is an asymptotically tight bound)
Example:
f(n) = 2n + 5 is of order Θ(n)

because 2n ≤ 2n + 5 ≤ 3n ∀ n ≥ 5

from which c₁ = 2, c₂ = 3 and n₀ = 5


Landau notation

Note: the notations o and ω are defined with strict inequalities.
Note: we can also determine the complexity using limits, as follows:
• if lim_{n→∞} f(n)/g(n) = k with k > 0, then f(n) ∈ Θ(g(n))
• if lim_{n→∞} f(n)/g(n) = 0, then f(n) ∈ O(g(n))
• if lim_{n→∞} f(n)/g(n) = +∞, then f(n) ∈ Ω(g(n))
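A small sketch of this limit rule, assuming Python with the sympy
library; the functions tested are those of the exercise that follows:

from sympy import symbols, limit, oo

n = symbols('n', positive=True)

# limit finite and > 0  =>  f ∈ Θ(g)
print(limit((5*n**2 - 6*n) / n**2, n, oo))    # 5   -> 5n² - 6n ∈ Θ(n²)

# limit = +oo  =>  f ∈ Ω(g) but not O(g), so not Θ(n²)
print(limit(6*n**3 / n**2, n, oo))            # oo  -> 6n³ ∉ Θ(n²)

# limit = 0  =>  f ∈ O(g)
print(limit(n**2 / (10**-5 * n**3), n, oo))   # 0   -> n² ∈ O(10⁻⁵ n³)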
Landau notation
Exercise: Show that:
f(n) = 5n² − 6n = Θ(n²)

f(n) = 6n³ ≠ Θ(n²)

f(n) = n² = O(10⁻⁵ n³)

f(n) = 10n³ + 3n² + 5n + 1 = O(n³)
Landau notation
Exercise: Show that:
f(n) = 5n² − 6n = Θ(n²)
c₂·n² ≤ 5n² − 6n ≤ c₁·n²
c₂ ≤ 5 − 6/n ≤ c₁
6/n ≤ 1 ∀ n ≥ 6
c₁ = 5, c₂ = 1, n₀ = 6
(or n₀ = 2)
Landau notation
Exercise: Show that:
f(n) = 6n³ ≠ Θ(n²)
6n³ ≤ c₁·n² would require
6n ≤ c₁ for all n ≥ n₀ ⇒ no such constant c₁ exists

So the upper bound c₁·n² cannot hold, and 6n³ is not Θ(n²).
Landau notation
Exercise: Show that: f(n) = n² = O(10⁻⁵ n³)

n² ≤ c·10⁻⁵·n³
n² / (10⁻⁵·n³) ≤ c
1 / (10⁻⁵·n) ≤ c
10⁵/n ≤ c
10⁵/n ≤ 1 ∀ n ≥ 10⁵
so c = 1 and n₀ = 10⁵
Landau notation
Exercise: Show that:
f(n) = 10n³ + 3n² + 5n + 1 = O(n³)
10n³ + 3n² + 5n + 1 ≤ c·n³
(10n³ + 3n² + 5n + 1) / n³ ≤ c
10 + 3/n + 5/n² + 1/n³ ≤ c
3/n ≤ 1 ∀ n ≥ 3 ;  5/n² ≤ 1 ∀ n ≥ 3 ;  1/n³ ≤ 1 ∀ n ≥ 1
10 + 1 + 1 + 1 ≤ c
so n₀ = 3 and c = 13
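A quick numerical sanity check of the constants found above, as a
Python sketch (the tested range up to 10 000 is an arbitrary choice):

# Verify 10n³ + 3n² + 5n + 1 ≤ 13·n³ for every n ≥ n₀ = 3
c, n0 = 13, 3
assert all(10*n**3 + 3*n**2 + 5*n + 1 <= c * n**3 for n in range(n0, 10_000))
print("bound holds on the tested range")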
Vocabulary
• O(1): constant time, independent of the size of the data.
  This is the best possible case (e.g., solving a quadratic equation).
• O(n): linear (searching for a value in an array)
• O(n²): quadratic (sorting the elements of an array)
• O(n³): cubic (product of two matrices)
• O(nᵏ): polynomial complexity
• O(log n): logarithmic; O(n log n): quasi-linear
• O(2ⁿ), O(n!), …: exponential, factorial, …
Properties
• Reflexivity: f(n) = O(f(n)).
• Transitivity: if f(n) = O(g(n)) and g(n) = O(h(n)) then f(n) = O(h(n)).
• Addition:
  – if f(n) = O(g1(n)) and h(n) = O(g2(n)) then f(n) + h(n) = O(g1(n) + g2(n));
  – c + f(n) = O(c + g(n)) = O(g(n)), with c a constant.
• Multiplication:
  – if f(n) = O(g1(n)) and h(n) = O(g2(n)) then f(n) × h(n) = O(g1(n) × g2(n));
  – c·f(n) = O(c·g(n)) = O(g(n)), because every constant function is O(1).

Note: the same properties hold for Ω and Θ.
Properties
We can easily deduce the following properties:

• If p(n) is a polynomial of degree k then p(n) = Θ(nᵏ).

• log_b n = Θ(log n) for any base b, since logarithms in different
  bases differ only by a constant factor. We can therefore say that an
  algorithm has log n complexity without having to specify the base.
• log(n + c) = Θ(log n) for any base.
• (n + c)ᵏ = O(nᵏ).
Summary
Algorithms are compared on the basis of their complexity.
The exponential function grows faster than any polynomial function,
which grows faster than the linear function, which in turn grows
faster than the logarithmic function.
Estimating the complexity
of an iterative algorithm
How to evaluate the execution time of an algorithm?

This involves counting the number of basic operations carried out
during execution.
A basic operation is an operation which is carried out
in constant time on all usual computers such as:
• Input-Output
• Assignments
• Comparisons
• Arithmetic and logical operations
To evaluate the number of basic operations, we rely on the parameters
describing the data:
• number of elements in an array;
• number of characters in a string;
• number of elements in a set;
• depth and width of a tree;
• number of vertices and edges of a graph; …
Estimating the cost of an iterative algorithm

Assignment, reading, writing are measured by: Ο(1)

Estimating the cost of an iterative algorithm

Sum of successive statements:

Statement_1 → cost_1
Statement_2 → cost_2
…
Statement_i → cost_i

Total cost = cost_1 + cost_2 + … + cost_i + …
Estimating the cost of an iterative algorithm
Conditional statement :
f(…)
{ if (…) then g(…);
  else h(…);
  end if;
}

Cost of f(…) = max(cost of g(…), cost of h(…))
Estimating the cost of an iterative algorithm

Iterative statement « for » (for loop):

f(…, n : int)
{ for i := 1 to n do
      g(…);
}
Cost of f(…, n) = n × cost of g(…)
If the cost of g(…) depends on the value of i, then:

Cost of f(…, n) = Σ_{i=1}^{n} cost(g(…, i))
Estimating the cost of an iterative algorithm

Iterative statement « while »:

f(…, n : integer)
{ while <condition> do
      statement g(…);
}
Cost of f(…, n) = Σ_{i=1}^{k} cost(g(n)), where k is the number of
iterations of the while loop.
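The number of iterations k has to be determined case by case. As an
illustration (a Python sketch, not taken from the slides), a loop that
halves n at each step runs k = ⌊log₂ n⌋ + 1 times:

import math

def count_iterations(n):
    # while loop whose body is O(1); we count how many times it runs
    k = 0
    while n >= 1:
        n = n // 2
        k += 1
    return k

for n in (10, 1_000, 1_000_000):
    print(n, count_iterations(n), math.floor(math.log2(n)) + 1)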
Rules of O notation
Constant factors:

1. Any constant is simplified to 1.


Example: O(5)~ O(1)

2. Multiplicative constants are omitted.


Example: O(5n)~ O(n)

Rules of O notation

• Addition rule:
O(h(n) + g(n)) = O(max(h(n), g(n)))
• Multiplication rule:
O(h(n) × g(n)) = O(h(n) × h(n)) if h(n) > g(n)
O(h(n) × g(n)) = O(g(n) × g(n)) if g(n) > h(n)
Estimating the cost of an algorithm

Example:
Calculate eˣ using a limited sum of n terms:

S = eˣ ≈ 1 + x/1! + x²/2! + … + xⁿ/n!

What is the order of growth of the execution time?
Estimating the cost of an algorithm
1st solution:
s = 1;
for i = 1 to n do
    p = 1;
    for j = 1 to i do
        p = p * x / j;
    done;
    s = s + p;
done;
Estimating the cost of an algorithm
The internal loop is executed:

1 + 2 + ⋯ + n = Σ_{i=1}^{n} i = n(n+1)/2 times.

This solution therefore has a complexity of order:

O(n²)
Estimating the cost of an algorithm
2nd solution: xⁱ/i! = (x/i) · x^(i−1)/(i−1)!

s = 1; p = 1;
for i = 1 to n do
    p = p * x / i;
    s = s + p;
done;
Estimating the cost of an algorithm

Here the loop statement is executed n times, so the complexity is of order:

O(n)
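Both solutions as Python sketches; the variable names follow the
slides, and the ops counter on the multiplications/divisions is an
addition of mine to make the O(n²) vs O(n) difference visible:

def exp_quadratic(x, n):
    # 1st solution: recompute x^i / i! from scratch for each term -> O(n²)
    ops = 0
    s = 1.0
    for i in range(1, n + 1):
        p = 1.0
        for j in range(1, i + 1):
            p = p * x / j
            ops += 1
        s = s + p
    return s, ops

def exp_linear(x, n):
    # 2nd solution: x^i/i! = (x^(i-1)/(i-1)!) * (x/i) -> O(n)
    ops = 0
    s = 1.0
    p = 1.0
    for i in range(1, n + 1):
        p = p * x / i
        ops += 1
        s = s + p
    return s, ops

print(exp_quadratic(1.0, 20))   # ops = 20*21/2 = 210
print(exp_linear(1.0, 20))      # ops = 20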
Calculating the complexity of an algorithm

Exercises
1. Calculating the maximum of 4 integers
Algorithm 1:
Function maximum(I/ a, b, c, d: integer): integer;
{ max: integer;
  max ← a;
  if (b > max) then max ← b endif;
  if (c > max) then max ← c endif;
  if (d > max) then max ← d endif;
  return max;
}
Number of comparisons = 3
Complexity = O(1)
1. Calculating the maximum of 4 integers
Algorithm 2:
Function maximum(I/ a, b, c, d: integer): integer;
{ if (a > b) then
      if (a > c) then
          if (a > d) then return a
          else return d
          end if;
      else if (c > d) then return c
           else return d
           end if;
      end if
  else if (b > c) then
           if (b > d) then return b
           else return d
           end if;
       else if (c > d) then return c
            else return d
            end if;
       end if;
  end if;
}
1. Calculating the maximum of 4 integers
Complexity = O(1)
The two solutions use the same number of comparisons, but the 1st
solution takes up more memory space (it uses an extra variable max).
2. Calculation of the maximum of n integers
Algorithm:
Function max_n(I/ t: Array[n] of integers; n: integer): integer;
{ max, i: integer;
  max ← t[1];                                 ---- 1
  for i ← 2 to n do                           ---- (n − 2 + 1) = n − 1 iterations
      if (t[i] > max) then max ← t[i] endif;  ---- 2 per iteration
  done;
  return max;                                 ---- 1
}
cost = 1 + 2(n − 1) + 1 = 2n
Complexity = O(n)
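A runnable Python sketch of the same algorithm (0-based indexing
replaces the slides' 1-based arrays):

def max_n(t):
    # scan the array once: n - 1 comparisons -> O(n)
    maximum = t[0]
    for i in range(1, len(t)):
        if t[i] > maximum:
            maximum = t[i]
    return maximum

print(max_n([7, 3, 15, 9, 2]))   # 15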
3. Trace of a square matrix
This involves calculating the sum of the elements of the
diagonal of a square matrix A of integers (n: number of rows and
columns).
1st solution:
sum ← 0;
for i ← 1 to n do
    for j ← 1 to n do
        if (i = j) then sum ← sum + A[i][j]; end if;
    done;
done;
return sum;
3. Trace of a square matrix
Complexity:

Σ_{i=1}^{n} Σ_{j=1}^{n} 1 = Σ_{i=1}^{n} n = n² ⇒ Θ(n²)
3. Trace of a square matrix
• 2nd solution:
sum ← 0;
for i ← 1 to n do
    sum ← sum + A[i][i];
done;
return sum;

Complexity: Σ_{i=1}^{n} 1 = n ⇒ Θ(n)
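Both solutions as Python sketches (A is assumed to be an n × n list of
lists of integers):

def trace_quadratic(A):
    # 1st solution: scan all n² cells and keep only the diagonal -> Θ(n²)
    n = len(A)
    total = 0
    for i in range(n):
        for j in range(n):
            if i == j:
                total += A[i][j]
    return total

def trace_linear(A):
    # 2nd solution: visit the diagonal directly -> Θ(n)
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(trace_quadratic(A), trace_linear(A))   # 15 15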
4. Give the Big O complexity of the following algorithms.

Algorithm 1:
for i ← 1 to n do
    for j ← 1 to i do
        x ← x + 3

Algorithm 2:
for i ← 5 to n−5 do
    for j ← i−5 to i+5 do
        x ← x + 3
Algorithm 1:

Σ_{i=1}^{n} Σ_{j=1}^{i} 1 = Σ_{i=1}^{n} i = n(n+1)/2 ↝ O(n²)
Algorithm 2:

Σ_{i=5}^{n−5} Σ_{j=i−5}^{i+5} 1 = Σ_{i=5}^{n−5} 11 = 11(n − 9) ↝ O(n)
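A quick empirical check of the two counts, as a Python sketch in which
a counter replaces the x ← x + 3 update:

def count_algo1(n):
    # nested loops: sum over i of i iterations -> n(n+1)/2
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            count += 1
    return count

def count_algo2(n):
    # inner loop always runs 11 times (j from i-5 to i+5) -> 11(n-9)
    count = 0
    for i in range(5, n - 5 + 1):
        for j in range(i - 5, i + 5 + 1):
            count += 1
    return count

n = 100
print(count_algo1(n), n * (n + 1) // 2)   # 5050 5050
print(count_algo2(n), 11 * (n - 9))       # 1001 1001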
4. Give the Big O complexity of the following algorithms.

Algorithm 3:
for i ← 1 to n do
    for j ← 1 to i do
        for k ← 1 to j do
            x ← x + a

Algorithm 4:
k ← 0
for i ← 1; i ≤ n; i ← i*2
    for j ← 1; j ≤ n; j ← j + 1
        k ← k + 1
Algorithm 3:

Σ_{i=1}^{n} Σ_{j=1}^{i} Σ_{k=1}^{j} 1 = Σ_{i=1}^{n} Σ_{j=1}^{i} j = Σ_{i=1}^{n} i(i+1)/2
= ½ (Σ_{i=1}^{n} i² + Σ_{i=1}^{n} i) = ½ ( n(n+1)(2n+1)/6 + n(n+1)/2 )

↝ O(n³)
Algorithm 4:
i = 1 = 2⁰    j = 1 to n
i = 2 = 2¹    j = 1 to n
i = 4 = 2²    j = 1 to n
i = 8 = 2³    j = 1 to n
…
i = 2ᵏ        j = 1 to n    we assume that n = 2ᵏ ⇒ log n = log 2ᵏ = k·log 2;
with log₂ 2 = 1 ⇒ k = log₂ n
⇒ complexity = O(n log n)
The loop variable of the second line doubles at each pass until it
reaches n, so the outer loop runs about log(n) times. The inner loop of
the third line runs exactly n times (upper bound − lower bound + 1).
We therefore have an algorithm in O(n·log n).
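An empirical Python sketch of Algorithm 4's counter k, compared with
n·(⌊log₂ n⌋ + 1):

import math

def count_algo4(n):
    k = 0
    i = 1
    while i <= n:                  # i takes the values 1, 2, 4, ...: ⌊log₂ n⌋ + 1 passes
        for j in range(1, n + 1):  # n iterations each pass
            k += 1
        i *= 2
    return k

for n in (8, 100, 1024):
    print(n, count_algo4(n), n * (math.floor(math.log2(n)) + 1))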
5. Given an array of n integers, we want to know if
one of its elements is equal to the sum of the n-1 other
elements.

Example: A = [3, 8, -6, 3, -1, 9]
• The 2nd element with value 8 is equal to the sum of
the other 5.
1st question
1/ A first method consists of calculating the sum
of the other elements for each element of the
array and checking whether it is equal to the
element. Write this algorithm. Give its
complexity.
2nd question

2/ We can improve the previous algorithm in terms of complexity.
Write this second algorithm and give its complexity.
Algorithm 1:
for i ← 1 to n do
    s ← 0;
    for j ← 1 to n do
        if (i ≠ j) then s ← s + A[j]; endif;
    done;
    if (s = A[i]) then return true; endif;
done;
return false;

Best case: A[1] = Σ_{j=2}^{n} A[j], so cost = n + 1 (lower bound) ⇒ Ω(n + 1) ≈ Ω(n)

Worst case: the element does not exist, so cost = n² (upper bound) ⇒ O(n²);
and since the lower bound ≠ the upper bound, the complexity is not Θ.
We say that:
The complexity is of order O(n²)
Algorithm 2:
S ← 0;
for i ← 1 to n do S ← S + A[i]; done;
for i ← 1 to n do
    if (A[i] = S − A[i]) then return true; endif;
done;
return false;

O(n + n) = O(2n) ≈ O(n) (upper bound)

Ω(n + 1) ≈ Ω(n) (lower bound, best case: A[1] = Σ_{i=1}^{n} A[i] − A[1])
Upper bound = lower bound, so:
The complexity is of order Θ(n)
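Algorithm 2 as a runnable Python sketch (the function name
has_element_equal_to_rest is an assumption, not from the slides):

def has_element_equal_to_rest(A):
    # Θ(n): one pass to compute the total, one pass to test each element
    total = sum(A)               # first loop of the slide
    for a in A:                  # second loop
        if a == total - a:       # a equals the sum of the n-1 others
            return True
    return False

print(has_element_equal_to_rest([3, 8, -6, 3, -1, 9]))   # True (element 8)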
6. Give the complexity of the following algorithms
(justify)

Algorithm 1
for i ← 1 to n do
    if (i % 2 = 0) then print("Good morning")
    else for j ← 1 to n do
             print("Good morning");
         done;
    endif;
done;
6. Give the complexity of the following algorithms
(justify) :
Algorithm 2
Function test(I/ n: integer): integer;
Begin
    S = 0;
    while (n >= 1) do
        if (n is even) then n = n/2 else n = n − 1; endif;
        S = S + 1;
    done;
    return(S);
End;
6. Give the complexity of the following algorithms
(justify) :
Algorithm 3
S ← 0; i ← 1;
while (i ≤ n) do
    for j ← n² to 5 with step −1 do
        S ← S + 1;
    done;
    i ← i + 2;
done;
Return S;
