Analysis and Design of Algorithms - Handout


University of Buea

Faculty of Engineering and Technology

Analysis and Design of Algorithms

Level 400
Course outline
Analysis and Design of Algorithms

Introduction
Complexity of algorithms
Maxima finding
Linear search
Sorting algorithm
A measure of the growth of functions
Classification of complexity
Greedy Algorithms
Greedy Change-making Algorithm
Greedy Routing Algorithm
Other classification of problems
What is an algorithm
A finite set (or sequence) of precise instructions for performing a
computation.
A finite set of steps that specify a sequence of operations to be
carried out in order to solve a specific problem

Properties of Algorithms
1. Finiteness
2. Absence of Ambiguity
3. Definition of Sequence
4. Feasibility
5. Input
6. Output
Algorithms
• Properties of algorithms:
• Input from a specified set,
• Output from a specified set (solution),
• Definiteness of every step in the computation,
• Correctness of output for every possible input,
• Finiteness of the number of calculation steps,
• Effectiveness of each calculation step, and
• Generality for a class of problems.
Example: Maxima finding

procedure max(a1, a2, …, an: integers)
  max := a1
  for i := 2 to n
    if max < ai then max := ai
  return max {the largest element}
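The procedure above can be sketched in Python; the function name `find_max` is our own choice, not from the handout.

```python
def find_max(a):
    """Return the largest element of a non-empty list a."""
    max_val = a[0]                 # max := a1
    for i in range(1, len(a)):     # for i := 2 to n
        if max_val < a[i]:         # if max < ai
            max_val = a[i]         # then max := ai
    return max_val

print(find_max([3, 1, 4, 1, 5, 9, 2, 6]))  # → 9
```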
Flowchart for maxima finding

[Flowchart: start → max := a1 → i := 2 → if max < ai then max := ai →
i := i + 1 → repeat until i = n → end]

Given n elements, can you count the total number of operations?
Complexity Analysis
• An objective way to evaluate the cost of an algorithm or code
section.
• The cost is usually computed in terms of space or time.
• The goal is to have a meaningful way to compare algorithms based
on a common measure.
• Complexity analysis has two phases:
– Algorithm analysis
– Complexity evaluation
Algorithm Analysis
• Algorithm analysis requires a set of rules to
determine how operations are to be counted.
• There is no generally accepted set of rules for
algorithm analysis.
• In some cases, an exact count of operations is desired;
in other cases, a general approximation is sufficient.
• The rules that follow are typical of those intended to
produce an exact count of operations.
Rules
1. We assume an arbitrary time unit.
2. Execution of one of the following operations takes
time 1:
1. assignment operation
2. single I/O operations
3. single Boolean operations, numeric comparisons
4. single arithmetic operations
5. function return
6. array index operations, pointer dereferences
More Rules
3. Running time of a selection statement (if, switch) is
the time for the condition evaluation + the maximum
of the running times for the individual clauses in the
selection.
4. Loop execution time is the sum, over the number of
times the loop is executed, of the body time + time
for the loop check and update operations, + time for
the loop setup.
Always assume that the loop executes the maximum
number of iterations possible
5. Running time of a function call is 1 for setup + the
time for any parameter calculations + the time
required for the execution of the function body.
Time complexity of algorithms
Measures the largest number of basic operations required to
execute an algorithm.

Example: Maxima finding

procedure max(a1, a2, …, an: integers)
  max := a1                            1 operation
  for i := 2 to n                      n-1 times
    if max < ai then max := ai         2 operations
  return max {the largest element}

The total number of operations is 1 + 2(n-1) = 2n - 1.

Time complexity of algorithms
Example: linear search (search for x in a list)

k := 1                                          (1 operation)
while k ≤ n do
  if x = ak then found else k := k + 1          (2n operations)

The maximum number of operations is 2n + 1. If we are lucky, the
search can end even in a single step.
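The operation count above can be checked with a short Python sketch that performs the search while tallying the comparisons it makes (the function name and the counting convention are ours, not the handout's):

```python
def linear_search(a, x):
    """Return (index of x in a, or -1; number of counted operations)."""
    ops = 1                      # k := 1
    k = 0
    while k < len(a):
        ops += 2                 # loop test + equality comparison
        if a[k] == x:
            return k, ops
        k += 1
    return -1, ops

# Worst case (x absent): 1 + 2n operations, matching the slide.
print(linear_search([5, 2, 8, 1], 7))   # (-1, 9) for n = 4
```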
Sorting algorithm
Let us consider the algorithm below, which sorts a list:

for i := 1 to n-1
  for j := i+1 to n
    if ai > aj then swap ai, aj
Example of a sorting algorithm

[Flowchart: start → i := 1 → j := i + 1 → if ai > aj then swap ai, aj →
j := j + 1, repeat until j = n → i := i + 1, repeat until i = n-1 → end]

Given n elements, can you count the total number of operations?
Bubble Sort

The worst case time complexity is
(n-1) + (n-2) + (n-3) + … + 2 + 1
= n(n-1)/2
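A minimal bubble sort in Python that also counts comparisons; on an input of length n it always performs (n-1) + (n-2) + … + 1 = n(n-1)/2 comparisons, matching the sum above (names are our own):

```python
def bubble_sort(a):
    """Return (sorted copy of a, number of comparisons made)."""
    a = list(a)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):             # pass i makes n-1-i comparisons
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:        # out of order: swap neighbours
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

print(bubble_sort([5, 4, 3, 2, 1]))    # ([1, 2, 3, 4, 5], 10) since 5*4/2 = 10
```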
The Big-O notation
It is a measure of the growth of functions and often used to measure
the complexity of algorithms.

DEF. Let f and g be functions from the set of integers (or real
numbers) to the set of real numbers. Then f is O(g(x)) if there
are constants C and k, such that

|f(x)| ≤ C|g(x)| for all x>k

Intuitively, f(x) grows no faster than some multiple of g(x) as x
grows without bound. Thus O(g(x)) defines an upper bound of
f(x).
The Big-O notation

[Figure: graphs of y = x2 and y = 4x2; for x > 1, the curve y = 4x2
lies above y = x2.]

Defines an upper bound of the growth of functions

Question: If f(x) is O(x2), is it also O(x3)?


The Big-O notation: The Growth of Functions

Question: If f(x) is O(x2), is it also O(x3)?

• Yes. x3 grows faster than x2, so a bound by a multiple of x2 is
also a bound by a multiple of x3.

• Therefore, we always try to find the smallest simple
function g(x) for which f(x) is O(g(x)).
Exercise on Complexity
Let us consider the procedure below:
procedure who_knows(a1, a2, …, an: integers)
m := 0
for i := 1 to n-1
for j := i + 1 to n
if |ai – aj| > m then m := |ai – aj|
{m is the maximum difference between any two numbers in
the input sequence}

1- What does this algorithm compute?
2- Evaluate, according to n, the number of comparisons.


Comparisons: n-1 + n-2 + n-3 + … + 1
= (n – 1)n/2 = 0.5n2 – 0.5n
Time complexity is O(n2).
Exercise on Complexity
Another algorithm solving the same problem:
procedure max_diff(a1, a2, …, an: integers)
min := a1
max := a1
for i := 2 to n
if ai < min then min := ai
else if ai > max then max := ai
m := max - min
1- Evaluate, according to n, the number of comparisons.
Comparisons: 2n – 2
Time complexity is O(n).
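The two procedures above can be sketched side by side in Python; both compute the maximum difference, but with O(n2) comparisons in the first and O(n) in the second (names follow the handout's procedures):

```python
def who_knows(a):
    """Maximum difference between any two elements, O(n^2) pairs."""
    m = 0
    for i in range(len(a) - 1):
        for j in range(i + 1, len(a)):
            if abs(a[i] - a[j]) > m:
                m = abs(a[i] - a[j])
    return m

def max_diff(a):
    """Same result with a single pass: track min and max, O(n)."""
    lo = hi = a[0]
    for x in a[1:]:
        if x < lo:
            lo = x
        elif x > hi:
            hi = x
    return hi - lo

data = [7, 2, 9, 4]
print(who_knows(data), max_diff(data))   # 7 7
```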
Useful Rules for Big-O
•For any polynomial f(x) = anxn + an-1xn-1 + … + a0, where
a0, a1, …, an are real numbers,
f(x) is O(xn).

•If f1(x) is O(g1(x)) and f2(x) is O(g2(x)), then


(f1 + f2)(x) is O(max(g1(x), g2(x)))

•If f1(x) is O(g(x)) and f2(x) is O(g(x)), then


(f1 + f2)(x) is O(g(x)).

•If f1(x) is O(g1(x)) and f2(x) is O(g2(x)), then


(f1f2)(x) is O(g1(x) g2(x)).
The Big-Ω (omega) notation
DEF. Let f and g be functions from the set of integers (or real
numbers) to the set of real numbers. Then f is Ω(g(x)) if there
are constants C and k, such that

|f(x)| ≥ C|g(x)| for all x > k

Example. 7x2 + 9x + 4 is Ω(x2), since 7x2 + 9x + 4 ≥ 1·x2 for all x.

Thus Ω defines the lower bound of the growth of a function.

Question. Is 7x2 + 9x + 4 also Ω(x)? If yes, why? If not, why not?


The Big-Theta (Θ) notation
DEF. Let f and g be functions from the set of integers (or real
numbers) to the set of real numbers. Then f is Θ(g(x)) if there
are positive constants C1 and C2 and a real number k, such that

C1·|g(x)| ≤ |f(x)| ≤ C2·|g(x)| for all x > k

Example. 7x2 + 9x + 4 is Θ(x2),
since 1·x2 ≤ 7x2 + 9x + 4 ≤ 8·x2 for all x > 10
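The two Θ bounds in the example can be checked numerically; a quick sketch (the range tested is our own choice):

```python
# Verify 1*x^2 <= 7x^2 + 9x + 4 <= 8*x^2 for a range of x > 10.
f = lambda x: 7 * x**2 + 9 * x + 4

for x in range(11, 1000):
    assert x**2 <= f(x) <= 8 * x**2

print("bounds hold for 10 < x < 1000")
```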
Average case performance
EXAMPLE. Compute the average case complexity of the linear
search algorithm.
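One way to approach the exercise, assuming the target is equally likely to be at each of the n positions and counting one comparison per position examined (these modelling assumptions are ours):

```python
# If x is found at position k, the search examines k elements.
# Averaging over k = 1..n gives the expected number of comparisons.
n = 100
positions = range(1, n + 1)
avg_comparisons = sum(positions) / n   # average of 1..n = (n + 1) / 2

print(avg_comparisons)                 # 50.5 for n = 100, i.e. Theta(n)
```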
Classification of complexity
Some growth functions with Θ
Complexity Terminology
Θ(1) Constant complexity
Θ(log n) Logarithmic complexity
Θ(n) Linear complexity
Θ(nc) Polynomial complexity
Θ(bn) (b>1) Exponential complexity
Θ(n!) Factorial complexity

We also use such terms when Θ is replaced by O (big-O)


Greedy Algorithms
In optimization problems, algorithms that make the best choice at
each step are called greedy algorithms.

Example. Devise an algorithm for making change for n cents using
quarters, dimes, nickels, and pennies, using the least total
number of coins.
Greedy Change-making Algorithm
Let c1, c2, …, cr be the denominations of the coins, with ci > ci+1

for i := 1 to r
  while n ≥ ci
  begin
    add a coin of value ci to the change
    n := n - ci
  end
endfor

Let the coins be 1, 5, 10, 25 cents. For making 38 cents, you will
use 1 quarter, 1 dime, and 3 pennies. The total count is 5, and it
is optimum.

Question. Is this always optimal? Does it always use the least
number of coins?
Greedy Change-making Algorithm
But if you don't use a nickel and you make change for 30 cents
using the same algorithm, then you will use 1 quarter and 5 pennies
(total 6 coins). But the optimum is 3 coins (use 3 dimes!).

So greedy algorithms produce results, but the results may be
sub-optimal.
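The greedy change-making algorithm can be sketched in Python; the function name is our own, and it exhibits both behaviours discussed above:

```python
def greedy_change(n, coins):
    """Make change for n cents greedily.
    coins must be listed in decreasing order of value."""
    change = []
    for c in coins:              # for i := 1 to r
        while n >= c:            # while n >= ci
            change.append(c)     # add a coin of value ci to the change
            n -= c               # n := n - ci
    return change

print(greedy_change(38, [25, 10, 5, 1]))  # [25, 10, 1, 1, 1] — 5 coins, optimal
print(greedy_change(30, [25, 10, 1]))     # 6 coins, but 3 dimes would do
```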
Greedy Routing Algorithm

[Figure: a network of nodes between points A and B, with node C
along an alternative route.]

If you need to reach point B from point A in the fewest number of hops,
which route will you take? If your knowledge is only local, you are
tempted to use a greedy algorithm and reach B in 5 hops, although it is
possible to reach B in only two hops.
Other classification of problems
• Problems that have polynomial worst-case complexity are called
tractable; problems of higher complexity are called intractable.
• Problems for which no solution exists are known as unsolvable
problems (like the halting problem). Otherwise they are called
solvable.
• Many solvable problems are believed to have the property that no
polynomial-time solution exists for them, but a solution, if
known, can be checked in polynomial time. These belong to the
class NP (as opposed to the class of tractable problems, which
belong to class P).
The Halting Problem
The halting problem asks the following question:

Given a program and an input to the program, determine whether the
program will eventually stop when it is given that input.

Take a trial solution:

• Run the program with the given input. If the program stops, we know
it stops.
• But if the program doesn't stop in a reasonable amount of time, we
cannot conclude that it won't stop. Maybe we didn't wait long enough!

Not decidable in general!
