Complexity Analysis+Divide and Conquer+Dynamic Programming

The document discusses algorithm complexity analysis, focusing on time and space complexity, and introduces Big O notation for evaluating algorithm performance. It explains various properties of Big O notation and classifies common algorithm complexities, such as O(log n) and O(n!). Additionally, it covers algorithmic approaches like divide and conquer and dynamic programming, emphasizing their applications and the importance of optimizing solutions through techniques like memoization.


Chapter: Algorithm complexity analysis

Complexity analysis is defined as a technique to characterize the resources taken by an algorithm with respect to input size, independently of the machine, language, and compiler. It is used to evaluate how execution time varies across different algorithms. The resources evaluated when analyzing the complexity of an algorithm are mainly time (the number of instructions used to run the algorithm) and space (the amount of memory used by the algorithm).
1. Asymptotic notation in complexity analysis
1. Big O notation
Big O notation is a special notation that tells you how fast an algorithm is. Who cares? Well, it turns out that you'll use other people's algorithms often, and when you do, it's nice to understand how fast or slow they are; and for a complex algorithm you write yourself, it is worth testing how efficient it is. In this section, I'll explain what Big O notation is and give you a list of the most common running times for algorithms using it.
Big O notation represents an upper bound on the running time of an algorithm. Therefore, it gives the worst-case complexity of an algorithm. Using Big O notation, we can bound the growth of a running time from above, up to a constant factor. It is a model for quantifying algorithm performance.
[Figure: curves of common running times as a function of input size, expressed in Big O notation.]

2. Properties of Big O notation


Reflexivity:
For any function f(n), f(n) = O(f(n)).
Example: f(n) = n^2, then f(n) = O(n^2).
Transitivity:
If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).
Example: f(n) = n^2, g(n) = n^3, h(n) = n^4. Then f(n) = O(g(n)) and g(n) = O(h(n)). Therefore, f(n) = O(h(n)).
Constant Factor:
For any constant c > 0 and functions f(n) and g(n), if f(n) = O(g(n)), then c·f(n) = O(g(n)).
Example: f(n) = n, g(n) = n^2. Then f(n) = O(g(n)). Therefore, 2f(n) = O(g(n)).
Sum Rule:
If f(n) = O(g(n)) and h(n) = O(g(n)), then f(n) + h(n) = O(g(n)).
Example: f(n) = n^2, h(n) = n, g(n) = n^2. Then f(n) = O(g(n)) and h(n) = O(g(n)). Therefore, f(n) + h(n) = n^2 + n = O(n^2).
Product Rule:
If f(n) = O(g(n)) and h(n) = O(k(n)), then f(n) · h(n) = O(g(n) · k(n)).
Example: f(n) = n, g(n) = n^2, h(n) = n^3, k(n) = n^4. Then f(n) = O(g(n)) and h(n) = O(k(n)). Therefore, f(n) · h(n) = O(g(n) · k(n)) = O(n^6).
Composition Rule:
If f(n) = O(g(n)) and h(n) grows without bound, then f(h(n)) = O(g(h(n))).
Example: f(n) = 2n + 3 = O(n), h(n) = n^2. Then f(h(n)) = 2n^2 + 3 = O(n^2).
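As a worked example combining these properties, consider T(n) = 3n^2 + 5n + 7. By the constant factor property, 3n^2 = O(n^2); since 5n = O(n^2) and 7 = O(n^2), two applications of the sum rule give T(n) = 3n^2 + 5n + 7 = O(n^2). Indeed, T(n) <= 15n^2 for all n >= 1.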
3. Classification of some common algorithms using Big O notation
Here are five Big O run times that you’ll encounter a lot, sorted from fastest to slowest:
• O(log n), also known as log time. Example: Binary search.
• O(n), also known as linear time. Example: Simple search.
• O(n * log n). Example: A fast sorting algorithm, like quicksort (which we will see later).
• O(n^2). Example: A slow sorting algorithm, like selection sort (coming up in a next chapter).
• O(n!). Example: A really slow algorithm, like the traveling salesperson problem (coming up in a next chapter).
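To make the O(n) entry concrete, here is a minimal Python sketch of simple (linear) search; the function name and interface are illustrative, not from the original text. In the worst case the loop inspects all n elements, hence O(n) time:

def simple_search(items, target):
    # Scan the elements one by one; the worst case inspects all n of them.
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1  # target not found

print(simple_search([4, 8, 15, 16, 23, 42], 42))  # prints 5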
2. Time complexity
The time complexity of an algorithm is defined as the amount of time taken by an algorithm to run as a function of the length of the input. Note that the time to run is a function of the length of the input, not the actual execution time of the machine on which the algorithm is running. To estimate the time complexity, we need to consider the cost of each fundamental instruction and the number of times the instruction is executed.
If we have statements with basic operations like comparisons, return statements, assignments, and reading a variable, we can assume they take constant time each, O(1).

Example: compute the time complexity of the following algorithm


a ← 5;                      // Statement 1
if (a == 5) return true;    // Statement 2
x ← 4 > 5 ? 1 : 0;          // Statement 3
flag ← true;                // Statement 4

Solution:
Total time = time(statement1) + time(statement2) + ... + time(statementN).
Assuming that n is the size of the input, let's use T(n) to represent the overall time and t to represent the amount of time that a statement or collection of statements takes to execute:
T(n) = t(statement1) + t(statement2) + ... + t(statementN)
Each of the four statements above takes constant time, and their number does not depend on n, so T(n) = O(1), which means constant complexity.

Application: calculate the time complexity of the following algorithm.

for (int i = 0; i < n; i++)
    write("Hello World");
endFor
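Solution sketch: the write statement is O(1) and the loop executes it n times, so T(n) = n × O(1) = O(n), i.e., linear complexity.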

3. Space complexity
The amount of memory required by an algorithm to solve a given problem is called the space complexity of the algorithm. Problem solving using a computer requires memory to hold temporary data or the final result while the program is executing. The space complexity of an algorithm is the total space taken by the algorithm with respect to the input size; it includes both auxiliary space and the space used by the input.
Space complexity is a parallel concept to time complexity. If we need to create an array of size n, this will require O(n) space. If we create a two-dimensional array of size n*n, this will require O(n^2) space.
In recursive calls, stack space also counts. Consider the following function:

int function add(int n)
begin
    if (n <= 0)
        return 0;
    endif
    return n + add(n-1);
end

Here each call adds a level to the call stack:

add(4)
  -> add(3)
    -> add(2)
      -> add(1)
        -> add(0)

Each of these calls is added to the call stack and takes up actual memory, so the function uses O(n) space.
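For contrast, a minimal iterative Python sketch (illustrative, not from the original text) computes the same sum with a loop; no recursive calls pile up on the stack, so it needs only O(1) auxiliary space:

def add_iterative(n):
    # Same result as the recursive add, but constant auxiliary space.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(add_iterative(4))  # prints 10, same as add(4)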

Now let's look at the function below:

int function addSequence(int n)
begin
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += pairSum(i, i+1);
    endfor
    return sum;
end

int function pairSum(int x, int y)
begin
    return x + y;
end

There will be roughly O(n) calls to pairSum. However, those calls do not exist simultaneously on the call stack, so you only need O(1) space.


Chapter: Algorithmic approaches to solve problems

Sometimes you'll come across a problem that can't be solved by any algorithmic approach you've learned so far (the iterative techniques). When a good problem solver encounters such a problem, it is worth checking the toolbox for techniques not yet tried on it. This chapter presents some common approaches for solving algorithmic problems and helps you identify problems that fit a particular approach. The common techniques are greedy, dynamic programming, and divide and conquer.

Divide and conquer

Definition
Divide and conquer is a problem-solving approach that breaks a complex problem down into smaller, more manageable subproblems in order to solve the original problem.

Characteristics
The divide and conquer paradigm requires that the problem to be solved be decomposable into subproblems, and this decomposition can be recursive. This is also called a top-down approach.
The divide and conquer approach has three steps: divide, conquer, and combine. Only the first two
steps are made explicit in the name of the divide and conquer paradigm, but a step of combining the
solutions to the subproblems is necessary to solve the general problem. Let's study each of these three
steps in more detail.
• The "divide" step consists of breaking down the main problem into subproblems.
• The "conquer" step consists of solving each of the subproblems individually.
• The "combine" step consists of merging all of the results obtained for each of the subproblems
in order to obtain the final result of the solution to the original problem.
The general algorithm of the paradigm is as follows.

Algorithm DAndC(P)
begin
    if Small(P) then
        return Solution(P);
    else
        divide P into smaller instances P1, P2, ..., Pk, k > 1;
        apply DAndC to each of these subproblems;
        return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
    endif
end

Applications
• Quicksort

array function quicksort(array)
begin
    // Base case: arrays with 0 or 1 element are already "sorted."
    if length(array) < 2
        return array;
    else // Recursive case
        pivot = array[0];
        less = [i for i in array[1:] if i <= pivot];     // sub-array of the elements less than or equal to the pivot
        greater = [i for i in array[1:] if i > pivot];   // sub-array of the elements greater than the pivot
        return quicksort(less) + [pivot] + quicksort(greater);
    endif
end

• Dichotomic search (binary search), sketched below.
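As an illustration, here is a minimal Python sketch of binary search on a sorted array (the function name and interface are illustrative, not from the original text). Each step halves the search interval, which is the divide-and-conquer structure behind its O(log n) running time:

def binary_search(sorted_items, target):
    # Maintain the search interval [low, high] and halve it each iteration.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid              # found the target
        elif sorted_items[mid] < target:
            low = mid + 1           # discard the left half
        else:
            high = mid - 1          # discard the right half
    return -1                       # target not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # prints 3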

Dynamic programming
Dynamic programming (DP) is an algorithmic method for solving optimization problems. Programming in this context refers to mathematical programming, which is a synonym for optimization. DP solves a problem by combining the solutions to its sub-problems. The famous divide-and-conquer method also solves a problem in a similar manner. However, divide and conquer divides a problem into independent sub-problems, whereas in DP either the sub-problems depend on the solution sets of other sub-problems or the sub-problems appear repeatedly. DP exploits this dependency between sub-problems and attempts to solve each sub-problem only once; it then stores its solution in a table for future lookup. This strategy helps avoid the time spent recalculating solutions to old sub-problems, resulting in an efficient algorithm.
To illustrate DP, we use the Fibonacci series, defined as follows:
f(0) = 0, f(1) = 1, and f(n) = f(n-1) + f(n-2) for n > 1.
An implementation of this definition using recursion is given below.
var int function fibonacci(var int n)
begin
    if (n == 1)
        return 1;
    else
        if (n == 0)
            return 0;
        else
            return fibonacci(n-1) + fibonacci(n-2);
        endif
    endif
end
This algorithm quickly becomes very slow even for relatively small values of n. For n = 50, for example, it can take more than an hour to terminate. Let's analyze the problem.
1.1 Problem analysis
If we trace the calls made by the algorithm for n = 6, we obtain a call tree.
[Figure: recursion tree of the calls of f(6); not reproduced here.]
By carefully observing this call tree, you will notice that many calculations are unnecessary because they are carried out several times: for example, there are 2 calls of f(4), 3 calls of f(3), 5 calls of f(2), and so on. We could therefore greatly simplify the computation by performing each repeated calculation once and for all, "memorizing" the result and reusing it when necessary.
1.2 Solution: Memoization
We can solve the problem iteratively by constructing a table in a bottom-up fashion. Alternatively, the top-down recursive algorithm itself can be made efficient: the unnecessary recomputations that prevent it from being efficient are avoided by recording all the computed solutions along the way. This idea of filling a table in a top-down recursive fashion is called memoization. A more optimized solution to our initial problem is provided below.

var int function fib_mem(var int n)
begin
    var int[n+1] mem;                        // table of computed values, assumed initialized to 0
    return fib_mem_c(n, mem);
end

var int function fib_mem_c(var int n, var int[] m)
begin
    if (n == 0 or n == 1)
        m[n] = n;                            // base cases: f(0) = 0, f(1) = 1
        return m[n];
    else
        if (m[n] > 0)                        // value already computed: reuse it
            return m[n];
        else
            m[n] = fib_mem_c(n-1, m) + fib_mem_c(n-2, m);
            return m[n];
        endif
    endif
end
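For comparison, here is a minimal bottom-up (tabulation) sketch in Python, assuming the same definition f(0) = 0, f(1) = 1; the function name is illustrative:

def fib_table(n):
    # Build the table from the bottom up: each entry is computed once,
    # so the algorithm runs in O(n) time and uses O(n) space.
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_table(50))  # returns quickly, unlike the naive recursion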

Exercise: calculate the time complexity of this algorithm for both the recursive and the DP approaches.
