
Complexity of Algorithm

The term algorithm complexity measures how many steps are required by the algorithm to
solve the given problem. It evaluates the order of the count of operations executed by an
algorithm as a function of input data size.

To assess the complexity, the order (approximation) of the count of operations is always
considered instead of counting the exact steps.

O(f) notation represents the complexity of an algorithm and is also termed asymptotic
notation or "Big O" notation. Here f is a function of the input data size. The asymptotic
complexity O(f) determines the order in which resources such as CPU time, memory, etc.
are consumed by the algorithm, expressed as a function of the size of the input data.

The complexity can take forms such as constant, logarithmic, linear, n*log(n), quadratic,
cubic, exponential, etc. This is simply the order (constant, logarithmic, linear, and so on)
of the number of steps encountered for the completion of a particular algorithm. More
informally, the complexity of an algorithm is often called its "running time".

Typical Complexities of an Algorithm


o Constant Complexity:
It imposes a complexity of O(1). The algorithm executes a constant number of
steps, such as 1, 5, or 10, to solve a given problem. The count of operations is
independent of the input data size.
o Logarithmic Complexity:
It imposes a complexity of O(log(N)). The algorithm executes on the order of
log(N) steps; to perform operations on N elements, the logarithm is usually
taken to base 2.
For N = 1,000,000, an algorithm with a complexity of O(log(N)) would undergo about
20 steps (up to a constant factor).
o Linear Complexity:
It imposes a complexity of O(N). The algorithm takes roughly as many steps as
there are elements, so implementing an operation on N elements takes on the
order of N steps. For example, if there exist 500 elements, then it will take about
500 steps. Basically, in linear complexity, the number of steps depends linearly on
the number of elements; the step count for N elements can be, say, N/2 or 3*N.
o N*log(N) Complexity:
It imposes a run time of O(N*log(N)). The algorithm executes on the order of
N*log(N) steps on N elements to solve the given problem.
For 1,000 elements, an N*log(N) algorithm will execute about 10,000 steps
(1,000 × log₂(1,000) ≈ 1,000 × 10).
o Quadratic Complexity: It imposes a complexity of O(N²). For input data size N, the
algorithm performs on the order of N² operations on N elements to solve a
given problem.
If N = 100, it will endure 10,000 steps. In other words, whenever the count of
operations has a quadratic relation to the input data size, the result is
quadratic complexity. For example, for N elements the steps can be on the order
of 3*N²/2.
o Cubic Complexity: It imposes a complexity of O(N³). For input data size N, it
executes on the order of N³ steps on N elements to solve a given problem.
For example, if there exist 100 elements, it is going to execute 1,000,000 steps.
o Exponential Complexity: It imposes a complexity such as O(2^N), O(N!), or O(k^N).
For N elements, it executes a count of operations that grows exponentially with
the input data size.
For example, if N = 10, the exponential function 2^N results in 1,024. Similarly,
if N = 20, it results in 1,048,576, and if N = 100, it results in a number with
roughly 30 digits. The exponential function N! grows even faster; for example,
N = 5 results in 120. Likewise, N = 10 results in 3,628,800, and so on.

Since constants do not have a significant effect on the order of the count of operations,
they are ignored. Thus, algorithms that perform N, N/2, or 3*N operations on the same
number of elements are all considered linear and roughly equally efficient.
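
To make this concrete, here is a small C sketch (an illustrative example; the value of n is arbitrary) that counts the steps performed by a linear loop and by a logarithmic loop over the same input size:

#include <stdio.h>

int main(void) {
    long n = 1000000;
    long steps;

    /* Linear, O(N): one step per element. */
    steps = 0;
    for (long i = 0; i < n; i++)
        steps++;
    printf("O(N) loop:     %ld steps\n", steps);   /* 1,000,000 */

    /* Logarithmic, O(log N): the remaining range halves each iteration. */
    steps = 0;
    for (long i = n; i > 1; i /= 2)
        steps++;
    printf("O(log N) loop: %ld steps\n", steps);   /* about 20 */

    return 0;
}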

How to approximate the time taken by the Algorithm?
So, to find it out, we shall first understand the types of algorithms we have. There are two
types of algorithms:

1. Iterative Algorithm: In the iterative approach, the function repeatedly runs until
the condition is met or fails. It involves a looping construct.
2. Recursive Algorithm: In the recursive approach, the function calls itself until a
base condition is met. It involves a branching structure.

However, it is worth noting that any program written using iteration can be rewritten using
recursion. Likewise, a recursive program can be converted to iteration, making the two
forms equivalent to each other.
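
For example, the factorial function can be written in either style. The following C sketch (an illustrative example) shows both forms; the iterative one is analyzed by counting loop executions, the recursive one through the recurrence T(n) = 1 + T(n-1):

#include <stdio.h>

/* Iterative form: analyzed by counting loop executions (n-1 iterations, O(n)). */
long fact_iter(int n) {
    long result = 1;
    for (int i = 2; i <= n; i++)
        result *= i;
    return result;
}

/* Recursive form: analyzed via the recurrence T(n) = 1 + T(n-1), which is O(n). */
long fact_rec(int n) {
    if (n <= 1)
        return 1;
    return n * fact_rec(n - 1);
}

int main(void) {
    printf("%ld %ld\n", fact_iter(5), fact_rec(5));  /* prints 120 120 */
    return 0;
}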

But to analyze an iterative program, we have to count the number of times the loop is going
to execute, whereas for a recursive program we use recurrence equations, i.e., we write a
function F(n) in terms of smaller inputs such as F(n-1) or F(n/2).

Suppose the program is neither iterative nor recursive. In that case, it can be concluded that
there is no dependency of the running time on the input data size, i.e., whatever is the input
size, the running time is going to be a constant value. Thus, for such programs, the
complexity will be O(1).

For Iterative Programs


Consider the following programs, which are written in simple English and do not correspond
to any particular syntax.

Example1:

In the first example, we have an integer i and a for loop running from i equals 1 to n. Now
the question arises, how many times does the name get printed?

A()
{
    int i;
    for (i = 1 to n)
        printf("Edward");
}

Since i runs from 1 to n, the above program will print "Edward" n times. Thus, the
complexity will be O(n).

Example2:

A()
{
    int i, j;
    for (i = 1 to n)
        for (j = 1 to n)
            printf("Edward");
}

In this case, firstly, the outer loop will run n times, such that for each time, the inner loop will
also run n times. Thus, the time complexity will be O(n²).

Example3:

A()
{
    i = 1; S = 1;
    while (S <= n)
    {
        i++;
        S = S + i;
        printf("Edward");
    }
}

As we can see from the above example, we have two variables, i and S, and then we have
while S<=n, which means S will start at 1, and the loop will stop as soon as the value of S
becomes greater than n.

Here i increments in steps of one, and S increases by the current value of i, i.e., the
increment in i is linear, whereas the increment in S depends on i.

Initially;

i=1, S=1

After 1st iteration;

i=2, S=3

After 2nd iteration;

i=3, S=6

After 3rd iteration;

i=4, S=10 … and so on.

Since we don't know the value of n, let's suppose it to be k. Notice that the value of
S in the above case keeps increasing: for i=1, S=1; i=2, S=3; i=3, S=6; i=4, S=10; …

Thus, S is nothing but the sum of the first i natural numbers, i.e., by the time i
reaches k, the value of S will be k(k+1)/2.

To stop the loop, k(k+1)/2 has to be greater than n, and when we solve the equation
k(k+1)/2 > n, we get k on the order of √n. Hence, it can be concluded that we get a
complexity of O(√n) in this case.
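
This can be checked empirically. The following C sketch (an illustrative example) runs the loop above for growing values of n and compares the iteration count against √(2n):

#include <stdio.h>
#include <math.h>

int main(void) {
    for (long n = 100; n <= 1000000; n *= 100) {
        long i = 1, S = 1, iterations = 0;
        while (S <= n) {
            i++;
            S = S + i;      /* S becomes 1 + 2 + ... + i = i(i+1)/2 */
            iterations++;
        }
        printf("n=%-8ld iterations=%-6ld sqrt(2n)=%.1f\n",
               n, iterations, sqrt(2.0 * n));
    }
    return 0;
}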

For Recursive Programs


Consider the following recursive programs.

Example1:

A(n)
{
    if (n > 1)
        return (A(n-1));
}

Solution;

Here we will see the simple Back Substitution method to solve the above problem.

T(n) = 1 + T(n-1) …Eqn. (1)

Step1: Substitute n-1 at the place of n in Eqn. (1)

T(n-1) = 1 + T(n-2) ...Eqn. (2)

Step2: Substitute n-2 at the place of n in Eqn. (1)

T(n-2) = 1 + T(n-3) …Eqn. (3)

Step3: Substitute Eqn. (2) in Eqn. (1)

T(n)= 1 + 1+ T(n-2) = 2 + T(n-2) …Eqn. (4)

Step4: Substitute eqn. (3) in Eqn. (4)

T(n) = 2 + 1 + T(n-3) = 3 + T(n-3) = …... = k + T(n-k) …Eqn. (5)

Now, according to Eqn. (1), i.e. T(n) = 1 + T(n-1), the recursion continues while n>1.
Basically, n starts from a large value and decreases gradually. When n reaches 1, the
recursion stops, and such a terminating condition is called the anchor condition, base
condition or stopping condition.

Thus, for k = n-1, T(n) becomes:

Step5: Substitute k = n-1 in Eqn. (5)

T(n) = (n-1) + T(n-(n-1)) = (n-1) + T(1) = (n-1) + 1 = n

Hence, T(n) = n, which is O(n).
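
As a quick check, the following C sketch (an illustrative example) instruments the recursive routine A(n) from above with a call counter; the count grows linearly with n, matching T(n) = n:

#include <stdio.h>

static long calls = 0;   /* counts invocations of A */

/* The recursive routine analyzed above, instrumented to count its calls. */
void A(int n) {
    calls++;
    if (n > 1)
        A(n - 1);
}

int main(void) {
    A(100);
    printf("calls = %ld\n", calls);  /* prints 100, i.e., T(n) = n */
    return 0;
}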

Algorithm Design Techniques


The following is a list of several popular design approaches:

1. Divide and Conquer Approach: It is a top-down approach. The algorithms which follow
the divide & conquer techniques involve three steps:

o Divide the original problem into a set of subproblems.


o Solve every subproblem individually, recursively.
o Combine the solutions of the subproblems (top level) into a solution for the whole
original problem.
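
For illustration, here is a minimal C sketch of the three steps, using the simple problem of finding the maximum of an array:

#include <stdio.h>

/* Divide-and-conquer sketch: find the maximum of A[lo..hi] by splitting
   the range, solving each half recursively, and combining the answers. */
int range_max(const int A[], int lo, int hi) {
    if (lo == hi)                 /* base case: a single element */
        return A[lo];
    int mid = (lo + hi) / 2;                 /* divide */
    int left  = range_max(A, lo, mid);       /* conquer left half */
    int right = range_max(A, mid + 1, hi);   /* conquer right half */
    return (left > right) ? left : right;    /* combine */
}

int main(void) {
    int A[] = {7, 2, 9, 4, 1};
    printf("%d\n", range_max(A, 0, 4));  /* prints 9 */
    return 0;
}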
2. Greedy Technique: The greedy method is used to solve optimization problems. An
optimization problem is one in which we are given a set of input values, which are required
either to be maximized or minimized (known as the objective), usually subject to some
constraints or conditions.

o A greedy algorithm always makes the choice (the greedy criterion) that looks best
at the moment, in order to optimize a given objective.
o The greedy algorithm doesn't always guarantee the optimal solution; however, it
generally produces a solution that is very close in value to the optimal.
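
A classic small illustration is greedy coin change, sketched below in C: the greedy criterion is "take the largest coin that still fits." Note that this happens to be optimal for the coin set used here, but greedy choices are not optimal for arbitrary coin sets:

#include <stdio.h>

int main(void) {
    int coins[] = {25, 10, 5, 1};
    int amount = 63;
    for (int i = 0; i < 4; i++) {
        int used = amount / coins[i];   /* greedy criterion: biggest coin first */
        amount -= used * coins[i];
        printf("%d x %d\n", used, coins[i]);
    }
    return 0;   /* 63 = 2x25 + 1x10 + 0x5 + 3x1 */
}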

3. Dynamic Programming: Dynamic Programming is a bottom-up approach: we solve all
possible small problems and then combine them to obtain solutions for bigger problems.

This is particularly helpful when the number of overlapping subproblems is exponentially
large. Dynamic Programming is frequently related to optimization problems.
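
As a minimal bottom-up sketch, the following C example computes Fibonacci numbers by storing and reusing the overlapping subproblems fib(i-1) and fib(i-2) instead of recomputing them:

#include <stdio.h>

int main(void) {
    int n = 10;
    long fib[11];
    fib[0] = 0; fib[1] = 1;           /* smallest subproblems first */
    for (int i = 2; i <= n; i++)
        fib[i] = fib[i-1] + fib[i-2]; /* combine stored solutions */
    printf("fib(%d) = %ld\n", n, fib[n]);  /* prints fib(10) = 55 */
    return 0;
}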

4. Backtracking Algorithm: A backtracking algorithm tries each possibility until it finds
the right one. It is a depth-first search of the set of possible solutions. During the search,
if an alternative doesn't work, the algorithm backtracks to the choice point, the place which
presented different alternatives, and tries the next alternative.
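
For illustration, the following C sketch uses backtracking to search for a subset of an array that sums to a given target; whenever a branch fails, the search backs up to the choice point and tries the alternative:

#include <stdio.h>

/* Backtracking sketch: depth-first search for a subset of A[i..n-1]
   summing to target; on failure, back up and try the next choice. */
int subset_sum(const int A[], int n, int i, int target) {
    if (target == 0) return 1;        /* found a valid solution */
    if (i == n) return 0;             /* dead end: backtrack */
    if (subset_sum(A, n, i + 1, target - A[i]))  /* choice: take A[i] */
        return 1;
    return subset_sum(A, n, i + 1, target);      /* alternative: skip A[i] */
}

int main(void) {
    int A[] = {3, 34, 4, 12, 5, 2};
    printf("%s\n", subset_sum(A, 6, 0, 9) ? "found" : "not found");  /* found: 4+5 */
    return 0;
}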

5. Randomized Algorithm: A randomized algorithm uses a random number at least once
during the computation to make a decision.

Example 1: In Quick Sort, using a random number to choose a pivot.

Example 2: Trying to factor a large number by choosing random numbers as possible
divisors.
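
The following C sketch (simplified to the pivot-selection step of Example 1 only) shows such a random choice:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Randomized sketch: pick a quicksort pivot at random, which makes
   consistently bad O(n^2) splits unlikely for any fixed input. */
int choose_pivot(int A[], int lo, int hi) {
    int r = lo + rand() % (hi - lo + 1);        /* random index in [lo, hi] */
    int tmp = A[r]; A[r] = A[hi]; A[hi] = tmp;  /* move pivot to the end */
    return A[hi];
}

int main(void) {
    srand((unsigned)time(NULL));
    int A[] = {9, 3, 7, 1, 5};
    printf("pivot = %d\n", choose_pivot(A, 0, 4));
    return 0;
}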

Analyzing Algorithm Control Structure


To analyze a programming code or algorithm, we must notice that each instruction affects
the overall performance of the algorithm and therefore, each instruction must be analyzed
separately to analyze overall performance. However, there are some algorithm control
structures which are present in each programming code and have a specific asymptotic
analysis.

Some Algorithm Control Structures are:

1. Sequencing
2. If-then-else
3. for loop
4. While loop
1. Sequencing:
Suppose our algorithm consists of two parts, A and B. A takes time tA and B takes time tB
for computation. The total computation time tA + tB follows the sequence rule; according to
the maximum rule, this computation time is θ(max(tA, tB)).

Example:

Suppose tA = O(n) and tB = θ(n²).

Then, the total computation time can be calculated as

Computation Time = tA + tB
= θ(max(tA, tB))
= θ(max(O(n), θ(n²))) = θ(n²)

2. If-then-else:

The total computation time follows the condition rule for "if-then-else." According to the
maximum rule, this computation time is max(tA, tB), where tA and tB are the times of the
two branches.

Example:

Suppose tA = O(n²) and tB = θ(n²).

Calculate the total computation time for the following:

Total Computation = θ(max(tA, tB))
= max(O(n²), θ(n²)) = θ(n²)

3. For loop:
The general format of for loop is:
for (initialization; condition; update)
    Statement(s);

Complexity of for loop:

For two nested loops, the outer loop executes N times, and every time the outer loop
executes, the inner loop executes M times. As a result, the statements in the inner loop
execute a total of N * M times; when M = N, the total complexity for the two loops is O(N²).
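
Concretely, such a nested loop might look like the following C sketch (with small illustrative values for N and M):

#include <stdio.h>

int main(void) {
    int N = 4, M = 3, count = 0;
    /* Outer loop runs N times; inner loop runs M times per outer pass. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            count++;
    printf("%d\n", count);   /* prints 12, i.e., N * M */
    return 0;
}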

Consider the following loop:

for i ← 1 to n
{
    P(i)
}

If the computation time t(i) of P(i) varies as a function of i, then the total computation
time for the loop is given not by a multiplication but by a sum; that is, the loop

for i ← 1 to n
{
    P(i)
}

takes total time Σ (i=1 to n) t(i).

If the algorithm consists of nested "for" loops

for i ← 1 to n
{
    for j ← 1 to n
    {
        P(i, j)
    }
}

then the total computation time is Σ (i=1 to n) Σ (j=1 to n) t(i, j).

Example:

Consider the following "for" loop, Calculate the total computation time for the following:

for i ← 2 to n-1
{
    for j ← 3 to i
    {
        Sum ← Sum + A[i][j]
    }
}

Solution:

For each value of i, the inner loop executes (i - 2) times (and not at all when i = 2), so
the total computation time is

Σ (i=2 to n-1) (i - 2) = 0 + 1 + 2 + … + (n - 3) = (n - 2)(n - 3)/2, which is O(n²).

4. While loop:
A simple technique for analyzing a while loop is to find a function of the variables involved
whose value decreases each time around. Secondly, for the loop to terminate, that value
must remain a positive integer while it decreases. By keeping track of how many times the
value of the function decreases, one can bound the number of repetitions of the loop. The
other approach for analyzing a "while" loop is to treat it as a recursive algorithm.

Algorithm:
1. [Initialize] Set K := 1, LOC := 1 and MAX := DATA[1]
2. Repeat steps 3 and 4 while K ≤ N
3. If MAX < DATA[K], then:
       Set LOC := K and MAX := DATA[K]
4. Set K := K + 1
   [End of step 2 loop]
5. Write: LOC, MAX
6. EXIT
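
A direct C rendering of this algorithm might look as follows (a sketch; note that C arrays are 0-based, while the algorithm above indexes DATA from 1):

#include <stdio.h>

int main(void) {
    int DATA[] = {3, 8, 2, 8, 5};
    int N = 5;
    int LOC = 0, MAX = DATA[0];      /* step 1: initialize */
    for (int K = 1; K < N; K++) {    /* steps 2 and 4: loop over DATA */
        if (MAX < DATA[K]) {         /* step 3: new maximum found */
            LOC = K;
            MAX = DATA[K];
        }
    }
    printf("LOC = %d, MAX = %d\n", LOC, MAX);  /* step 5: LOC = 1, MAX = 8 */
    return 0;
}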

Example:

The running time of the algorithm arrayMax, which computes the maximum element in an
array of n integers, is O(n).

Solution:

arrayMax(A, n)
1. currentMax ← A[0]
2. for i ← 1 to n-1
3.     do if currentMax < A[i]
4.         then currentMax ← A[i]
5. return currentMax

The number of primitive operations T(n) executed by this algorithm is between:

2 + 1 + n + 4(n-1) + 1 = 5n (at best)
2 + 1 + n + 6(n-1) + 1 = 7n - 2 (at worst)

The best case T(n) = 5n occurs when A[0] is the maximum element. The worst case T(n) =
7n - 2 occurs when the elements are sorted in increasing order.

We may, therefore, apply the big-Oh definition with c = 7 and n₀ = 1 and conclude that the
running time of this algorithm is O(n).

Recurrence Relation
A recurrence is an equation or inequality that describes a function in terms of its values on
smaller inputs. To solve a recurrence relation means to obtain a function defined on the
natural numbers that satisfies the recurrence.

For example, the worst-case running time T(n) of the MERGE SORT procedure is described
by the recurrence

T(n) = θ(1)              if n = 1
T(n) = 2T(n/2) + θ(n)    if n > 1

There are four methods for solving Recurrence:

1. Substitution Method
2. Iteration Method
3. Recursion Tree Method
4. Master Method

1. Substitution Method:
The Substitution Method Consists of two main steps:

1. Guess the solution.
2. Use mathematical induction to find the boundary condition and show that the
guess is correct.

For Example1 Solve the equation by Substitution Method.

T(n) = T(n/2) + 1

We have to show that it is asymptotically bounded by O(log n).

Solution:

For T(n) = O(log n)

We have to show that for some constant c

T(n) ≤ c log n.

Put this in the given Recurrence Equation.

T(n) ≤ c log(n/2) + 1
= c log n - c log₂2 + 1
= c log n - c + 1
≤ c log n for c ≥ 1

Thus T(n) = O(log n).

Example2 Consider the Recurrence

T(n) = 2T(n/2) + n for n > 1

Find an asymptotic bound on T.

Solution:

We guess the solution is O(n log n). Thus for some constant c,

T(n) ≤ cn log n

Put this in the given Recurrence Equation.

Now,

T(n) ≤ 2c(n/2) log(n/2) + n
= cn log n - cn log 2 + n
= cn log n - n(c log 2 - 1)
≤ cn log n for c ≥ 1

Thus T(n) = O(n log n).
2. Iteration Methods
It means to expand the recurrence and express it as a summation of terms of n and the
initial condition.

Example1: Consider the Recurrence

T(n) = 1          if n = 1
     = 2T(n-1)    if n > 1

Solution:

T(n) = 2T(n-1)
= 2[2T(n-2)] = 2²T(n-2)
= 4[2T(n-3)] = 2³T(n-3)
= 8[2T(n-4)] = 2⁴T(n-4)    (Eq.1)

Repeat the procedure for i times

T(n) = 2^i T(n-i)
Put n-i = 1 or i = n-1 in (Eq.1)
T(n) = 2^(n-1) T(1)
= 2^(n-1) · 1    {T(1) = 1 .....given}
= 2^(n-1)
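
As a quick check, evaluating the recurrence directly in C (an illustrative sketch) reproduces the closed form 2^(n-1):

#include <stdio.h>

/* The recurrence above, T(n) = 2T(n-1) with T(1) = 1, evaluated directly. */
long T(int n) {
    if (n == 1) return 1;
    return 2 * T(n - 1);
}

int main(void) {
    for (int n = 1; n <= 10; n++)
        printf("T(%d) = %ld = 2^%d\n", n, T(n), n - 1);
    return 0;
}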

Example2: Consider the Recurrence

T(n) = T(n-1) + 1 and T(1) = θ(1).

Solution:

T(n) = T(n-1) + 1
= (T(n-2) + 1) + 1 = T(n-2) + 2
= (T(n-3) + 1) + 2 = T(n-3) + 3
= T(n-4) + 4 = T(n-5) + 5
= … = T(n-k) + k
where k = n-1
T(n-k) = T(1) = θ(1)
T(n) = θ(1) + (n-1) = 1 + n - 1 = n = θ(n).

Recursion Tree Method


1. The Recursion Tree Method is a pictorial representation of the iteration method, in the
form of a tree whose nodes are expanded at each level.

2. In general, we consider the second term in the recurrence as the root.

3. It is useful when the divide & conquer algorithm is used.


4. It is sometimes difficult to come up with a good guess. In a recursion tree, each node
represents the cost of a single subproblem.

5. We sum the costs within each level of the tree to obtain a set of per-level costs, and
then sum all the per-level costs to determine the total cost of all levels of the recursion.

6. A Recursion Tree is best used to generate a good guess, which can be verified by the
Substitution Method.

Example 1

Consider T(n) = 2T(n/2) + n²

We have to obtain the asymptotic bound using the recursion tree method.

Solution: In the recursion tree for this recurrence, the root costs n², its two children cost
(n/2)² each (n²/2 in total), the next level costs n²/4 in total, and so on, halving at every
level. The per-level costs form the geometric series n²(1 + 1/2 + 1/4 + …) ≤ 2n², so
T(n) = O(n²).

Example 2: Consider the following recurrence

T(n) = 4T(n/2) + n

Obtain the asymptotic bound using the recursion tree method.

Solution: In the recursion tree for this recurrence, level i contains 4^i nodes, each costing
n/2^i, so the total cost of level i is n·2^i. The tree has log₂n levels, and the per-level
costs n, 2n, 4n, … sum to roughly 2n², so T(n) = θ(n²).

Example 3: Consider the following recurrence

T(n) = T(n/3) + T(2n/3) + n

Obtain the asymptotic bound using the recursion tree method.

Solution: The given recurrence produces an unbalanced recursion tree: the root costs n,
and a node of cost c has children of cost c/3 and 2c/3.

When we add the values across the levels of the recursion tree, we get a value of n for
every level. The longest path from the root to a leaf is n → (2/3)n → (2/3)²n → … → 1,
which has length log₃/₂(n), so T(n) = O(n log n).
