CSC3303 - Note 1
Analysis is the process of estimating the efficiency of an algorithm, that is,
determining how good or how bad the algorithm could be. There are three main
cases based on which we can analyse an algorithm:
1. Worst-case time complexity: For an input size of n, the worst-case time
complexity is the maximum amount of time needed by an algorithm to
complete its execution. It is a function defined by the maximum number of
steps performed on any instance of input size n. Computer scientists are
most interested in this case.
2. Average-case time complexity: For an input size of n, the average-case
time complexity is the average amount of time needed by an algorithm to
complete its execution. It is a function defined by the average number of
steps performed over all instances of input size n.
3. Best-case time complexity: For an input size of n, the best-case time
complexity is the minimum amount of time needed by an algorithm to
complete its execution. It is a function defined by the minimum number of
steps performed on an instance of input size n.
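The three cases above can be made concrete with a small sketch (not part of the original notes): linear search through a list of n elements takes 1 comparison in the best case (target is the first element) and n comparisons in the worst case (target is absent).

```python
# Linear search with a comparison counter, to illustrate best-case
# and worst-case time complexity. Illustrative sketch only.

def linear_search(items, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]
# Best case: target is the first element -> 1 comparison.
print(linear_search(data, 7))   # (0, 1)
# Worst case: target is absent -> n = 5 comparisons.
print(linear_search(data, 4))   # (-1, 5)
```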
Complexity of Algorithms
The term algorithm complexity measures how many steps are required by the
algorithm to solve the given problem. It evaluates the order of the count of
operations executed by an algorithm as a function of the input data size. To
assess the complexity, the order (approximation) of the count of operations is
considered rather than the exact step count. The complexity of an algorithm is
represented with O(f) notation, also termed asymptotic notation or "Big O"
notation, where f is a function of the input data size. The asymptotic
complexity O(f) determines the order in which resources such as CPU time,
memory, etc. are consumed by the algorithm, articulated as a function of the
size of the input data. The complexity can take forms such as constant,
logarithmic, linear, n*log(n), quadratic, cubic, exponential, etc.: that is,
the order (constant, logarithmic, linear, and so on) of the number of steps
encountered for the completion of a particular algorithm. The complexity of an
algorithm, stated this way, is often called its "running time".
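A small sketch (added for illustration, not from the notes) makes the difference between these growth orders visible: tabulating approximate step counts for each complexity class as n grows shows how quickly the higher orders dominate.

```python
import math

# Rough step counts for common complexity classes, to show how the
# order of growth (not the constant factors) dominates as n increases.

def steps(n):
    return {
        "O(1)":       1,
        "O(log n)":   math.ceil(math.log2(n)),
        "O(n)":       n,
        "O(n log n)": n * math.ceil(math.log2(n)),
        "O(n^2)":     n ** 2,
    }

for n in (10, 100, 1000):
    print(n, steps(n))
```

For n = 1000, the table ranges from 1 step (constant) through 1000 (linear) up to 1,000,000 (quadratic), which is why only the order is worth tracking.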
Constant factors do not hold a necessary consequence for the operation count
order, so they are usually omitted.
3. Linear Complexity: It imposes a complexity of O(N). The algorithm takes
the same number of steps as the total number of elements to perform an
operation on N elements; for example, if there are 500 elements, it takes
about 500 steps. In linear complexity, the number of steps depends
linearly on the number of elements: for N elements the step count can be
N, N/2, or 3*N. A closely related class imposes a run time of O(N*log(N)):
it executes on the order of N*log(N) operations on N elements to solve
the given problem. For 1000 elements, an O(N*log(N)) algorithm executes
about 10,000 steps (since log2(1000) ≈ 10).
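The N*log(N) class can be illustrated with a comparison-counting merge sort, a standard O(N*log(N)) algorithm (this sketch is an addition, not from the notes):

```python
# Merge sort with a comparison counter: an O(N * log N) algorithm.
# Added sketch to make the N*log(N) step count concrete.

def merge_sort(items):
    """Return (sorted_list, comparison_count)."""
    if len(items) <= 1:
        return list(items), 0
    mid = len(items) // 2
    left, cl = merge_sort(items[:mid])
    right, cr = merge_sort(items[mid:])
    merged, comparisons = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, comparisons

result, count = merge_sort(list(range(1000, 0, -1)))
print(count)  # grows on the order of N * log2(N)
```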
4. Quadratic Complexity: It imposes a complexity of O(N²). For an input of
size N, it undergoes on the order of N² operations on N elements to solve
a given problem. If N = 100, it endures 10,000 steps. In other words,
whenever the order of operations has a quadratic relation to the input
data size, it results in quadratic complexity. For example, for N
elements, the steps may be on the order of 3*N²/2.
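A typical source of quadratic complexity is comparing every pair of elements with two nested loops, which takes exactly N*(N-1)/2 steps (an added sketch, not from the notes):

```python
# Quadratic-time sketch: comparing every pair of elements takes on
# the order of N^2 steps (exactly N*(N-1)/2 comparisons here).

def count_duplicate_pairs(items):
    """Return (number_of_equal_pairs, comparison_steps)."""
    steps = 0
    duplicates = 0
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            steps += 1
            if items[i] == items[j]:
                duplicates += 1
    return duplicates, steps

print(count_duplicate_pairs([1, 2, 3, 2, 1]))  # (2, 10): 5*4/2 = 10 steps
```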
5. Cubic Complexity: It imposes a complexity of O(N³). For an input of size
N, it executes on the order of N³ steps on N elements to solve a given
problem. For example, if there are 100 elements, it executes 1,000,000
steps.
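The schoolbook multiplication of two N x N matrices is a classic cubic-time example: three nested loops, N³ multiply-add steps (an added sketch, not from the notes):

```python
# Cubic-complexity sketch: naive multiplication of two N x N matrices
# uses three nested loops, i.e. on the order of N^3 steps.

def mat_mul(a, b):
    """Return (product_matrix, step_count) for square matrices a, b."""
    n = len(a)
    steps = 0
    result = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                steps += 1
                result[i][j] += a[i][k] * b[k][j]
    return result, steps

product, steps = mat_mul([[2, 3], [4, 5]], [[1, 0], [0, 1]])
print(product, steps)  # [[2, 3], [4, 5]] and 2^3 = 8 steps
```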
6. Exponential Complexity: It imposes a complexity such as O(2^N) or O(N!).
For N elements, it executes a count of operations that depends
exponentially on the input data size. For example, if N = 10, the
exponential function 2^N results in 1024; if N = 20, it results in
1,048,576; and if N = 100, it results in a number with 31 digits. The
function N! grows even faster: if N = 5, it results in 120; if N = 10, it
results in 3,628,800; and so on. Since constants do not have a
significant effect on the order of the operation count, it is better to
ignore them. Thus, algorithms that take N, N/2, or 3*N operations on the
same number of elements to solve a particular problem are all considered
linear and equally efficient.
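Enumerating every subset of a set of N items is a standard source of the 2^N count above, since each item doubles the number of subsets (an added sketch, not from the notes):

```python
# Exponential-complexity sketch: enumerating all subsets of N items
# visits 2^N subsets, so the work doubles each time N grows by 1.

def all_subsets(items):
    subsets = [[]]
    for item in items:
        # Each new item doubles the number of subsets built so far.
        subsets += [s + [item] for s in subsets]
    return subsets

print(len(all_subsets(range(10))))  # 2^10 = 1024
```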
ALGORITHM DESIGN TECHNIQUES
2. Greedy Technique: The greedy method is an algorithmic paradigm that
builds up a solution piece by piece, always choosing the next piece that
offers the most obvious and immediate benefit. Problems where making the
locally optimal choice also leads to a globally optimal solution are the
best fit for the greedy approach. The greedy method is used to solve
optimization problems. An optimization problem is one in which we are
given a set of input values that are required either to be maximized or
minimized (known as the objective), subject to some constraints or
conditions.
• A greedy algorithm always makes the choice (the greedy criterion) that
looks best at the moment, in order to optimize the given objective.
• A greedy algorithm does not always guarantee the optimal solution;
however, it generally produces a solution that is very close in value to
the optimal one.
• Tower of Hanoi
• Dijkstra Shortest Path
• Fibonacci sequence
• Matrix chain multiplication
• Egg-dropping puzzle, etc
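A concrete illustration of the greedy principle is activity selection (this problem is not among the examples listed above; it is a standard textbook greedy problem, and the sketch below is an addition to the notes). Repeatedly picking the compatible activity that finishes earliest, i.e. the locally best choice, yields a maximum-size schedule.

```python
# Greedy sketch: activity selection. Always take the compatible
# activity that finishes earliest; this local choice is globally optimal.

def select_activities(activities):
    """activities: list of (start, finish) pairs. Returns a maximum-size
    list of pairwise non-overlapping activities."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:      # compatible with what we kept so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(acts))  # [(1, 4), (5, 7), (8, 11)]
```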
4. Branch and Bound: The branch and bound method is a solution approach
that partitions the feasible solution space into smaller subsets of
solutions. It is used for solving optimization problems, usually stated
as minimization problems; a given maximization problem can be handled
with the branch and bound technique by simply converting it into an
equivalent minimization problem. An important advantage of
branch-and-bound algorithms is that we can control the quality of the
solution to be expected, even if it has not yet been found: the cost of
an optimal solution is at most a known bound smaller than the cost of the
best solution computed so far. Branch and bound is an algorithm design
paradigm generally used for solving combinatorial optimization problems.
Some examples of Branch-and-Bound Problems are:
• Knapsack problems
• Traveling Salesman Problem
• Job Assignment Problem, etc
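The knapsack problem listed above admits a compact branch-and-bound sketch (this is an added, simplified illustration, not the notes' own code): branch on taking or skipping each item, and prune any branch whose optimistic bound cannot beat the best value found so far.

```python
# Branch-and-bound sketch for the 0/1 knapsack problem. Branches:
# include or exclude each item. Bound: fill the remaining capacity
# fractionally (an optimistic estimate), and prune when it can't win.

def knapsack_bb(values, weights, capacity):
    n = len(values)
    # Consider items in decreasing value density for a tight bound.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(i, value, room):
        # Optimistic upper bound: take remaining items fractionally.
        for j in order[i:]:
            if weights[j] <= room:
                room -= weights[j]
                value += values[j]
            else:
                return value + values[j] * room / weights[j]
        return value

    best = 0
    def branch(i, value, room):
        nonlocal best
        if value > best:
            best = value
        if i == len(order) or bound(i, value, room) <= best:
            return  # prune: this subtree cannot improve on best
        j = order[i]
        if weights[j] <= room:                     # branch 1: take item j
            branch(i + 1, value + values[j], room - weights[j])
        branch(i + 1, value, room)                 # branch 2: skip item j

    branch(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```

The pruning test is exactly the "control the quality" idea above: the bound tells us how much better any solution in a subtree could possibly be.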
5. Backtracking:
• In some cases, it is used to find all feasible solutions to an
enumeration problem.
• Backtracking, on the other hand, is not regarded as an optimization
technique; it is useful when any feasible solution is acceptable and the
problem does not have a strict time limit.
• Backtracking algorithms are used in:
➢ Finding all Hamiltonian paths present in a graph
➢ Solving the N-Queen problem
➢ Knight's Tour problem, etc
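The N-Queen problem mentioned above is the standard backtracking illustration: place queens column by column, and undo (backtrack) any placement that leads to a dead end. The sketch below is an addition to the notes:

```python
# Backtracking sketch for the N-Queens problem: try a row for each
# column in turn, and backtrack when no safe row remains.

def solve_n_queens(n):
    solutions = []
    placement = []  # placement[c] = row of the queen in column c

    def safe(row, col):
        # No shared row, and no shared diagonal, with earlier queens.
        return all(
            row != r and abs(row - r) != col - c
            for c, r in enumerate(placement)
        )

    def place(col):
        if col == n:
            solutions.append(list(placement))
            return
        for row in range(n):
            if safe(row, col):
                placement.append(row)   # tentative choice
                place(col + 1)
                placement.pop()         # backtrack: undo and try next row

    place(0)
    return solutions

print(len(solve_n_queens(6)))  # 4 solutions for the 6-queens puzzle
```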