CSC3303 - Note 1

ANALYSIS OF ALGORITHM

Analysis of algorithms is the process of estimating the efficiency of an algorithm, that is, determining how good or how bad an algorithm could be. There are two main parameters on which an algorithm can be analysed:

• Space Complexity: The amount of space required by an algorithm to run to completion.
• Time Complexity: A function of input size n that refers to the amount of time needed by an algorithm to run to completion.

Types of Time Complexity Analysis

1. Worst-case time complexity: For an input of size n, the worst-case time complexity is the maximum amount of time needed by an algorithm to complete its execution. It is thus a function defined by the maximum number of steps performed on an instance of input size n. Computer scientists are usually most interested in this case.
2. Average-case time complexity: For an input of size n, the average-case time complexity is the average amount of time needed by an algorithm to complete its execution. It is thus a function defined by the average number of steps performed on an instance of input size n.
3. Best-case time complexity: For an input of size n, the best-case time complexity is the minimum amount of time needed by an algorithm to complete its execution. It is thus a function defined by the minimum number of steps performed on an instance of input size n.
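
A minimal sketch in Python makes the three cases concrete (linear search is assumed as the example; it is not specified in the note):

def linear_search(arr, target):
    # Scan the list left to right until the target is found.
    for i, value in enumerate(arr):
        if value == target:
            return i    # index of the first match
    return -1           # target is not present

# Best case:    target is the first element -> 1 comparison, O(1).
# Worst case:   target is absent (or last)  -> n comparisons, O(n).
# Average case: target equally likely at any position -> about n/2
#               comparisons on average, which is still O(n).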

Complexity of Algorithms

The term algorithm complexity measures how many steps an algorithm requires to solve the given problem. It evaluates the order of the count of operations executed by an algorithm as a function of the input data size. To assess the complexity, the order (an approximation) of the count of operations is considered instead of counting the exact steps. The complexity of an algorithm is represented with O(f) notation, also termed asymptotic notation or "Big O" notation, where f is a function of the size of the input data. The asymptotic complexity O(f) determines the order in which resources such as CPU time, memory, etc. are consumed by the algorithm, articulated as a function of the size of the input data. The complexity can take forms such as constant, logarithmic, linear, n*log(n), quadratic, cubic, exponential, etc.; that is, the order (constant, logarithmic, linear, and so on) of the number of steps taken to complete a particular algorithm. The complexity of an algorithm is also commonly referred to as its "running time".
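
The notation can be stated precisely. A standard definition (not given in the note itself, stated here in LaTeX) is:

f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ge 0 \ \text{such that}\ 0 \le f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0.

For example, f(n) = 3n^2/2 + 5n is O(n^2): take c = 2 and n_0 = 10, since 3n^2/2 + 5n \le 2n^2 whenever n \ge 10.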

Typical Complexities of an Algorithm

We now take a look at the different types of complexities of an algorithm; most algorithms or programs will fall into one of the following categories:

1. Constant Complexity: Imposes a complexity of O(1). The algorithm executes a constant number of steps (such as 1, 5, or 10) to solve a given problem. The count of operations is independent of the input data size.
2. Logarithmic Complexity: Imposes a complexity of O(log(N)). The algorithm executes on the order of log(N) steps; to perform operations on N elements, the logarithm is usually taken to base 2. For N = 1,000,000, an algorithm with complexity O(log(N)) would take about 20 steps. The base of the logarithm does not affect the order of the operation count, so it is usually omitted.
3. Linear Complexity: Imposes a complexity of O(N). The algorithm takes about the same number of steps as there are elements to operate on; for example, if there exist 500 elements, it will take about 500 steps. In linear complexity, the number of steps depends linearly on the number of elements; for N elements the step count may be N/2 or 3*N, for instance. Closely related is O(N*log(N)) complexity, where the algorithm executes on the order of N*log(N) steps on N elements to solve the given problem; for 1,000 elements, an O(N*log(N)) algorithm executes about 10,000 steps.
4. Quadratic Complexity: Imposes a complexity of O(N^2). For an input of size N, the algorithm performs on the order of N^2 operations on N elements to solve a given problem. If N = 100, it will take about 10,000 steps. In other words, whenever the operation count has a quadratic relation to the input data size, the result is quadratic complexity. For example, for N elements the steps may be on the order of 3*N^2/2.
5. Cubic Complexity: Imposes a complexity of O(N^3). For an input of size N, the algorithm executes on the order of N^3 steps on N elements to solve a given problem. For example, if there exist 100 elements, it is going to execute about 1,000,000 steps.
6. Exponential Complexity: Imposes a complexity such as O(2^N) or O(N!). For N elements, the algorithm executes a count of operations that depends exponentially on the input data size. For example, if N = 10, the exponential function 2^N gives 1,024; if N = 20, it gives 1,048,576; and if N = 100, it gives a 31-digit number. The function N! grows even faster: N = 5 gives 120, and N = 10 gives 3,628,800, and so on. Since constant factors do not significantly affect the order of the operation count, they are ignored. Thus, algorithms performing N, N/2, or 3*N operations on the same number of elements are all considered linear and asymptotically equally efficient.
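
To make the growth rates tangible, the illustrative Python functions below perform O(1), O(N), and O(N^2) work respectively (the function names are ours, chosen for this sketch):

def first_element(arr):
    # Constant complexity, O(1): one step regardless of len(arr).
    return arr[0]

def total(arr):
    # Linear complexity, O(N): one addition per element.
    s = 0
    for x in arr:
        s += x
    return s

def count_pairs(arr):
    # Quadratic complexity, O(N^2): every element is paired with every
    # element, so the loop body runs len(arr) ** 2 times.
    pairs = 0
    for x in arr:
        for y in arr:
            pairs += 1
    return pairs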

A summary of these complexities is given below (step counts are approximate):

Complexity      Notation       Steps for N = 100
Constant        O(1)           1
Logarithmic     O(log N)       ~7
Linear          O(N)           100
N log N         O(N log N)     ~700
Quadratic       O(N^2)         10,000
Cubic           O(N^3)         1,000,000
Exponential     O(2^N)         a 31-digit number

ALGORITHM DESIGN TECHNIQUES

An algorithm design technique (or "strategy" or "paradigm") is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing. Learning these techniques is of utmost importance for the following reasons:
• First, they provide guidance for designing algorithms for new problems, i.e., problems for which there is no known satisfactory algorithm.
• Second, algorithms are the cornerstone of computer science. Every science is interested in classifying its principal subject, and computer science is no exception. Algorithm design techniques make it possible to classify algorithms according to an underlying design idea; therefore, they can serve as a natural way to both categorize and study algorithms.

Popular Algorithm Design Techniques

The following is a list of several popular design approaches:

1. Divide and Conquer Approach: The divide-and-conquer paradigm often helps in the discovery of efficient algorithms. It is a top-down approach. Algorithms that follow the divide-and-conquer technique involve three steps:
• Divide the original problem into a set of sub-problems.
• Solve every sub-problem individually, recursively.
• Combine the solutions of the sub-problems (top level) into a solution of the whole original problem.
Some standard divide-and-conquer algorithms (a merge-sort sketch follows this list):
• Binary Search is a searching algorithm.
• Quicksort is a sorting algorithm.
• Merge Sort is also a sorting algorithm.
• Closest Pair of Points: finding the closest pair of points in a set of points in the x-y plane.
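
As a sketch of the three steps, here is a minimal merge sort in Python (our own illustrative code, not taken from the note):

def merge_sort(arr):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])     # Divide, then solve each half recursively
    right = merge_sort(arr[mid:])
    return merge(left, right)        # Combine the two sub-solutions

def merge(left, right):
    # Merge two sorted lists into one sorted list.
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

The list is divided in half, each half is solved recursively, and merge combines the sorted halves; the overall running time is O(N*log(N)).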

2. Greedy Technique: The greedy method is an algorithmic paradigm that builds up a solution piece by piece, always choosing the next piece that offers the most obvious and immediate benefit. Problems where choosing a locally optimal option also leads to a globally optimal solution are the best fit for the greedy approach. The greedy method is used to solve optimization problems: problems in which we are given a set of input values and some quantity (the objective) is to be maximized or minimized, subject to some constraints or conditions.
• A greedy algorithm always makes the choice (the greedy criterion) that looks best at the moment, in order to optimize the given objective.
• A greedy algorithm does not always guarantee the optimal solution; however, it generally produces a solution that is very close in value to the optimal.

Examples of Greedy Algorithms

• Prim's Minimal Spanning Tree Algorithm.
• Travelling Salesman Problem.
• Graph – Map Coloring.
• Kruskal's Minimal Spanning Tree Algorithm.
• Dijkstra's Shortest Path Algorithm.
• Graph – Vertex Cover.
• Knapsack Problem (a fractional-knapsack sketch follows this list).
• Job Scheduling Problem.
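
As one concrete illustration, the fractional knapsack problem is solved optimally by a greedy algorithm whose greedy criterion is the value-to-weight ratio. A minimal Python sketch (illustrative names; the greedy approach is optimal for the fractional variant, not for 0/1 knapsack):

def fractional_knapsack(items, capacity):
    # items: list of (value, weight) pairs; fractions of an item may be taken.
    # Greedy criterion: take the best value-to-weight ratio first.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)        # take as much of this item as fits
        total += value * (take / weight)
        capacity -= take
    return total

# fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50) -> 240.0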

3. Dynamic Programming: Dynamic Programming (DP) is an algorithmic technique for solving an optimization problem by breaking it down into simpler sub-problems and utilizing the fact that the optimal solution to the overall problem depends upon the optimal solutions to its sub-problems. Dynamic programming is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics.
Dynamic programming is used for problems that can be divided into similar sub-problems, so that their results can be re-used. Mostly, these algorithms are used for optimization. Before solving the sub-problem at hand, a dynamic-programming algorithm examines the results of the previously solved sub-problems.

Some examples of Dynamic Programming are (a memoized Fibonacci sketch follows this list):

• Tower of Hanoi
• Dijkstra's Shortest Path
• Fibonacci sequence
• Matrix chain multiplication
• Egg-dropping puzzle, etc.
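
The Fibonacci sequence shows the core DP idea: store (memoize) the results of sub-problems so each is solved only once. A minimal Python sketch:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each fib(k) is computed once and re-used thereafter, so the run
    # time drops from exponential (plain recursion) to O(n).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# fib(10) -> 55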

4. Branch and Bound: The branch and bound method is a solution approach that partitions the feasible solution space into smaller and smaller subsets of solutions. It is used for solving optimization problems and is usually stated for minimization; if we are given a maximization problem, we can handle it with the branch and bound technique by simply converting it into a minimization problem (for example, by negating the objective). An important advantage of branch-and-bound algorithms is that we can control the quality of the solution to be expected, even if it has not yet been found: the cost of an optimal solution is at most a known bound smaller than the cost of the best solution computed so far. Branch and bound is an algorithm design paradigm that is generally used for solving combinatorial optimization problems.
Some examples of Branch-and-Bound problems are (a knapsack sketch follows this list):

• Knapsack problems
• Traveling Salesman Problem
• Job Assignment Problem, etc.
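
A minimal branch-and-bound sketch for the 0/1 knapsack problem in Python (our own illustrative code; the bound used here, current value plus the sum of all remaining values, is valid but deliberately simple):

def knapsack_bb(values, weights, capacity):
    # Maximization, explored as a binary tree: at depth i we either take
    # item i or skip it. A branch is pruned when even the optimistic
    # bound (current value + all remaining values) cannot beat the best
    # complete solution found so far.
    n = len(values)
    best = 0

    def explore(i, value, room):
        nonlocal best
        if room < 0:
            return                      # infeasible branch: over capacity
        best = max(best, value)
        if i == n:
            return                      # all items decided
        if value + sum(values[i:]) <= best:
            return                      # bound: prune this branch
        explore(i + 1, value + values[i], room - weights[i])  # take item i
        explore(i + 1, value, room)                           # skip item i

    explore(0, 0, capacity)
    return best

# knapsack_bb([60, 100, 120], [10, 20, 30], 50) -> 220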

5. Backtracking Algorithm: A backtracking algorithm is a problem-solving algorithm that uses a brute-force approach to find the desired output. The brute-force approach tries out all possible solutions and chooses the desired/best one. Backtracking is a general algorithm for finding solutions to some computational problems, notably constraint satisfaction problems; it incrementally builds candidates for the solutions and abandons a candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution.
A backtracking algorithm uses the depth-first search method. As the algorithm explores the solutions, a bounding function is applied so that the algorithm can determine whether a proposed partial solution satisfies the constraints. If it does, the algorithm keeps looking; if it does not, the branch is removed and the algorithm returns to the previous level. In any backtracking algorithm, the algorithm seeks a path to a feasible solution that includes some intermediate checkpoints; if the checkpoints do not lead to a viable solution, the algorithm can return to a checkpoint and take another path.
The following are scenarios in which backtracking can be used:
• It is used to solve a variety of problems; for example, to find a feasible solution to a decision problem.
• Backtracking algorithms have also proved very effective for solving optimization problems.
• In some cases, it is used to find all feasible solutions to an enumeration problem.
• Backtracking, on the other hand, is not regarded as an optimal problem-solving technique; it is useful when the solution to a problem does not have a time limit.
• Backtracking algorithms are used in:
➢ Finding all Hamiltonian paths present in a graph
➢ Solving the N-Queen problem (a sketch follows this list)
➢ Knight's Tour problem, etc.
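
A minimal Python sketch of backtracking for the N-Queen problem (illustrative code; the sets play the role of the bounding function, rejecting any square already under attack):

def solve_n_queens(n):
    # Place one queen per row; cols/diag1/diag2 record attacked lines.
    solutions, cols, diag1, diag2 = [], set(), set(), set()
    placement = []                              # placement[row] = column

    def place(row):
        if row == n:
            solutions.append(list(placement))   # a complete valid board
            return
        for col in range(n):
            # Bounding check: abandon this candidate square if attacked.
            if col in cols or row + col in diag1 or row - col in diag2:
                continue
            cols.add(col)
            diag1.add(row + col)
            diag2.add(row - col)
            placement.append(col)
            place(row + 1)                      # explore deeper (DFS)
            placement.pop()                     # backtrack to previous level
            cols.remove(col)
            diag1.remove(row + col)
            diag2.remove(row - col)

    place(0)
    return solutions

# len(solve_n_queens(8)) -> 92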

6. Randomized Algorithm: A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. In some cases, probabilistic algorithms are the only practical means of solving a problem. The output of a randomized algorithm on a given input is a random variable; thus, there may be a positive probability that the outcome is incorrect. As long as the probability of error is small for every possible input to the algorithm, this is not a problem.
There are two main types of randomized algorithms: Las Vegas algorithms, which always produce a correct result but whose running time is a random variable, and Monte Carlo algorithms, which have a bounded running time but may produce an incorrect result with some small probability.
Example 1: In Quicksort, using a random number to choose the pivot (sketched below).
Example 2: Trying to factor a large number by choosing random numbers as possible divisors.
