CC104 - Module 2
ALGORITHM
1. Algorithm Basics
2. Asymptotic Analysis
3. Divide and Conquer
4. Dynamic Programming
LESSON 1
ALGORITHM BASICS
From the data structure point of view, the following are some important categories of algorithms −
Search − Algorithm to search for an item in a data structure.
Sort − Algorithm to sort items in a certain order.
Insert − Algorithm to insert an item in a data structure.
Update − Algorithm to update an existing item in a data structure.
Delete − Algorithm to delete an existing item from a data structure.
Characteristics of an Algorithm
Not all procedures can be called an algorithm. An algorithm should have the following
characteristics −
Unambiguous − An algorithm should be clear and unambiguous. Each of its steps (or phases), and their inputs/outputs, should be clear and must lead to only one meaning.
Input − An algorithm should have 0 or more well-defined inputs.
Output − An algorithm should have 1 or more well-defined outputs, which should match the desired output.
Finiteness − An algorithm must terminate after a finite number of steps.
Feasibility − An algorithm should be feasible with the available resources.
Independent − An algorithm should have step-by-step directions that are independent of any programming code.
There are no well-defined standards for writing algorithms. Rather, they are problem and resource dependent. Algorithms are never written to support a particular programming language.
As we know, all programming languages share basic code constructs like loops (do, for, while) and flow control (if-else). These common constructs can be used to write an algorithm.
Algorithms are generally written in a step-by-step manner, but this is not always the case. Algorithm writing is a process that is carried out after the problem domain is well-defined. That is, we should know the problem domain for which we are designing a solution.
Example
Let's try to learn algorithm-writing by using an example.
Problem − Design an algorithm to add two numbers and display the result.
Step 1 − START
Step 2 − declare three integers a, b & c
Step 3 − define values of a & b
Step 4 − add values of a & b
Step 5 − store output of step 4 to c
Step 6 − print c
Step 7 − STOP
An algorithm tells the programmer how to code the program. Alternatively, the same algorithm can be written as −
Step 1 − START ADD
Step 2 − get values of a & b
Step 3 − c ← a + b
Step 4 − display c
Step 5 − STOP
In the design and analysis of algorithms, usually the second method is used to describe an algorithm. It makes it easy for the analyst to analyze the algorithm while ignoring all unwanted definitions. The analyst can observe what operations are being used and how the process is flowing.
Writing step numbers is optional.
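To make this concrete, the ADD algorithm above could be implemented as follows (a minimal Python sketch; the function name add is our own choice, not part of the module):
def add(a, b):
    c = a + b      # Step 3 − c ← a + b
    print(c)       # Step 4 − display c

add(3, 4)          # prints 7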
We design an algorithm to get a solution to a given problem. A problem can be solved in more than one way.
Hence, many solution algorithms can be derived for a given problem. The next step is to analyze those proposed solution algorithms and implement the most suitable one.
Algorithm Analysis
The efficiency of an algorithm can be analyzed at two different stages, before implementation and after implementation. They are the following −
A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency of an algorithm is measured by assuming that all other factors, for example, processor speed, are constant and have no effect on the implementation.
A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected algorithm is implemented using a programming language and then executed on a target computer. In this analysis, actual statistics like running time and space required are collected.
We shall learn about a priori algorithm analysis. Algorithm analysis deals with the execution or
running time of various operations involved. The running time of an operation can be defined as
the number of computer instructions executed per operation.
Algorithm Complexity
Suppose X is an algorithm and n is the size of the input data. The time and space used by algorithm X are the two main factors that decide the efficiency of X.
Time Factor − Time is measured by counting the number of key operations, such as comparisons in a sorting algorithm.
Space Factor − Space is measured by counting the maximum memory space required by the algorithm.
The complexity of an algorithm, f(n), gives the running time and/or the storage space required by the algorithm in terms of n, the size of the input data.
Space Complexity
Space complexity of an algorithm represents the amount of memory space required by the
algorithm in its life cycle. The space required by an algorithm is equal to the sum of the following
two components −
A fixed part, which is the space required to store certain data and variables that are independent of the size of the problem. For example, simple variables and constants used, program size, etc.
A variable part, which is the space required by variables whose size depends on the size of the problem. For example, dynamic memory allocation, recursion stack space, etc.
Space complexity S(P) of any algorithm P is S(P) = C + Sp(I), where C is the fixed part and Sp(I) is the variable part of the algorithm, which depends on instance characteristic I. Following is a simple example that tries to explain the concept −
Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C ← A + B + 10
Step 3 - STOP
Here we have three variables, A, B, and C, and one constant (the literal 10). Hence, S(P) = 1 + 3. Now, the actual space depends on the data types of the given variables and constants, and it will be multiplied accordingly.
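As a rough illustration of the two components, consider the following Python sketch (the function and its names are our own, chosen only to show the idea):
def sum_all(values):
    total = 0               # fixed part: one accumulator, independent of input size
    stack = list(values)    # variable part: a copy that grows with len(values)
    while stack:
        total += stack.pop()
    return total

print(sum_all([1, 2, 3]))   # prints 6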
Time Complexity
Time complexity of an algorithm represents the amount of time required by the algorithm to run to
completion. Time requirements can be defined as a numerical function T(n), where T(n) can be
measured as the number of steps, provided each step consumes constant time.
For example, addition of two n-bit integers takes n steps. Consequently, the total computational
time is T(n) = c ∗ n, where c is the time taken for the addition of two bits. Here, we observe that
T(n) grows linearly as the input size increases.
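This can be seen directly in code. The sketch below adds two n-bit numbers, given as lists of bits with the least significant bit first, performing one constant-time step per bit (a simplified Python sketch; real machines add whole words, not single bits):
def add_n_bits(a_bits, b_bits):
    # One loop iteration per bit, so T(n) = c * n for n-bit inputs.
    result, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        s = a + b + carry
        result.append(s % 2)
        carry = s // 2
    result.append(carry)    # final carry bit
    return result

print(add_n_bits([1, 1, 0], [0, 1, 1]))   # 3 + 6 = 9, printed as [1, 0, 0, 1]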
LESSON 2
ASYMPTOTIC ANALYSIS
Asymptotic analysis of an algorithm refers to defining the mathematical foundation/framing of its run-time performance. Using asymptotic analysis, we can conclude the best-case, average-case, and worst-case scenarios of an algorithm.
Asymptotic analysis is input bound, i.e., if there is no input to the algorithm, it is concluded to work in constant time. Other than the "input", all other factors are considered constant.
Asymptotic analysis refers to computing the running time of any operation in mathematical units of computation. For example, the running time of one operation may be computed as f(n), while for another operation it may be computed as g(n²). This means the running time of the first operation will increase linearly as n increases, while the running time of the second operation will increase quadratically as n increases. However, if n is significantly small, the running times of both operations may be nearly the same.
Usually, the time required by an algorithm falls under three types −
Best Case − Minimum time required for program execution.
Average Case − Average time required for program execution.
Worst Case − Maximum time required for program execution.
Asymptotic Notations
Following are the commonly used asymptotic notations to calculate the running time complexity of
an algorithm.
Ο Notation
Ω Notation
θ Notation
Big Oh Notation, Ο
The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time. It
measures the worst case time complexity or the longest amount of time an algorithm can possibly
take to complete.
For example, for a function f(n) −
Ο(f(n)) = { g(n) : there exist c > 0 and n0 such that g(n) ≤ c·f(n) for all n > n0 }
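For instance, 3n + 2 ∈ Ο(n): taking c = 4 and n0 = 2, we have 3n + 2 ≤ 4·n for all n > 2.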
Omega Notation, Ω
The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best case time complexity, or the minimum amount of time an algorithm can possibly take to complete.
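Formally, by analogy with the definition above −
Ω(f(n)) = { g(n) : there exist c > 0 and n0 such that g(n) ≥ c·f(n) for all n > n0 }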
Theta Notation, θ
The notation θ(n) is the formal way to express both the lower bound and the upper bound of an
algorithm's running time. It is represented as follows −
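θ(f(n)) = { g(n) : there exist c1 > 0, c2 > 0 and n0 such that c1·f(n) ≤ g(n) ≤ c2·f(n) for all n > n0 }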
Common Asymptotic Notations
constant − Ο(1)
logarithmic − Ο(log n)
linear − Ο(n)
quadratic − Ο(n²)
cubic − Ο(n³)
polynomial − n^Ο(1)
exponential − 2^Ο(n)
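To get a feel for how these orders differ, the short Python sketch below tabulates a few of them for growing n (the sample sizes are arbitrary):
import math

for n in (1, 10, 100, 1000):
    print(f"n={n:5d}  O(log n)={math.log2(n):5.1f}  O(n)={n:5d}  O(n^2)={n*n:8d}")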
LESSON 3
DIVIDE AND CONQUER
In the divide and conquer approach, the problem at hand is divided into smaller sub-problems, each sub-problem is solved, and the solutions are combined. Broadly, the approach has the following three steps.
Divide/Break
This step involves breaking the problem into smaller sub-problems. Sub-problems should represent
a part of the original problem. This step generally takes a recursive approach to divide the problem
until no sub-problem is further divisible. At this stage, sub-problems become atomic in nature but
still represent some part of the actual problem.
Conquer/Solve
This step receives a lot of smaller sub-problems to be solved. Generally, at this level, the problems
are considered 'solved' on their own.
Merge/Combine
When the smaller sub-problems are solved, this stage recursively combines them until they
formulate a solution of the original problem. This algorithmic approach works recursively and
conquer & merge steps works so close that they appear as one.
Examples
The following computer algorithms are based on the divide-and-conquer programming approach −
Merge Sort
Quick Sort
Binary Search
Strassen's Matrix Multiplication
Closest pair (points)
There are various ways to solve any computer problem, but the ones mentioned above are good examples of the divide and conquer approach. A sketch of the simplest of them, binary search, follows.
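Here is binary search written as a recursive divide-and-conquer routine (a minimal Python sketch; it assumes the input list is already sorted):
def binary_search(items, target, lo=0, hi=None):
    if hi is None:
        hi = len(items) - 1
    if lo > hi:                  # divide until the sub-problem is empty
        return -1
    mid = (lo + hi) // 2
    if items[mid] == target:     # conquer: the atomic case is solved directly
        return mid
    if items[mid] < target:      # recurse into the upper half
        return binary_search(items, target, mid + 1, hi)
    return binary_search(items, target, lo, mid - 1)

print(binary_search([2, 3, 5, 7, 11, 13], 11))   # prints 4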
LESSON 4
DYNAMIC PROGRAMMING
The dynamic programming approach is similar to divide and conquer in breaking down the problem into smaller and yet smaller possible sub-problems. But unlike divide and conquer, these sub-problems are not solved independently. Rather, the results of these smaller sub-problems are remembered and used for similar or overlapping sub-problems.
Dynamic programming is used where we have problems that can be divided into similar sub-problems, so that their results can be re-used. Mostly, these algorithms are used for optimization. Before solving the sub-problem at hand, a dynamic algorithm will try to examine the results of previously solved sub-problems. The solutions of sub-problems are combined in order to achieve the best solution.
So we can say that −
The problem should be able to be divided into smaller overlapping sub-problems.
An optimum solution can be achieved by using an optimum solution of smaller sub-
problems.
Dynamic algorithms use Memoization.
Comparison
In contrast to greedy algorithms, where local optimization is addressed, dynamic algorithms aim for an overall optimization of the problem.
In contrast to divide and conquer algorithms, where solutions are combined to achieve an overall
solution, dynamic algorithms use the output of a smaller sub-problem and then try to optimize a
bigger sub-problem. Dynamic algorithms use Memoization to remember the output of already
solved sub-problems.
Example
The following computer problems can be solved using the dynamic programming approach −
Fibonacci number series
Knapsack problem
Tower of Hanoi
All pair shortest path by Floyd-Warshall
Shortest path by Dijkstra
Project scheduling
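As a minimal sketch of the first item, here is a memoized Fibonacci function in Python; the lru_cache decorator acts as the memoization table that remembers already solved sub-problems:
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Overlapping sub-problems fib(n-1) and fib(n-2) are computed once,
    # then re-used from the cache on every later call.
    if n < 2:
        return n          # base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2)

print(fib(10))   # prints 55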