Design and Analysis of Algorithms
Mr. D. Manikandan
Introduction
An algorithm is a sequence of unambiguous instructions
for solving a problem, i.e., for obtaining a required output
for any legitimate input in a finite amount of time.
Notion of Algorithm
• The non-ambiguity requirement for each step of
an algorithm cannot be compromised.
• The range of inputs for which an algorithm works
has to be specified carefully.
• The same algorithm can be represented in several
different ways.
• There may exist several algorithms for solving the
same problem.
• Algorithms for the same problem can be based on
very different ideas and can solve the problem
with dramatically different speeds.
Characteristics of an algorithm:
• Input: Zero / more quantities are externally
supplied.
• Output: At least one quantity is produced.
• Definiteness: Each instruction is clear and
unambiguous.
• Finiteness: If the instructions of an algorithm
are traced, then for all cases the algorithm
must terminate after a finite number of
steps.
• Efficiency: Every instruction must be very
basic and must run in a short time.
Steps for Writing Algorithm
1. An algorithm is a procedure. It has two parts; the first part is head and the second part is body.
2. The Head section consists of keyword Algorithm and Name of the algorithm with parameter list.
E.g. Algorithm name1(p1, p2, …, pn)
The head section also has the following:
//Problem Description:
//Input:
//Output:
3. In the body of an algorithm various programming constructs like if, for, while and some statements like assignments are used.
4. The compound statements may be enclosed with { and } brackets. if, for, while can be closed by endif, endfor, endwhile
respectively. Proper indentation is a must for each block.
5. Comments are written using // at the beginning.
6. An identifier should begin with a letter, not a digit, and may contain alphanumeric characters after the first letter. There is no need
to mention data types.
7. The left arrow “←” used as assignment operator. E.g. v←10
8. Boolean operators (TRUE, FALSE), Logical operators (AND, OR, NOT) and Relational operators (<,<=, >, >=,=, ≠, <>) are also
used.
9. Input and Output can be done using read and write.
10. Array[], if then else condition, branch and loop can be also used in algorithm.
Example : 1
The greatest common divisor (GCD) of two nonnegative integers m and n (not both zero), denoted gcd(m, n), is defined
as the largest integer that divides both m and n evenly, i.e., with a remainder of zero.
Euclid’s algorithm is based on applying repeatedly the equality
gcd(m, n) = gcd(n, m mod n),
where m mod n is the remainder of the division of m by n, until m mod n is equal to 0.
Since gcd(m,0) = m,
the last value of m is also the greatest common divisor of the initial m and n.
gcd(60, 24) can be computed as
gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12.
Example : 2
For a pair of positive integers (m, n), we use the following steps to find the greatest common divisor of (48, 13).
Euclid’s algorithm for computing gcd(m, n) in simple steps
Step 1 If n = 0, return the value of m as the answer and stop; otherwise, proceed to Step 2.
Step 2 Divide m by n and assign the value of the remainder to r.
Step 3 Assign the value of n to m and the value of r to n. Go to Step 1.
Euclid’s algorithm for computing gcd(m, n) expressed in pseudocode
ALGORITHM Euclid_gcd(m, n)
//Computes gcd(m, n) by Euclid’s algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
r ←m mod n
m←n
n←r
return m
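A direct translation of the above pseudocode into Python (an illustrative sketch, not part of the original material; the function name euclid_gcd is chosen here for convenience):

def euclid_gcd(m, n):
    # Computes gcd(m, n) by Euclid's algorithm
    # Input: two nonnegative, not-both-zero integers m and n
    # Output: the greatest common divisor of m and n
    while n != 0:
        m, n = n, m % n   # r <- m mod n; m <- n; n <- r
    return m

# Example: euclid_gcd(60, 24) returns 12 and euclid_gcd(48, 13) returns 1.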
Example : 2
For a pair of positive integers (m, n), Euclid's algorithm computes the greatest common divisor of (48, 13) as follows.
Euclid’s algorithm is based on applying repeatedly the equality
gcd(m, n) = gcd(n, m mod n),
gcd(48, 13) can be computed as
gcd(48, 13) = gcd(13, 9)
= gcd(9, 4)
= gcd(4, 1)
= gcd(1, 0)
= 1
FUNDAMENTALS OF ALGORITHMIC PROBLEM SOLVING
Algorithmic Problem Solving
Understanding the Problem
This is the first step in designing an algorithm.
• Read the problem’s description carefully to understand
the problem statement completely.
• Ask questions for clarifying the doubts about the
problem.
• Identify the problem type and use an existing algorithm to
find a solution.
• The input (an instance of the problem) and the range of inputs
are fixed.
Algorithmic Problem Solving
Decision Making
Decision making is done on the following:
(a) Ascertaining the Capabilities of the Computational
Device
• In a random-access machine (RAM), instructions are
executed one after another (the central assumption is
that one operation is executed at a time). Accordingly, algorithms
designed to be executed on such machines are called
sequential algorithms.
• In some newer computers, operations are executed
concurrently, i.e., in parallel. Algorithms that take
advantage of this capability are called parallel algorithms.
• The choice of computational device (processor and
memory) is mainly based on space and time efficiency.
Algorithmic Problem Solving
Decision Making
Choosing between Exact and Approximate Problem
Solving
• The next principal decision is to choose between solving
the problem exactly or solving it approximately.
• An algorithm that solves the problem exactly and
produces a correct result is called an exact algorithm.
• If the problem is so complex that an exact solution
cannot be obtained, then we have to choose an
approximation algorithm, i.e., one that produces an
approximate answer.
• E.g., extracting square roots, solving nonlinear equations,
and evaluating definite integrals.
Algorithmic Problem Solving
Algorithm Design Technique
• An algorithm design technique (or “strategy” or “paradigm”) is a
general approach to solving problems algorithmically that is
applicable to a variety of problems from different areas of
computing.
• Algorithms + Data Structures = Programs
• Though algorithms and data structures are independent, they
are combined together to develop a program.
• Hence the choice of a proper data structure is required before
designing the algorithm.
• Implementation of an algorithm is possible only with the help of
both the algorithm and the data structures.
• An algorithmic strategy / technique / paradigm is a general
approach by which many problems can be solved algorithmically,
e.g., Brute Force, Divide and Conquer, Dynamic Programming,
Greedy Technique and so on.
Algorithmic Problem Solving
Methods of Specifying an Algorithm
There are three ways to specify an algorithm. They are:
1. Natural language
2. Pseudocode
3. Flowchart
Algorithmic Problem Solving
Natural Language
It is very simple and easy to specify an algorithm using natural language.
But natural language is often unclear, and the resulting specification may be imprecise.
Example: An algorithm to perform addition of two numbers.
Step 1: Read the first number, say a.
Step 2: Read the second number, say b.
Step 3: Add the above two numbers and store the result in c.
Step 4: Display the result from c.
Such a specification creates difficulty while actually implementing it.
Hence many programmers prefer to have specification of algorithm by means of Pseudocode.
Algorithmic Problem Solving
Pseudocode
• Pseudocode is a mixture of a natural language and programming language constructs.
• Pseudocode is usually more precise than natural language.
• The left arrow “←” is used for assignment, two slashes “//” for comments, and constructs such as if conditions and for/while loops are used.
ALGORITHM Sum(a, b)
//Problem Description: This algorithm performs addition of two numbers
//Input: Two integers a and b
//Output: Addition of two integers
c ← a + b
return c
• This specification is more useful for implementation in any language.
Algorithmic Problem Solving
Flowchart
• Flowchart is a graphical
representation of an algorithm.
• It is a method of expressing
an algorithm by a collection of
connected geometric shapes
containing descriptions of the
algorithm’s steps.
Proving an Algorithm’s Correctness
Algorithm Correctness
• Once an algorithm has been specified then its correctness must be proved.
• An algorithm must yield a required result for every legitimate input in a finite amount of time.
• For example, the correctness of Euclid’s algorithm for computing the greatest common divisor stems from
the correctness of the equality gcd(m, n) = gcd(n, m mod n).
• A common technique for proving correctness is to use mathematical induction because an algorithm’s
iterations provide a natural sequence of steps needed for such proofs.
• The notion of correctness for approximation algorithms is less straightforward than it is for exact
algorithms.
• The error produced by the algorithm should not exceed a predefined limit.
Analyzing an Algorithm
Algorithm Analysis
• For an algorithm, the most important property is efficiency. In fact, there are two kinds of algorithm efficiency.
• Time efficiency, indicating how fast the algorithm runs, and
• Space efficiency, indicating how much extra memory it uses.
• The efficiency of an algorithm is determined by measuring both time efficiency and space efficiency.
• So factors to analyze an algorithm are:
1. Time efficiency of an algorithm
2. Space efficiency of an algorithm
3. Simplicity of an algorithm
4. Generality of an algorithm
Coding an Algorithm
Algorithm Coding
• The coding / implementation of an algorithm is done in a suitable programming language like C, C++, Java, Python.
• The transition from an algorithm to a program can be done either incorrectly or very inefficiently.
• Implementing an algorithm correctly is necessary.
• The algorithm's power should not be reduced by an inefficient implementation.
• Standard tricks like computing a loop’s invariant (an expression that does not change its value) outside the loop,
collecting common subexpressions, replacing expensive operations by cheap ones, selection of programming
language and so on should be known to the programmer.
• Typically, such improvements can speed up a program only by a constant factor, whereas a better algorithm can
make a difference in running time by orders of magnitude. But once an algorithm is selected, a 10–50% speedup
may be worth the effort.
• It is very essential to write an optimized code (efficient code) to reduce the burden of compiler.
Important Problem Types
Problem Types
• Sorting
• Searching
• String Processing
• Graph problems
• Combinatorial problems
• Geometric problems
• Numerical problems
Important Problem Types
Sorting
• The sorting problem is to rearrange the items of a given list in nondecreasing (ascending)
order.
• Sorting can be done on numbers, characters, strings or records.
• For example, we may sort student records in alphabetical order of names, by student number, or by
grade-point average.
• Such a specially chosen piece of information is called a key.
• An algorithm is said to be in-place if it does not require extra memory, e.g., quicksort.
• A sorting algorithm is called stable if it preserves the relative order of any two equal
elements in its input.
Important Problem Types
Searching
• The searching problem deals with finding a given value, called a
search key, in a given set.
• E.g., Ordinary Linear search and fast binary search.
Important Problem Types
String Processing
• A string is a sequence of characters from an alphabet.
• Strings of particular interest are text strings, which comprise letters, numbers, and special characters; bit
strings, which comprise zeros and ones; and gene sequences, which
can be modeled by strings of characters from the four-character
alphabet {A, C, G, T}.
• String processing is very useful in bioinformatics.
• Searching for a given word in a text is called string matching.
Important Problem Types
Graph problems
• A graph is a collection of points called vertices, some of which are
connected by line segments called edges.
• Some of the graph problems are graph traversal, shortest path
algorithm, topological sort, traveling salesman problem and the
graph-coloring problem and so on.
Important Problem Types
Combinatorial problems
• These are problems that ask, explicitly or implicitly, to find a
combinatorial object such as a permutation, a combination, or a
subset that satisfies certain constraints.
• A desired combinatorial object may also be required to have some
additional property such as a maximum value or a minimum cost.
• In practice, combinatorial problems are among the most difficult
problems in computing. The traveling salesman problem and the
graph coloring problem are examples of combinatorial problems.
Important Problem Types
Geometric problems
• Geometric algorithms deal with geometric objects such as points,
lines, and polygons.
• Geometric algorithms are used in computer graphics, robotics, and
tomography.
• The closest-pair problem and the convex-hull problem come
under this category.
Important Problem Types
Numerical problems
• Numerical problems are problems that involve mathematical
equations, systems of equations, computing definite integrals,
evaluating functions, and so on.
• The majority of such mathematical problems can be solved only
approximately.
Algorithm Design Technique
Divide and Conquer Approach
• It is a top-down approach.
• The algorithms which follow the divide & conquer techniques involve
three steps:
• Divide the original problem into a set of subproblems.
• Solve every subproblem individually, recursively.
• Combine the solutions of the subproblems (top level) into a solution of
the whole original problem, as sketched below.
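As an illustration only (the slides name the technique but give no code), the following Python sketch applies the three steps above to merge sort:

def merge_sort(a):
    # Divide: a list with 0 or 1 elements is already sorted
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    # Conquer: solve each half recursively
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Combine: merge the two sorted halves into one sorted list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

# Example: merge_sort([5, 2, 9, 1]) returns [1, 2, 5, 9].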
Algorithm Design Technique
Greedy Technique
• Greedy method is used to solve the optimization problem.
• An optimization problem is one in which we are given a set of input values and an objective
that must be either maximized or minimized, subject to some
constraints or conditions.
• A greedy algorithm always makes the choice (the greedy criterion) that looks best at the
moment, in order to optimize the given objective.
• The greedy algorithm doesn't always guarantee the optimal solution; however, it
generally produces a solution that is very close in value to the optimal (see the sketch below).
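A minimal Python sketch of the greedy idea (illustrative only, not from the slides), using coin change with the coin system (25, 10, 5, 1), for which the greedy choice happens to be optimal; for other coin systems it can miss the optimal answer, which is exactly the caveat in the last point above:

def greedy_coin_change(amount, coins=(25, 10, 5, 1)):
    # Greedy criterion: always take the largest coin that still fits
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

# Example: greedy_coin_change(63) returns [25, 25, 10, 1, 1, 1].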
Algorithm Design Technique
Dynamic Programming
• Dynamic Programming is a bottom-up approach.
• We solve all possible small problems and then combine them to obtain
solutions for bigger problems.
• This is particularly helpful when the number of repeating (overlapping) subproblems is
exponentially large.
• Dynamic Programming is frequently related to Optimization Problems.
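A minimal bottom-up sketch in Python (illustrative only, not from the slides): Fibonacci numbers computed by solving all smaller subproblems first and combining them, instead of recomputing the exponentially many overlapping recursive calls:

def fib_bottom_up(n):
    # Bottom-up dynamic programming: solve F(0), F(1), ..., F(n) in order
    if n < 2:
        return n
    prev, curr = 0, 1                      # F(0) and F(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr     # combine the two smaller subproblems
    return curr

# Example: fib_bottom_up(10) returns 55.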
Algorithm Design Technique
Branch and Bound Algorithm
• In Branch & Bound algorithm a given subproblem, which cannot be bounded,
has to be divided into at least two new restricted subproblems.
• Branch and Bound algorithm are methods for global optimization in non-
convex problems.
• Branch and Bound algorithms can be slow, however in the worst case they
require effort that grows exponentially with problem size, but in some cases
we are lucky, and the method coverage with much less effort.
Algorithm Design Technique
Randomized Algorithm
A randomized algorithm is defined as an algorithm that is allowed to access a
source of independent, unbiased random bits, and it is then allowed to use
these random bits to influence its computation.
Algorithm Design Technique
Backtracking Algorithm
A backtracking algorithm tries each possibility until it finds the right one.
It is a depth-first search of the set of possible solutions.
During the search, if an alternative doesn't work, the algorithm backtracks to the choice
point, the place which presented different alternatives, and tries the next
alternative, as the sketch below illustrates.
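A small illustrative sketch in Python (not from the slides): a depth-first search with backtracking that looks for a subset of nonnegative numbers summing to a target; the helper subset_sum and its parameters are chosen here purely for illustration:

def subset_sum(nums, target, start=0, chosen=None):
    # Depth-first search: try each alternative, backtrack when it fails
    if chosen is None:
        chosen = []
    if target == 0:
        return list(chosen)                # a solution has been found
    for i in range(start, len(nums)):
        if nums[i] <= target:              # skip alternatives that cannot work
            chosen.append(nums[i])         # try this alternative
            found = subset_sum(nums, target - nums[i], i + 1, chosen)
            if found is not None:
                return found
            chosen.pop()                   # backtrack to the choice point
    return None

# Example: subset_sum([3, 34, 4, 12, 5, 2], 9) returns [3, 4, 2].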
Fundamental Analysis of the Algorithm
Efficiency of an algorithm
• The efficiency of an algorithm can be in terms of time and space.
• The efficiency of an algorithm can be analyzed in the following ways.
a. Analysis Framework.
b. Asymptotic Notations and its properties.
c. Mathematical analysis for Recursive algorithms.
d. Mathematical analysis for Non-recursive algorithms.
Fundamental Analysis of the Algorithm
Analysis Framework
• There are two kinds of efficiency to consider when analyzing any algorithm.
• Time efficiency, indicating how fast the algorithm runs, and
• Space efficiency, indicating how much extra memory it uses.
• The algorithm analysis framework consists of the following:
Measuring an Input’s Size.
Units for Measuring Running Time
Orders of Growth
Worst-Case, Best-Case, and Average-Case Efficiencies
Fundamental Analysis of the Algorithm
Measuring Running Time
Drawbacks
• Dependence on the speed of a particular computer.
• Dependence on the quality of a program implementing the algorithm.
• The compiler used in generating the machine code.
• The difficulty of clocking the actual running time of the program.
One possible approach is to count the number of times each of the algorithm’s
operations is executed. This approach is excessively difficult.
Fundamental Analysis of the Algorithm
Order of Growth
A difference in running times on small inputs is not what distinguishes efficient algorithms from inefficient ones; what matters is how fast the operation count grows as the input size grows. Common orders of growth, from slowest to fastest, are log n, n, n log n, n², n³, 2ⁿ and n!.
Fundamental Analysis of the Algorithm
Measuring Running Time
Consider the sequential search algorithm, which searches an array for a given search key K.
ALGORITHM SequentialSearch(A[0..n - 1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n - 1] and a search key K
//Output: The index of the first element in A that matches K or -1 if there are no
// matching elements
i ←0
while i < n and A[i] ≠ K do
i ←i + 1
if i < n
return i
else
return -1
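A direct Python rendering of this pseudocode (an illustrative sketch, not part of the original material):

def sequential_search(a, key):
    # Returns the index of the first element equal to key, or -1 if there is none
    i = 0
    n = len(a)
    while i < n and a[i] != key:
        i += 1
    return i if i < n else -1

# Example: sequential_search([4, 8, 15, 16], 15) returns 2; searching for 23 returns -1.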
Fundamental Analysis of the Algorithm
Efficiency
Worst-case efficiency
• The worst-case efficiency of an algorithm is its efficiency for the worst case input of size n.
• The algorithm runs the longest among all possible inputs of that size.
• For sequential search with an input of size n (no matching element, or the first match is in the last position), the running time is Cworst(n) = n.
Best-case efficiency
• The best-case efficiency of an algorithm is its efficiency for the best case input of size n.
• The algorithm runs the fastest among all possible inputs of that size n.
• In sequential search, if we search for the first element in a list of size n (i.e., the first element equals
the search key), then the running time is Cbest(n) = 1.
Fundamental Analysis of the Algorithm
Efficiency
Average-case efficiency
• The Average case efficiency lies between best case and worst case.
• To analyze the algorithm’s average case efficiency, we must make some assumptions about possible inputs of size n.
• The standard assumptions are that
• The probability of a successful search is equal to p (0 ≤ p ≤ 1) and
• The probability of the first match occurring in the ith position of the list is the same for every i.
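Under these standard assumptions, the familiar average-case comparison count for sequential search can be derived as follows (a sketch of the usual algebra, which the slide itself does not show):

Cavg(n) = [1 · p/n + 2 · p/n + … + n · p/n] + n · (1 − p)
        = (p/n) · (1 + 2 + … + n) + n(1 − p)
        = (p/n) · n(n + 1)/2 + n(1 − p)
        = p(n + 1)/2 + n(1 − p).

For example, if p = 1 (the search is always successful), Cavg(n) = (n + 1)/2, i.e., about half the list is inspected on average; if p = 0 (the search is always unsuccessful), Cavg(n) = n.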
Yet another type of efficiency is called amortized efficiency. It applies not to a single run of an algorithm but rather
to a sequence of operations performed on the same data structure.
Asymptotic Notations
Asymptotic Notation
Asymptotic notation is a notation used to make meaningful statements about the efficiency of a program.
The efficiency analysis framework concentrates on the order of growth of an algorithm’s basic operation count as the
principal indicator of the algorithm’s efficiency.
To compare and rank such orders of growth, computer scientists use three notations:
• O - Big oh notation
• Ω - Big omega notation
• Θ - Big theta notation
Asymptotic Notations
O - Big oh notation
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above
by some constant multiple of g(n) for all large n, i.e., if there exist some positive
constant c and some nonnegative integer n0 such that
t(n) ≤ c·g(n) for all n ≥ n0,
where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
• O = Asymptotic upper bound = Useful for worst-case analysis = Loose bound
• It is the most frequently used notation and it represents the worst-case time
complexity.
• The upper bound is considered.
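As a worked example of this definition (not on the slide itself), consider showing that 100n + 5 ∈ O(n²):

100n + 5 ≤ 100n + n = 101n   for all n ≥ 5
101n ≤ 101n²                 for all n ≥ 1

so the definition is satisfied with c = 101 and n0 = 5. (Alternatively, 100n + 5 ≤ 105n ≤ 105n² for all n ≥ 1, giving c = 105 and n0 = 1; the constants need not be unique.)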
Asymptotic Notations
Ω - Big omega notation
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n)
is bounded below by some positive constant multiple of g(n) for all
large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that
t(n) ≥ c·g(n) for all n ≥ n0,
Where t(n) and g(n) are nonnegative functions defined on the set of
natural numbers.
Ω = Asymptotic lower bound = Useful for best case analysis = Loose
bound
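As a quick worked example (not from the slide): n³ ∈ Ω(n²), since n³ ≥ n² for all n ≥ 0, so the definition is satisfied with c = 1 and n0 = 0.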
Asymptotic Notations
Θ - Big theta notation
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n)
is bounded both above and below by some positive constant
multiples of g(n) for all large n, i.e., if there exist some positive
constants c1 and c2 and some nonnegative integer n0 such that
c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0,
Where t(n) and g(n) are nonnegative functions defined on the set of
natural numbers.
Θ = Asymptotic tight bound = Useful for average case analysis
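As a worked example (not from the slide), consider showing that ½n(n − 1) ∈ Θ(n²). Upper bound: ½n(n − 1) = ½n² − ½n ≤ ½n² for all n ≥ 0. Lower bound: ½n² − ½n ≥ ½n² − ½n · ½n = ¼n² for all n ≥ 2. Hence the definition is satisfied with c1 = ½, c2 = ¼, and n0 = 2.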
Asymptotic Notations Properties
A useful property involving these notations: if t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}). Analogous assertions hold for the Ω and Θ notations. This implies that an algorithm made up of two consecutively executed parts has an overall efficiency determined by the part with the higher order of growth.
Mathematical Analysis for Recursive Algorithm
General Plan for Analyzing the Time Efficiency of Recursive Algorithms
1. Decide on a parameter (or parameters) indicating an input's size.
2. Identify the algorithm's basic operation.
3. Check whether the number of times the basic operation is executed
can vary on different inputs of the same size; if it can, the worst-case,
average-case, and best-case efficiencies must be investigated
separately.
4. Set up a recurrence relation, with an appropriate initial condition, for
the number of times the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its
solution.
EXAMPLE 1: Compute the factorial function F(n) = n! for an arbitrary
nonnegative integer n. Since n! = 1 · … · (n − 1) · n = (n − 1)! · n for n ≥ 1, and 0! = 1
by definition, we can compute F(n) = F(n − 1) · n with the following recursive
algorithm.
ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return F(n − 1) * n
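A direct Python rendering of this recursive algorithm (an illustrative sketch, not from the slides), instrumented with a simple counter so the number of multiplications M(n) analyzed below can be observed directly; the counter is an addition made here for illustration:

multiplications = 0            # counts the algorithm's basic operation

def factorial(n):
    # Computes n! recursively, following ALGORITHM F(n)
    global multiplications
    if n == 0:
        return 1
    multiplications += 1       # one multiplication per recursive step
    return factorial(n - 1) * n

# Example: factorial(5) returns 120 and leaves multiplications == 5,
# matching the recurrence analysis that follows.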
Recursive Algorithm Analysis
Algorithm analysis
1. For simplicity, we consider n itself as an indicator of this algorithm's
input size.
2. The basic operation of the algorithm is multiplication, whose number
of executions we denote M(n).
3. The function F(n) is computed according to the formula F(n) =
F(n − 1) · n for n > 0.
4. The number of multiplications M(n) needed to compute it must
satisfy the equality
M(n) = M(n − 1) + 1 for n > 0.
5. M(n − 1) multiplications are spent to compute F(n − 1), and one more
multiplication is needed to multiply the result by n.
Recursive Algorithm Analysis
The initial condition is M(0) = 0: the algorithm performs no multiplications when n = 0. Solving the recurrence by backward substitution gives
M(n) = M(n − 1) + 1
     = [M(n − 2) + 1] + 1 = M(n − 2) + 2
     = [M(n − 3) + 1] + 2 = M(n − 3) + 3
     = …
     = M(n − i) + i
     = …
     = M(0) + n = n.
Therefore the algorithm performs exactly n multiplications, i.e., M(n) = n ∈ Θ(n).
Thank you for
participating!
The mind is just like a muscle
— the more you exercise it,
the stronger it gets and the
more it can expand.