Analysis and Design of Algorithms
Introduction
Objective
Algorithms are at the core of computer science; computer programs would not exist without algorithms. A major reason for studying algorithms is to develop analytical skills.
Algorithm:
An algorithm is a sequence of unambiguous instructions for obtaining a
required output for any legitimate input in a finite amount of time.
The name comes from the Persian mathematician Abu Ja'far Muhammad ibn
Musa al-Khwarizmi.
Criteria
Input: An algorithm can take zero or more inputs.
Output: It should produce one or more outputs.
Definiteness: Each instruction should be clear and unambiguous.
Finiteness: The algorithm must terminate after a finite number of steps in all cases.
Effectiveness: Every instruction must be feasible (basic enough to be carried out).
Note:
A program is the expression of an algorithm in a programming language.
Debugging is the process of executing a program on sample data sets to
determine whether faulty results occur; it can show the presence of
errors but not their absence.
Profiling is executing the correct program and measuring its time and
space requirements.
Notion of Algorithm
The non-ambiguity requirement for each step of an algorithm cannot
be compromised.
Algorithms for the same problem can be based on very different ideas
and can solve the problem with dramatically different speeds.
ALGORITHM Euclid(m,n)
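The body of the pseudocode is not reproduced in these notes. As a rough illustration, the following Python sketch implements Euclid's idea, gcd(m, n) = gcd(n, m mod n), stopping when the second number becomes 0; the function name euclid_gcd is my own choice.

from __future__ import annotations

def euclid_gcd(m: int, n: int) -> int:
    """Compute gcd(m, n) by Euclid's algorithm: gcd(m, n) = gcd(n, m mod n)."""
    while n != 0:
        m, n = n, m % n   # replace (m, n) by (n, m mod n)
    return m

print(euclid_gcd(60, 24))  # 12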
For example, for the numbers 60 and 24 the algorithm will first try d = 24, then 23,
and so on until it reaches 12, where it stops.
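A minimal Python sketch of this consecutive-integer-checking idea, assuming both inputs are positive integers (the function name is mine):

def gcd_consecutive_check(m: int, n: int) -> int:
    """Try d = min(m, n), then d-1, d-2, ... until d divides both m and n."""
    d = min(m, n)
    while d > 1:
        if m % d == 0 and n % d == 0:
            return d
        d -= 1
    return 1

print(gcd_consecutive_check(60, 24))  # 12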
ALGORITHM Sieve(n)
// Implements the sieve of Eratosthenes.
// Input: an integer n ≥ 2
// Output: the list L of all primes less than or equal to n
for p ← 2 to n do
    A[p] ← p
for p ← 2 to ⌊√n⌋ do
    if A[p] ≠ 0          // p has not been eliminated
        j ← p * p
        while j ≤ n do
            A[j] ← 0     // mark the multiple of p as eliminated
            j ← j + p
// copy the remaining elements of A to the list L of primes
i ← 0
for p ← 2 to n do
    if A[p] ≠ 0
        L[i] ← A[p]
        i ← i + 1
return L
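A minimal Python sketch of the same sieve, returning the primes not exceeding n (the function name is mine):

import math

def sieve(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    a = list(range(n + 1))                  # a[p] = p for p = 0..n
    for p in range(2, math.isqrt(n) + 1):
        if a[p] != 0:                       # p has not been eliminated
            j = p * p
            while j <= n:
                a[j] = 0                    # mark multiples of p as non-prime
                j += p
    return [a[p] for p in range(2, n + 1) if a[p] != 0]

print(sieve(25))  # [2, 3, 5, 7, 11, 13, 17, 19, 23]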
Analyzing an Algorithm:
Time efficiency: How fast the algorithm runs.
Space efficiency: How much extra space the algorithm needs.
Simplicity: Simpler algorithms are easier to understand.
Generality: It is easier to design an algorithm for a problem posed in
more general terms.
Coding an algorithm:
A good algorithm is the result of repeated effort and rework.
Most algorithms are destined to be implemented as computer programs.
Not only the implementation but also the optimization of the code is
necessary; this increases the speed of operation.
A working program provides the opportunity for an empirical analysis
of the underlying algorithm.
Important Problem Types:
1. Sorting
2. Searching
3. String Processing
4. Graph Problems
5. Combinatorial Problems
6. Geometric Problems
7. Numerical Problems
Sorting:
It refers to rearranging the items of a given list in ascending order. For example,
we may need to sort numbers, characters, strings, records, etc.
We need to choose a piece of information by which to order the items; this piece
of information is called a key.
An important use of sorting is to speed up searching.
There are many algorithms for sorting. Although some algorithms are
indeed better than others, there is no algorithm that would be the best
solution in all situations.
A sorting algorithm is called stable if it preserves the relative order of
any two equal elements in its input.
A sorting algorithm is said to be in-place if it does not require extra memory,
except possibly for a few memory units.
Searching:
It deals with finding a given value, called the search key, in a given set.
There are several algorithms ranging from sequential search to binary
search.
Some algorithms are based on representing the underlying set in a
different form more conducive to searching. They are used in large
databases.
Some algorithms work faster than others but require more memory.
Some are very fast only in sorted arrays.
String Processing:
A string is a sequence of characters.
String processing algorithms have been important for computer science
for a long time in conjunction with computer languages and compiling
issues.
String matching is one kind of such problem.
Graph Problems
A graph can be thought of as a collection of points called vertices, some
of which are connected by line segments called edges.
They can be used for modeling a wide variety of real-life applications.
Basic graph algorithms include graph-traversal algorithms, shortest-path
algorithms, and topological sorting for graphs with directed edges.
Combinatorial Problems:
These problems ask us to find a combinatorial object (such as a permutation,
a combination, or a subset) that satisfies certain constraints and has
some desired property.
These are the most difficult problems, because the number of combinatorial
objects grows extremely fast with a problem's size, reaching unimaginable
magnitudes even for moderate-sized instances.
There are no known algorithms for solving most such problems exactly in an
acceptable amount of time.
Geometric problems:
They deal with geometric objects such as points, lines, and polygons.
These algorithms are used in developing applications for computer graphics and robotics.
Numerical Problems:
These are the problems that involve mathematical objects of continuous
nature:
Solving equations, system of equations, computing definite integrals,
evaluating functions and so on.
The majority of such problems can be solved only approximately.
Such problems require manipulating real numbers, which can be
represented in a computer only approximately.
A large number of arithmetic operations leads to round-off errors that can
drastically distort the output.
Fundamental Data Structures
1. Arrays
2. Linked Lists.
Arrays:
An array is a sequence of n items of the same data type that are stored
contiguously in computer memory and made accessible by specifying a
value of the array index. Arrays are often used to store strings
(character strings or binary/bit strings).
In arrays, access time is constant regardless of where in the array the element
is, whereas in linked lists access time varies depending on the position of the
element in the list.
Array of n elements
Linked List:
A linked list is a sequence of zero or more elements called nodes, each containing a data item and a pointer to the next node.
Two special types of lists, stacks and queues, are particularly important.
A stack is a list in which insertions and deletions can be done only at one
end, called the top of the stack (LIFO).
A queue is a list in which insertion is done at one end, called the rear, and
deletion is done at the other end, called the front (FIFO).
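As a brief sketch, a Python list can serve as a stack and collections.deque as a queue (the variable names are illustrative):

from collections import deque

# Stack: insertions and deletions at one end (the top): LIFO.
stack = []
stack.append(1); stack.append(2); stack.append(3)
print(stack.pop())      # 3 (last in, first out)

# Queue: insert at the rear, delete from the front: FIFO.
queue = deque()
queue.append(1); queue.append(2); queue.append(3)
print(queue.popleft())  # 1 (first in, first out)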
Graphs:
If the pairs of vertices are ordered (e.g., the pair (u, v) is directed from u to v),
then the graph is called directed.
Graph representations:
[Figure: a sample graph on the vertices a, b, c, d, e, f.]
Adjacency linked lists:
It is a collection of linked lists, one for each vertex, each containing all the
vertices adjacent to that list's vertex. Usually such a list starts with a header
identifying the vertex for which the list is compiled.
If a graph is sparse, the adjacency linked list representation may use less
space than the corresponding adjacency matrix, despite the extra storage used
for pointers; a sketch of both representations follows.
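A small Python sketch contrasting the two representations for an undirected graph on the vertices a to f. The concrete edge set here is only illustrative, not the one from the figure above:

# Illustrative undirected graph; the edges are chosen for demonstration only.
vertices = ['a', 'b', 'c', 'd', 'e', 'f']
edges = [('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'f'), ('c', 'e'), ('d', 'e')]

# Adjacency matrix: an n x n matrix of 0/1 entries.
index = {v: i for i, v in enumerate(vertices)}
matrix = [[0] * len(vertices) for _ in vertices]
for u, v in edges:
    matrix[index[u]][index[v]] = matrix[index[v]][index[u]] = 1

# Adjacency lists: one list per vertex, holding the vertices adjacent to it.
adj = {v: [] for v in vertices}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

print(adj['c'])  # ['a', 'b', 'e']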
Weighted graph:
It is a graph with numbers assigned to its edges; these numbers are called
weights. Many real-life applications can be modeled with weighted graphs.
A path from vertex u to vertex v is a sequence of adjacent vertices that starts
with u and ends with v. If all vertices of a path are distinct, the path is called a
simple path.
Directed path:
A sequence of vertices in which every consecutive pair of the vertices is
connected by an edge directed from the vertex listed first to the vertex listed
next.
Connected Graph:
A graph is said to be connected if for every pair of its vertices u and v there
is a path from u to v.
Cycle:
It is a simple path of a positive length that starts and ends at the same
vertex.
A graph with no cycle is said to be acyclic.
Tree:
It’s a connected acyclic graph.
Forest:
It is a graph that has no cycles but is not necessarily connected.
Rooted Tree:
In a tree it is possible to select an arbitrary vertex and consider it as the
root. Such a tree is called a rooted tree.
Depth of a vertex v is the length of the simple path from the root to v.
Height of a tree is the length of the longest simple path from the root to a leaf.
For any vertex v in a tree T, all the vertices on the simple path from the root
to that vertex are called ancestors of v.
If (u, v) is the last edge of the simple path from the root to vertex v (and u ≠ v),
u is said to be the parent of v and v is called a child of u.
Vertices that have the same parent are said to be siblings.
A vertex with no children is called a leaf.
A vertex with at least one child is called parental.
All the vertices for which a vertex v is an ancestor are said to be
descendants of v.
A vertex v with all its descendants is called the subtree of T rooted at that
vertex.
Ordered Tree:
An ordered tree is a rooted tree in which all the children of each vertex are
ordered. It is convenient to assume that in a tree's diagram, all the children
are ordered left to right.
Binary Tree:
A binary tree can be defined as an ordered tree in which every vertex has
no more than two children and each child is designated as either a left child
or a right child of its parent. The subtree with its root at the left (right) child
of a vertex is called the left (right) subtree of that vertex.
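A minimal Python sketch of a binary tree node with left and right children (class and attribute names are illustrative):

class BTNode:
    """A vertex of a binary tree with at most two children."""
    def __init__(self, item, left=None, right=None):
        self.item = item
        self.left = left    # root of the left subtree (or None)
        self.right = right  # root of the right subtree (or None)

# A small tree:   5
#                / \
#               3   8
root = BTNode(5, BTNode(3), BTNode(8))
print(root.left.item, root.right.item)  # 3 8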
Computer Representation:
[Figure: (a) first child-next sibling representation of a tree; (b) its binary tree representation.]
Sets:
A set is an unordered collection (possibly empty) of distinct items called its elements. Ex: S = {2, 3, 5, 7}.
Computer Implementation:
2 methods:
1. The first considers only sets that are subsets of some large set U called
   the universal set. If set U has n elements, then any subset S of U can
   be represented by a bit string of size n, called a bit vector.
   a. If U = {1, 2, 3, 4, 5, 6, 7, 8, 9}, then S = {2, 3, 5, 7} is represented by
      the bit string 011010100 (see the sketch after this list).
2. The second uses a linked list structure to indicate the set's elements. This
   can only be used for finite sets.
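A short Python sketch of the bit-vector idea for U = {1, ..., n} (the function name is mine):

def to_bit_vector(s, n):
    """Represent a subset s of the universal set U = {1, ..., n} as a bit vector."""
    return [1 if i in s else 0 for i in range(1, n + 1)]

S = {2, 3, 5, 7}
print(to_bit_vector(S, 9))  # [0, 1, 1, 0, 1, 0, 1, 0, 0]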
Multiset:
An unordered collection of items that are not necessarily distinct.
Dictionary:
A data structure that implements searching for a given item, adding a new
item, and deleting an item is called a dictionary.
Questions:
1. Explain the term algorithm, with its properties. Give one example.
2. Define the following terms:
a) Binary Tree
d) Siblings
e) Forest
Issues:
Simplicity
Generality
Space requirements
Time requirements
Space efficiency: It deals with the extra space the algorithm requires.
Measuring an input's size: for example, the size of a positive integer n can be
measured by the number b of bits in its binary representation, b = ⌊log₂ n⌋ + 1.
There are drawbacks in measuring running time in standard time units: the running
time depends on the speed of a particular computer and on the quality of the
program's implementation. Instead of measuring time directly, we count the number
of times the algorithm's basic operation is executed.
The basic operation contributes most to the running time of the algorithm.
Orders of growth:
By using small inputs we cannot distinguish efficient algorithms from
inefficient ones; what matters is how fast the count grows with the input size.
At the other end, the exponential and factorial functions grow extremely
fast.
Efficiencies:
There are many algorithms for which the running time depends not only on the
input size but also on the particular data items in the input. Consider sequential search:

ALGORITHM SequentialSearch(A[0..n-1], K)
// Searches for a given value K in a given array A[0..n-1].
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return -1
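A direct Python sketch of this search (the function name is mine):

def sequential_search(a, key):
    """Return the index of the first element equal to key, or -1 if absent."""
    i = 0
    while i < len(a) and a[i] != key:
        i += 1
    return i if i < len(a) else -1

print(sequential_search([9, 4, 7, 2], 7))  # 2
print(sequential_search([9, 4, 7, 2], 5))  # -1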
Worst-case efficiency:
The worst-case efficiency of an algorithm is its efficiency for the worst
case input of size n, which is an input (or inputs) of size n for which the
algorithm runs the longest among all possible inputs of that size.
We analyze the algorithm to see what kind of inputs yield the largest
value of the basic operation's count C(n) among all possible inputs of size n
and then compute this worst-case value Cworst(n).
For sequential search, Cworst(n) = n.
It guarantees that for any instance of size n, the running time will not
exceed Cworst (n) .
Best-case efficiency:
The best-case efficiency of an algorithm is its efficiency for the
best case input of size n, which is an input (or inputs) of size n
for which the algorithm runs the fastest among all possible
inputs of that size.
For sequential search, Cbest(n) = 1.
Note: The best case doesn’t mean the small input; it means the input of size n for
which the algorithm runs the fastest.
The analysis of the best-case efficiency is not nearly as important as that of the
worst-case efficiency.
Standard assumptions (average-case analysis of sequential search):
The probability of a successful search is p (0 ≤ p ≤ 1), and the probability of the
first match occurring in the ith position is the same for every i, namely p/n. Then
Cavg(n) = [1·(p/n) + 2·(p/n) + ... + n·(p/n)] + n·(1 - p)
        = (p/n)[1 + 2 + ... + n] + n(1 - p)
        = p(n+1)/2 + n(1 - p).
For example, if p = 1 (the search must be successful), Cavg(n) = (n+1)/2; if p = 0
(the search must be unsuccessful), Cavg(n) = n.
Amortized efficiency:
It applies not to a single run of an algorithm but rather to a sequence
of operations performed on the same data structure.
O-notation:
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if
t(n) is bounded above by some constant multiple of g(n) for all
large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0.
For example, 100n + 5 ≤ 100n + n (for all n ≥ 5) = 101n ≤ 101n², so
100n + 5 ∈ O(n²) with c = 101 and n0 = 5.
Other values for the constants c and n0 can also be chosen.
Θ-notation:
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is
bounded both above and below by some positive constant multiples of g(n)
for all large n, i.e., if there exist some positive constants c1 and c2 and some
nonnegative integer n0 such that c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.
Proof (of the theorem that if t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then
t1(n) + t2(n) ∈ O(max{g1(n), g2(n)})):
Let us denote c3 = max(c1, c2) so that we can use both inequalities. Adding
the two inequalities above yields
t1(n) + t2(n) ≤ c1·g1(n) + c2·g2(n) ≤ c3·g1(n) + c3·g2(n) = c3[g1(n) + g2(n)] ≤ 2·c3·max{g1(n), g2(n)},
so t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).
The basic asymptotic efficiency classes, in increasing order of growth: 1, log n, n, n log n, n², n³, 2ⁿ, n!.
4. Find out C(n) [the number of times the algorithm's basic operation is
executed.]
ALGORITHM MaxMin(A[0..n-1])
// Compares consecutive pairs of elements and then compares the larger one with
// the current maximum and the smaller one with the current minimum.
minval ← A[n-1]; maxval ← A[n-1]   // seed with the last element, which the loop never examines on its own
for i ← 0 to n-2 do
    if A[i] ≤ A[i+1]
        if A[i] < minval
            minval ← A[i]
    else
        if A[i] > maxval
            maxval ← A[i]
return minval, maxval
Analysis:
Number of inputs: n elements.
Number of comparisons required: in each iteration only one block, either the 'if'
block or the 'else' block, is executed, so each of the n-1 iterations performs 2
comparisons.
Therefore,
C(n) = (n-1) × 2 = 2n - 2 ∈ Θ(n).
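A rough Python sketch of the same pairwise scheme (the function name, and the choice to seed both values with the last element so that it is never skipped, are mine):

def max_min(a):
    """Return (minval, maxval) using one pair comparison plus one more per step."""
    minval = maxval = a[-1]
    for i in range(len(a) - 1):
        if a[i] <= a[i + 1]:        # a[i] is the smaller of the pair
            if a[i] < minval:
                minval = a[i]
        else:                       # a[i] is the larger of the pair
            if a[i] > maxval:
                maxval = a[i]
    return minval, maxval

print(max_min([3, 9, 1, 7, 5]))  # (1, 9)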
ALGORITHM UniqueElements(A[0..n-1])
// Determines whether all the elements in a given array are distinct.
// Output: returns "true" if all the elements are distinct and "false" otherwise.
for i ← 0 to n-2 do
    for j ← i+1 to n-1 do
        if A[i] = A[j]
            return false
return true

Cworst(n) = (n-1) + (n-2) + ... + 1 = n(n-1)/2 ∈ Θ(n²).
Matrix multiplication:
ALGORITHM MatrixMultiplication(A[0..n-1, 0..n-1], B[0..n-1, 0..n-1])
// Multiplies two n × n matrices by the definition-based algorithm.
// Output: matrix C = AB
for i ← 0 to n-1 do
    for j ← 0 to n-1 do
        C[i, j] ← 0.0
        for k ← 0 to n-1 do
            C[i, j] ← C[i, j] + A[i, k] * B[k, j]
return C
C(n) = ∑[i=0..n-1] ∑[j=0..n-1] ∑[k=0..n-1] 1 = ∑[i=0..n-1] ∑[j=0..n-1] n = ∑[i=0..n-1] n² = n³ ∈ Θ(n³).
ALGORITHM F(n)
// Computes n! recursively.
if n = 0 return 1
else return F(n-1) * n
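A minimal Python sketch of this recursive definition (the function name is mine). The number of multiplications M(n) satisfies M(n) = M(n-1) + 1 with M(0) = 0, so M(n) = n.

def fact(n):
    """Compute n! recursively: F(n) = F(n-1) * n, with F(0) = 1."""
    if n == 0:
        return 1
    return fact(n - 1) * n

print(fact(5))  # 120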
Recurrence relations
History of Fibonacci:
The sequence was introduced by Leonardo Fibonacci in 1202 as a solution to a
problem about the growth of a rabbit population.
ALGORITHM F(n)
// Computes the nth Fibonacci number recursively, directly from the definition.
if n ≤ 1 return n
else return F(n-1) + F(n-2)
Analysis:
Let A(n) be the number of additions needed to compute F(n). The numbers of
additions needed for computing F(n-1) and F(n-2) are A(n-1) and A(n-2),
respectively, and one more addition is needed to compute their sum:
A(n) = A(n-1) + A(n-2) + 1 for n > 1, with A(0) = 0, A(1) = 0.
Rewriting in terms of B(n) = A(n) + 1:
[A(n) + 1] - [A(n-1) + 1] - [A(n-2) + 1] = 0, i.e. B(n) = B(n-1) + B(n-2), with B(0) = 1, B(1) = 1.
We can observe that B(n) satisfies the same recurrence as F(n) except that it
starts with two 1s, so
B(n) = F(n+1), and hence A(n) = B(n) - 1 = F(n+1) - 1.
Using the explicit formula for the Fibonacci numbers,
A(n) = F(n+1) - 1 = (1/√5)·[(1+√5)/2]^(n+1) - (1/√5)·[(1-√5)/2]^(n+1) - 1.
[Figure: tree of recursive calls for computing the Fibonacci number for n = 5, showing repeated calls such as F(4), F(3), F(2), F(1), and F(0).]
ALGORITHM Fib(n)
// Computes the nth Fibonacci number iteratively (bottom up).
F[0] ← 0; F[1] ← 1
for i ← 2 to n do
    F[i] ← F[i-1] + F[i-2]
return F[n]
Analysis:
Input size: n. The basic operation is addition, and the algorithm performs n - 1
additions, so C(n) = n - 1 ∈ Θ(n).
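For comparison, minimal Python sketches of both versions (the function names are mine); the definition-based recursion repeats work exponentially, while the bottom-up version performs only n - 1 additions:

def fib_recursive(n):
    """Definition-based recursion: exponential number of additions."""
    if n <= 1:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    """Bottom-up computation using n - 1 additions."""
    f = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]

print(fib_recursive(9), fib_iterative(9))  # 34 34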
Questions
1) Write an Algorithm for sequential search. Discuss Worst case, best
case and average case efficiencies.
2) Discuss the three asymptotic notations.
3) List the basic asymptotic efficiency classes.
4) With an example, discuss the mathematical analysis of a nonrecursive
algorithm.
5) With an example, discuss the mathematical analysis of a recursive
algorithm.
Brute Force
Brute force method:
It is a straightforward approach to solving a problem, usually based directly
on the problem's statement and the definitions of the concepts involved.
Ex:
1. Computing aⁿ (a sketch is given after this list)
2. Computing n!
3. Sequential search
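A minimal Python sketch of the first example, computing aⁿ by repeated multiplication for a nonnegative integer n (the function name is mine):

def power(a, n):
    """Brute-force a^n: multiply a by itself n times (n >= 0 assumed)."""
    result = 1
    for _ in range(n):
        result *= a
    return result

print(power(2, 10))  # 1024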
Selection Sort
1) We start by scanning the entire list to find the smallest element and
exchange it with the first element.
2) This puts the smallest element in its final position in the sorted list, i.e.
A[0].
3) Then we scan the list starting with the second element, to find the
smallest among the remaining n-1 elements, and exchange it with the second
element.
4) This puts the second smallest element in its final position, i.e. A[1].
5) In general, on the ith pass (i going from 0 to n-2), the algorithm searches
for the smallest item among the last n-i elements and swaps it with A[i].
ALGORITHM SelectionSort(A[0..n-1])
for i ← 0 to n-2 do
    min ← i
    for j ← i+1 to n-1 do
        if A[j] < A[min]
            min ← j
    swap A[i] and A[min]
Analysis:
The basic operation is the comparison A[j] < A[min], so
C(n) = ∑[i=0..n-2] ∑[j=i+1..n-1] 1 = n(n-1)/2 ∈ Θ(n²).

Ex: To sort 10 5 2 0 4 (indices 0 to 4):
Pass i=0: scanning j=1..4 leaves min=3 (value 0); swap A[0] and A[3] → 0 5 2 10 4
Pass i=1: scanning j=2..4 leaves min=2 (value 2); swap A[1] and A[2] → 0 2 5 10 4
Pass i=2: scanning j=3..4 leaves min=4 (value 4); swap A[2] and A[4] → 0 2 4 10 5
Pass i=3: scanning j=4 leaves min=4 (value 5); swap A[3] and A[4] → 0 2 4 5 10
The list is now sorted. A Python sketch follows.
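A Python sketch of selection sort as traced above (in-place; the function name is mine):

def selection_sort(a):
    """Sort a in place: on pass i, put the smallest remaining item at a[i]."""
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]

data = [10, 5, 2, 0, 4]
selection_sort(data)
print(data)  # [0, 2, 4, 5, 10]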
Bubble Sort
1) In this technique, two successive items A[j] and A[j+1] are exchanged
whenever A[j] > A[j+1].
2) After the first pass, the largest element has bubbled up to its final position
at the end of the list.
3) The next pass bubbles up the second largest element, and so on, until,
after n-1 passes, the list is sorted.

ALGORITHM BubbleSort(A[0..n-1])
for i ← 0 to n-2 do
    for j ← 0 to n-2-i do
        if A[j+1] < A[j] swap A[j] and A[j+1]
Ex: bubble sort applied to the list 40, 50, 30, 20, 10 (array contents after each exchange):
Pass 1: 40 50 30 20 10 → 40 30 50 20 10 → 40 30 20 50 10 → 40 30 20 10 50
Pass 2: 30 40 20 10 50 → 30 20 40 10 50 → 30 20 10 40 50
Pass 3: 20 30 10 40 50 → 20 10 30 40 50
Pass 4: 10 20 30 40 50
The number of key comparisons for bubble sort is the same for all arrays of
size n.
C(n) = ∑[i=0..n-2] ∑[j=0..n-2-i] 1 = ∑[i=0..n-2] [(n-2-i) - 0 + 1] = ∑[i=0..n-2] (n-1-i) = n(n-1)/2 ∈ Θ(n²).
If a pass through the list makes no exchanges, the list has been sorted
and we can stop the algorithm.
Although the new version runs faster on some inputs, it is still Θ(n²) in
the worst and average cases. In the best case (an already sorted list) it makes
only n - 1 comparisons, i.e. Θ(n).
ALGORITHM BubbleSort(A[0..n-1])
for i ← 0 to n-2 do
    count ← 0
    for j ← 0 to n-2-i do
        if A[j+1] < A[j]
            swap A[j] and A[j+1]
            count ← count + 1
    if count = 0
        return
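A Python sketch of the improved version, which stops when a pass makes no swaps (the function name is mine):

def bubble_sort(a):
    """Sort a in place; stop early if a whole pass performs no exchanges."""
    n = len(a)
    for i in range(n - 1):
        count = 0                      # number of swaps in this pass
        for j in range(n - 1 - i):
            if a[j + 1] < a[j]:
                a[j], a[j + 1] = a[j + 1], a[j]
                count += 1
        if count == 0:                 # no exchanges: the list is already sorted
            return

data = [40, 50, 30, 20, 10]
bubble_sort(data)
print(data)  # [10, 20, 30, 40, 50]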
Sequential Search
The algorithm simply compares successive elements of a given list
with a given search key until either a match is encountered (successful
search) or the list is exhausted without finding a match (unsuccessful search).
If we append the search key to the end of the list, the search for the
key will always be successful, and therefore we can eliminate the check for
the list's end on each iteration of the algorithm.
ALGORITHM SequentialSearch2(A[0..n], K)
// Sequential search with the search key appended as a sentinel.
// Output: if found, returns the position where the element occurs; otherwise returns -1.
A[n] ← K
i ← 0
while A[i] ≠ K do
    i ← i + 1
if i < n return i
else return -1
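A Python sketch of the sentinel version (the key is appended so the loop needs no bounds check; the function name is mine):

def sequential_search_sentinel(a, key):
    """Append key as a sentinel, scan until it is found, report -1 if only the sentinel matched."""
    a = a + [key]          # work on a copy extended with the sentinel
    i = 0
    while a[i] != key:
        i += 1
    return i if i < len(a) - 1 else -1

print(sequential_search_sentinel([9, 4, 7, 2], 7))  # 2
print(sequential_search_sentinel([9, 4, 7, 2], 5))  # -1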
Matrix Multiplication
ALGORITHM MatrixMultiplication(A[0..n-1, 0..n-1], B[0..n-1, 0..n-1])
// Multiplication of two n × n matrices; Output: C = A * B
for i ← 0 to n-1 do
    for j ← 0 to n-1 do
        C[i, j] ← 0
        for k ← 0 to n-1 do
            C[i, j] ← C[i, j] + A[i, k] * B[k, j]
return C
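A direct Python sketch of the definition-based algorithm (the function name is mine):

def matrix_multiply(A, B):
    """Multiply two n x n matrices by the definition: C[i][j] = sum over k of A[i][k] * B[k][j]."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19.0, 22.0], [43.0, 50.0]]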
String Matching
Given a string of n characters called the text and a string of m
characters (m ≤ n) called the pattern, find a substring of the text
that matches the pattern.
ALGORITHM BruteForceStringMatch(T[0..n-1], P[0..m-1])
for i ← 0 to n-m do
    j ← 0
    while j < m and P[j] = T[i+j] do
        j ← j + 1
    if j = m
        return i
return -1
Solving the problem:
1. Align the pattern against the first m characters of the text and start
matching the corresponding pairs of characters from left to right until
either all m pairs of characters match (then the algorithm can
stop) or a mismatching pair is encountered.
2. In the latter case, the pattern is shifted one position to the right and
character comparisons are resumed.
3. Comparison starts again with the first character of the pattern and its
counterpart in the text (see the Python sketch after this list).
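A Python sketch of this left-to-right shifting scheme, applied to an illustrative text and pattern (the function name is mine):

def brute_force_string_match(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        j = 0
        while j < m and pattern[j] == text[i + j]:
            j += 1
        if j == m:          # all m characters matched
            return i
    return -1

print(brute_force_string_match("NOBODY NOTICED HIM", "NOT"))  # 7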
Typically, the algorithm shifts the pattern after just a single character
comparison.
The worst case occurs when the algorithm has to make all m
comparisons before shifting the pattern, and this can happen for each of the
n-m+1 tries.
Cworst(n, m) = ∑[i=0..n-m] ∑[j=0..m-1] 1 = ∑[i=0..n-m] m = m(n-m+1) ∈ Θ(nm).
Questions
1. Explain the brute force method. Write an algorithm to sort an array using
the brute force method and analyze its efficiency.