
Analysis and Design of Algorithms

Introduction
Objective
Algorithms are at the core of computer science; computer programs would not exist
without them. Another reason for studying algorithms is that doing so develops
analytical skills.

Algorithm
An algorithm is a sequence of unambiguous instructions for obtaining a required
output for any legitimate input in a finite amount of time.
The name comes from the Persian mathematician Abu Ja'far Muhammad ibn Musa
al-Khwarizmi.

Criteria
 Input: An algorithm can take zero or more inputs.
 Output: It should produce one or more outputs.
 Definiteness: Each instruction should be clear and unambiguous.
 Finiteness: The algorithm must terminate in a finite number of steps for all cases.
 Effectiveness: Every instruction must be feasible (basic enough to be carried out).

Note:
 Program is the expression of an algorithm in a programming
language.
 Debugging is the process of executing a program on sample data sets to
determine whether faulty results occur. It checks for the presence of errors,
not for their absence.
 Profiling: Executing the correct program and measuring its time and space
requirements.


Notion of Algorithm

 The non-ambiguity requirement for each step of an algorithm cannot
be compromised.

 The range of inputs for which an algorithm works has to be specified


carefully.

 The same algorithm can be represented in several different ways.

 Several algorithms for solving the same problem may exist.

 Algorithm for the same problem can be based on very different ideas
and can solve the problem with dramatically different speeds.

Euclid’s algorithm for computing gcd (m, n):


Step 1. If n = 0, return the value of m as the answer and stop; otherwise proceed to Step 2.
Step 2. Divide m by n and assign the value of the remainder to r.
Step 3. Assign the value of n to m and the value of r to n. Go to Step 1.

ALGORITHM Euclid(m,n)


// Computes gcd (m,n) by Euclid’s Algorithm.


// Input: Two nonnegative, not-both-zero integers m and n.
// Output: gcd of m and n.
while n ≠ 0 do
r ← m mod n
m ← n
n ← r
return m

For example, gcd(60,24)=gcd(24,12)=gcd(12,0)=12
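As an illustration (not part of the original notes), a minimal Python sketch of Euclid's algorithm; the function name gcd_euclid is ours:

def gcd_euclid(m, n):
    # Repeatedly replace (m, n) by (n, m mod n) until n becomes 0.
    while n != 0:
        m, n = n, m % n
    return m

# Example: gcd_euclid(60, 24) returns 12, matching the computation above.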

Consecutive Integer Checking Algorithm:


Step 1. Assign the value of min {m, n} to t.
Step 2. Divide m by t.
If the remainder is zero go to Step 3. Otherwise go to step 4.
Step 3. Divide n by t.
If the remainder is zero then gcd=t and Stop. Otherwise go to
step 4
Step 4. Decrement the value of t by 1 and go to Step 2.
Input specification: both m and n must be positive integers (unlike Euclid's algorithm, this method does not work when one of the numbers is 0).

For example, for the numbers 60 and 24 the algorithm will first try t = 24, then 23,
and so on until it reaches 12, where it stops.
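A short Python sketch of this consecutive-integer-checking idea (assuming, as stated above, that both inputs are positive; the function name gcd_consecutive is ours):

def gcd_consecutive(m, n):
    # Start from min(m, n) and count down until a common divisor is found.
    t = min(m, n)
    while m % t != 0 or n % t != 0:
        t -= 1
    return t

# Example: gcd_consecutive(60, 24) tries t = 24, 23, ... and stops at 12.

Note that, unlike Euclid's algorithm, this sketch would fail if one of the inputs were 0, which is why the input specification above matters.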

School Procedure to find out gcd (m,n):


Step 1. Find the prime factorization of m.
Step 2. Find the prime factorization of n.
Step 3. Identify all the common factors in the two prime expansions. (If p is a
common factor occurring pm and pn times in m and n respectively, it should
be repeated min(pm, pn) times.)
Step 4. Compute the product of all the common factors and return it as gcd(m, n).


Algorithm to generate prime numbers up to a given integer n.


ALGORITHM SieveOfEratosthenes(n)
// Input: A positive integer n ≥ 2
// Output: Array L of all prime numbers ≤ n

for p ← 2 to n do
A[p] ← p
for p ← 2 to ⌊√n⌋ do
if A[p] ≠ 0 // p has not been eliminated yet
j ← p * p
while j ≤ n do
A[j] ← 0 // mark the multiples of p as non-prime
j ← j + p
// copy the remaining (prime) elements of A to array L
i ← 0
for p ← 2 to n do
if A[p] ≠ 0
L[i] ← A[p]
i ← i + 1
return L
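A Python rendering of the sieve, kept close to the pseudocode above (the function name sieve_of_eratosthenes is ours):

def sieve_of_eratosthenes(n):
    # a[p] holds p while p is still a candidate prime, and 0 once eliminated.
    a = list(range(n + 1))
    p = 2
    while p * p <= n:          # p runs from 2 to floor(sqrt(n))
        if a[p] != 0:
            j = p * p
            while j <= n:
                a[j] = 0       # mark multiples of p as non-prime
                j += p
        p += 1
    # collect the remaining (prime) entries from 2 to n
    return [a[p] for p in range(2, n + 1) if a[p] != 0]

# Example: sieve_of_eratosthenes(25) returns [2, 3, 5, 7, 11, 13, 17, 19, 23].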

Thus, by the middle-school procedure, for the numbers 60 and 24 we get


60= 2*2*3*5
24= 2*2*2*3
gcd(60,24)=2*2*3=12.


Fundamentals of Algorithmic Problem Solving

Understand the problem:


 Read the problem description carefully and ask questions if you have any
doubts about the problem, work a few small examples by hand, think about
special cases, and ask questions again if needed.
 It is very important to specify exactly the range of inputs an algorithm
needs to handle.

Ascertaining the capabilities of a Computational device:


 Most of the algorithms today are destined to be programmed for a
computer closely resembling the von Neumann Machine architecture


where instructions are executed sequentially. Such algorithms are called


Sequential algorithms.
 Some newer computers can execute operations in parallel. Algorithms
that take advantage of this capability are called Parallel algorithms.

Choosing between the Exact and Approximate Problem solving:


 A problem can be solved either exactly or approximately.
 Sometimes it is necessary to opt for an approximation algorithm because:
 There are some problems that cannot be solved exactly, such as computing
square roots.
 An algorithm for solving a problem exactly can be unacceptably slow because
of the problem's intrinsic complexity, as with the travelling salesman problem (TSP).

Deciding appropriate data structure:


 Some algorithms do not demand any ingenuity in representing their
inputs.
 But some of the algorithm designing techniques depends intimately on
structuring data specifying a problem instance.
 In object oriented programming data structures remain crucially
important for both design and analysis of algorithms.

Algorithm design techniques:


 It is a general approach to solving a problem algorithmically that is
applicable to a variety of problems from different areas of computing.
 They provide guidance for designing algorithms for new problems.
 Algorithm design technique makes it possible to classify algorithms
according to an underlying idea.

Methods of Specifying an Algorithm:


 Using Natural Language: This method leads to ambiguity. Clear
description of algorithm is difficult.
 Using Pseudo codes: It is a mixture of natural language and


programming language like constructs. It is more precise than a natural


language.
 Using Flow Charts: It is a method of expressing an algorithm by a
collection of connected geometric shapes containing the description of
algorithm.

Proving algorithms correctness:


 We have to prove that the algorithm yields required result for every
legitimate input in a finite amount of time.
 For some algorithms, a proof of correctness is quite easy ; for others, it
can be quite complex.

 An algorithm is incorrect if it fails for even one legitimate input; in that
case the algorithm has to be reconsidered or redesigned.

Analyzing an Algorithm:
 Time efficiency: How fast the algorithm runs.
 Space efficiency: How much extra space the algorithm needs.
 Simplicity: Simpler algorithms are easier to understand.
 Generality: It is easier to design an algorithm for a problem posed in
more general terms.

Coding an algorithm:
 Good Algorithm is a result of repeated effort and rework
 Most of the algorithms are destined to be implemented on computer
programs.
 Not only implementation but also the optimization of the code is
necessary. This increases the speed of operation.
 A working program provides an opportunity for the empirical analysis
of the underlying algorithm.

Important problem types


1. Sorting
2. Searching
3. String Processing
4. Graph Problems
5. Combinatorial Problems
6. Geometric Problems
7. Numerical Problems

Sorting:
 It refers to rearranging the items of a given list in ascending order. Ex:
We may need to sort numbers, characters, strings, records etc.
 We need to choose a piece of information by which the items are ordered.
This piece of information is called a key.
 The important use of sorting is searching.
 There are many algorithms for sorting. Although some algorithms are
indeed better than others but there is no algorithm that would be the best
solution in all situations.
 A sorting algorithm is called stable if it preserves the relative order of
any two equal elements in its input.
 An Algorithm is said to be in place if it does not require extra memory,
except possibly for a few memory units.

Searching:
 It deals with finding a given value called search key, in a given set.
 There are several algorithms ranging from sequential search to binary
search.
 Some algorithms are based on representing the underlying set in a
different form more conducive to searching. They are used in large
databases.
 Some algorithms work faster than others but require more memory.
 Some are very fast only in sorted arrays.


String Processing:
 A string is a sequence of characters.
 String processing algorithms have been important for computer science
for a long time in conjunction with computer languages and compiling
issues.
 String matching is one kind of such problem.

Graph Problems
 A graph can be thought of as a collection of points called vertices, some
of which are connected by line segments called edges.
 They can be used for modeling a wide variety of real-life applications.
 Basic graph algorithm includes graph traversal algorithms, shortest
path algorithms and topological sorting for graphs with directed
edges.

Combinatorial Problems:
 These problems ask to find a combinatorial object such as a permutation,
a combination, or a subset – that satisfies certain constraints and has
some desired property.
 These are the most difficult problems.
 This is because the number of combinatorial objects grows extremely fast with a
problem's size, reaching unimaginable magnitudes even for moderate-sized
instances.
 For most such problems there are no known algorithms that solve them exactly in
an acceptable amount of time.
Geometric problems:
 They deal with geometric objects such as points, lines, and polygons.
 These algorithms are used in developing applications for Computer


graphics, robotics and tomography.

Numerical Problems:
 These are the problems that involve mathematical objects of continuous
nature:
 Solving equations, system of equations, computing definite integrals,
evaluating functions and so on.
 The majority of such problems can be solved only approximately.
 Such problems require manipulating real numbers, which can be
represented in computer only approximately.
 A large number of arithmetic operations can lead to accumulated round-off errors,
which can drastically distort the output.
Fundamental Data Structures

A data structure can be defined as a particular scheme of organizing related


data items.

Linear Data Structures:

1. Arrays
2. Linked Lists.

Arrays:

An array is a sequence of n items of the same data type that are stored
contiguously in computer memory and made accessible by specifying a value of the
array index. Arrays are often used to store strings (character strings or binary
strings, i.e. bit strings).

In arrays, the access time is constant regardless of where in the array the element
is, whereas in linked lists the access time depends on the position of the element
in the list.

item[0] item[1] … item[n-1]

Array of n elements

Linked List:

Singly Linked List: It is a sequence of zero or more elements called nodes, each
containing two kinds of information: some data and one or more links called
pointers to other nodes of the linked list. A special pointer called 'null' is used
to indicate the absence of a next node.

[Item 0] → [Item 1] → … → [Item n-1] → null

Singly linked list of n elements


Doubly linked list: Here every node, except the first and the last, contains
pointers to both its successor and its predecessor.

null ← [Item 0] ⇄ [Item 1] ⇄ … ⇄ [Item n-1] → null

Doubly linked list of n elements

Two special types of lists, stacks and queues, are particularly important.

A stack is a list in which insertions and deletions can be done only at one end,
called the top of the stack (LIFO: last-in, first-out).

A queue is a list in which insertion is done at one end, called the rear, and
deletion is done at the other end, called the front (FIFO: first-in, first-out).

Graphs: A graph G = (V, E) is defined by a pair of two sets: a finite set V of
items called vertices and a set E of pairs of these items called edges.

If the pairs of vertices are unordered, the graph is undirected (i.e. the edge
(u, v) is the same as (v, u)).

If the pairs of vertices are ordered (e.g. the edge (u, v) is directed from u to v),
then the graph is called directed.

Directed graphs are also called digraphs.


Loops: Edges connecting vertices to themselves.

Complete graph: A graph with every pair of its vertices connected by an


edge is called complete graph.

A graph with relatively few of its possible edges missing is called dense. A graph
with few edges relative to the number of its vertices is called sparse.

Graph representations:

1. The adjacency matrix:


2. Adjacency linked lists:
The adjacency matrix:

The adjacency matrix of a graph with n vertices is an n by n Boolean matrix


with one row and column for each of the graph’s vertices in which the
element in the ith row and jth column is equal to 1 if there is an edge from the
ith vertex to jth vertex and equal to zero if there is no edge.

[Figure: a sample graph on the vertices a, b, c, d, e, f and its adjacency matrix]
Adjacency linked lists:

It is a collection of linked lists, one for each vertex, that contain all the
vertices adjacent to the list's vertex. Usually such a list starts with a header
identifying the vertex for which the list is compiled.

If a graph is sparse, the adjacency linked list representation may use less space
than the corresponding adjacency matrix, despite the extra storage consumed by the
pointers of the linked lists. The situation is exactly the opposite for dense
graphs.
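For illustration only, one common way to hold an adjacency-list representation in Python is a dictionary mapping each vertex to the list of its adjacent vertices; the small graph below is a made-up example, not the one from the figure:

graph = {
    'a': ['b', 'c'],
    'b': ['a', 'd'],
    'c': ['a', 'd'],
    'd': ['b', 'c'],
}

# The vertices adjacent to 'a' are simply graph['a'], and the degree of 'a'
# is len(graph['a']).  An adjacency matrix for the same graph would instead be
# a 4-by-4 list of lists of 0s and 1s.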

Weighted graph:

It is a graph with numbers assigned to its edges. These numbers are called
weights. Many real-life situations can be modeled with weighted graphs.

Ex: Finding the shortest path between two points in a transportation network.

Paths and Cycles:

A path from vertex u to vertex v of a graph can be defined as a sequence of


adjacent vertices that starts with u and ends with v.

If all vertices of a path are distinct, the path is called a simple path.


The length of a path is the total number of vertices in the vertex sequence
defining that path minus one, which is the same as the number of edges in the path.

Directed path:
A sequence of vertices in which every consecutive pair of the vertices is
connected by an edge directed from the vertex listed first to the vertex listed
next.

Connected Graph:
A graph is said to be connected if for every pair of its vertices u and v there
is a path from u to v.

Cycle:
It is a simple path of a positive length that starts and ends at the same
vertex.
A graph with no cycle is said to be acyclic.

Tree:
It’s a connected acyclic graph.

Forest:
It’s a graph that has no cycles but not necessarily connected.

Rooted Tree:
In a tree it is possible to select an arbitrary vertex and consider it as the
root. Such a tree is called a rooted tree.


[Figure: (a) A free tree. (b) Its transformation into a rooted tree.]

A rooted tree is depicted by placing the root at the top.

 The depth of a vertex v is the length of the simple path from the root to v.
The height of the tree is the length of the longest simple path from the root to a leaf.
 For any vertex v in a tree T, all the vertices on the simple path from the root
to that vertex are called ancestors of v.
 If (u, v) is the last edge of the simple path from the root to vertex v (and u ≠
v), u is said to be the parent of v and v is called a child of u.
 Vertices that have the same parent are said to be siblings.
 A vertex with no children is called a leaf.
 Vertex with at least one child is called parental.
 All the vertices for which a vertex v is an ancestor are said to be
descendants of v.
 A vertex v with all its descendants is called the subtree of T rooted at that
vertex.

Ordered Tree:

An ordered tree is a rooted tree in which all the children of each vertex are
ordered. It is convenient to assume that in a tree's diagram, all the children
are ordered left to right.

Binary Tree:
A binary tree can be defined as an ordered tree in which every vertex has
no more than two children and each child is designated as either a left child


or a right child of its parent. The subtree with its root at the left (right) child
of a vertex is called the left (right) subtree of that vertex.

Binary Search Tree:


In a binary tree, a number can be assigned to each vertex. If for every vertex the
number is larger than all the numbers in its left subtree and smaller than all the
numbers in its right subtree, the tree is called a binary search tree.

Computer Representation:

First child-next sibling representation


[Figure: (a) First child–next sibling representation of a tree. (b) Its binary tree
representation.]

Sets and Dictionaries:

A set can be described as an unordered collection (possibly empty) of distinct


items called elements of the set.

Ex: S = {2, 3, 5, 7}

Computer Implementation:

2 methods:


1. The first considers only sets that are subsets of some large set U called
the universal set. If set U has n elements, then any subset S of U can
be represented by a bit string of size n, called a bit vector.
a. U = {1,2,3,4,5,6,7,8,9}, then

S = {2, 3, 5, 7}

Bit String 011010100.

2. Using a linked list structure to indicate the set's elements. This can only be
used for finite sets.
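A small Python sketch of the bit-vector idea for U = {1, ..., 9} and S = {2, 3, 5, 7}; position i-1 of the list corresponds to element i of U:

U_SIZE = 9
S = {2, 3, 5, 7}

# Build the bit vector: entry i-1 is 1 exactly when i belongs to S.
bits = [1 if i in S else 0 for i in range(1, U_SIZE + 1)]

print(''.join(str(b) for b in bits))   # prints 011010100, as in the example above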

Multiset:
An unordered collection of items that are not necessarily distinct.

Dictionary:
A data structure that implements searching for a given item, adding a new item,
and deleting an item is called a dictionary.

Abstract Data Types:


A set of abstract objects representing data items with a collection of
operations that can be performed on them.

Questions:
1. Explain the term algorithm, with its properties. Give one example.

2. Explain each step in the Fundamentals of Algorithmic Problem Solving.

3. What are the important problem types in ADA? Explain them.

4. Explain the terms with one example--

a) Binary Tree

b) Complete Binary Tree

c) Binary Search Tree

d) Siblings


e) Forest

5. When do you call a sorting algorithm stable, and when do you call an algorithm in-place?

Analysis of Algorithm Efficiency


Objective
Analysis of an algorithm refers to the investigation of the algorithm
with respect to its running time and memory space. As there are
many methods to solve a problem, it is necessary to know which
method is efficient. To express efficiency we use three
notations: O ("big oh"), Ω ("big omega"), and Θ ("big theta").

Issues
 Simplicity


 Generality

 Space requirements

 Time requirements

 Determining Time Complexity is very important.

Analysis frame work


There are two kinds of efficiency

Time efficiency: It indicates how fast an algorithm in question runs.

Space efficiency: It deals with the extra space the algorithm requires.

Nowadays the amount of extra space required by an algorithm is typically not of
much concern. The time issue, however, has not diminished to the same extent;
therefore most algorithm textbooks concentrate on time efficiency.

Measuring an input size


Almost all algorithms run longer on larger inputs: it takes longer to sort larger
arrays or to multiply larger matrices. Therefore it is logical to investigate
efficiency as a function of a parameter n, where n is the input size. This holds
in the case of sorting, searching, and finding a list's smallest element.

The same holds for a spell-checking algorithm, whose running time depends on the
number of characters in the input text.

For algorithms that work with a single number n (for example, checking whether n
is prime), computer scientists prefer measuring the input size by the number b of
bits in n's binary representation:

b = ⌊log₂ n⌋ + 1.

Units of measuring Running Time


 One can use a physical unit of time, e.g. seconds or milliseconds.

 There are drawbacks to this approach: the measured running time depends on

o the quality of the compiler in generating the machine instructions,

o the particular program used to implement the algorithm,


o The clock speed of the system.

Since we are dealing with efficiency of the algorithm it should be


independent of all these factors.

 One possible approach is to count the number of times each of the


algorithm’s operation is executed.

 Identify the basic operation (the most important operation of algorithm)

 The basic operation contributes most to the running time of the algorithm.

 It is the most time consuming operation in the algorithm’s inner most


loop.

 Ex: Key comparison operation in Searching algorithm

 Knowing the basic-operation count lets us estimate how the running time scales;
for example, for an algorithm whose basic operation is executed a quadratic number
of times, doubling the input size makes the execution roughly 4 times longer.

 The analysis framework ignores multiplicative constants and concentrates only
on the order of growth of the basic-operation count.

Orders of growth:
 By using small inputs we cannot distinguish the efficient
algorithm from the inefficient ones.

 For example, in the case of Euclid's algorithm, its superior efficiency becomes
clear only for large inputs.


 The function growing the slowest among these is the logarithmic function.

 Therefore we should expect a program implementing an algorithm with a
logarithmic basic-operation count to run practically instantaneously on inputs of
all realistic sizes.

 At the other end, the exponential and factorial functions grow so fast that
their algorithms are practical only for very small inputs.

Efficiencies
There are many algorithms for which the running time depends not only on the input
size but also on the particular data items in the input.

Ex: sequential Search

ALGORITHM SequentialSearch(A[0..n-1], K)

//Searches for a given value in a given array by sequential search

//Input: An array A[0..n — 1] and a search key K

//Output: Returns the index of the first element of A that matches K or -1 if


there are no matching elements

i ←0

while i < n and A[i] ≠ K do

i← i+1

if i<n return i

else return -1
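The same algorithm written as a small Python sketch (the function name sequential_search is ours):

def sequential_search(a, key):
    # Scan the list from left to right; return the index of the first match,
    # or -1 if the key is not present.
    i = 0
    while i < len(a) and a[i] != key:
        i += 1
    return i if i < len(a) else -1

# Example: sequential_search([9, 4, 7], 7) returns 2; searching for 5 returns -1.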

Worst-case efficiency:
 The worst-case efficiency of an algorithm is its efficiency for the worst
case input of size n, which is an input (or inputs) of size n for which the
algorithm runs the longest among all possible inputs of that size.

 We analyze the algorithm to see what kind of inputs yield the largest
value of basic operation’s count c(n) among all possible inputs of size n
and compute this worst case value.


Cworst (n) = n.

 It guarantees that for any instance of size n, the running time will not
exceed Cworst (n) .

Best-case efficiency:
 The best-case efficiency of an algorithm is its efficiency for the
best case input of size n, which is an input (or inputs) of size n
for which the algorithm runs the fastest among all possible
inputs of that size.

For example, in the case of sequential search, best-case inputs are lists of size n
with the first element equal to the search key; accordingly,

 Cbest (n) = 1.

Note: The best case doesn’t mean the small input; it means the input of size n for
which the algorithm runs the fastest.

The analysis of the best-case efficiency is not nearly as important as that of the
worst-case efficiency.

Average case efficiency:


The average case efficiency of an algorithm is its efficiency for the
“random” or “typical” input of size n.

Cavg(n) = p(n+1)/2 + n(1-p)   (this formula is derived below for sequential search)

Consider sequential search

Standard assumptions:
The probability of a successful search is p (0 ≤ p ≤ 1), and the probability of the
first match occurring in the ith position is the same for every i, namely p/n
(here i is also the number of key comparisons made in that case).

The probability of an unsuccessful search is (1 - p).

Cavg(n) = [1·(p/n) + 2·(p/n) + … + n·(p/n)] + n·(1-p)

= (p/n)·[1 + 2 + … + n] + n·(1-p)

= (p/n)·n(n+1)/2 + n·(1-p)


= p(n+1)/2 + n(1-p)

This is quite reasonable.

If p = 1 (the search must be successful), the average number of key comparisons is

(n+1)/2, i.e. about half of the list.

If p = 0 (the search must be unsuccessful), the number of key comparisons is n.

Finding the average-case efficiency is harder than finding the worst- and best-case
efficiencies, but for some algorithms it is important.

Amortized efficiency:
It applies not to a single run of an algorithm but rather to a sequence
of operations performed on the same data structure.

It turns out that in some situations a single operation can be expensive, but the
total time for an entire sequence of n operations is always significantly better
than the worst-case efficiency of that single operation multiplied by n.
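A classic illustration of amortized efficiency, not taken from these notes, is appending to a dynamic array that doubles its capacity when full: a single append can trigger an expensive copy of all stored items, yet n appends cost only O(n) in total, so the amortized cost per append is constant. A rough Python sketch (class and method names are ours):

class DynamicArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity

    def append(self, x):
        if self.size == self.capacity:        # expensive resize, but rare
            self.capacity *= 2
            new_data = [None] * self.capacity
            new_data[:self.size] = self.data  # copy the existing items
            self.data = new_data
        self.data[self.size] = x
        self.size += 1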

Three Asymptotic Notations:

O-notation:
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if
t(n) is bounded above by some constant multiple of g(n) for all
large n, i.e., if there exist some positive constant c and some
nonnegative integer n₀ such that

t(n) ≤ c·g(n) for all n ≥ n₀

Ex: 100n + 5 ∈ O(n²) with n₀ = 5:

100n + 5 ≤ 100n + n (for all n ≥ 5)

= 101n

≤ 101n²

So c = 101 and n₀ = 5.

For the same function we can also choose other values for the constants c and n₀.

For example,


100n + 5 ≤ 100n + 5n = 105n (for all n ≥ 1), with c = 105 and n₀ = 1.

Big-oh notation: t(n) ∈ O(g(n))

Thus we have:

O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!)
Ω-notation:
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is
bounded below by some positive constant multiple of g(n) for all large n, i.e.,
if there exist some positive constant c and some nonnegative integer n₀
such that:

t(n) ≥ c·g(n) for all n ≥ n₀.

An example: to prove that n³ ∈ Ω(n²), note that

n³ ≥ n² for all n ≥ 0,

i.e. we can select c = 1 and n₀ = 0.


Θ-notation:
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is
bounded both above and below by some positive constant multiples of g(n)
for all large n, i.e., if there exist some positive constants c₁ and c₂ and some
nonnegative integer n₀ such that

c₂·g(n) ≤ t(n) ≤ c₁·g(n) for all n ≥ n₀.

Asymptotic growth rate


 A way of comparing functions that ignores constant factors and small
input sizes

 O (g(n)): class of functions f(n) that grow no faster than g(n).

 Θ (g(n)): class of functions f(n) that grow at same rate as g(n).

 Ω(g(n)): class of functions f(n) that grow at least as fast as g(n).


Basic Asymptotic Efficiency Classes

Class (name): comments

1 (constant): Sort of best-case efficiency; very few reasonable examples can be
given, since an algorithm's running time typically goes to infinity when its input
size grows infinitely large.

log n (logarithmic): Typically a result of cutting a problem's size by a constant
factor on each iteration of the algorithm. Note that a logarithmic algorithm cannot
take into account all its input (or even a fixed fraction of it): any algorithm that
does so will have at least linear running time.

n (linear): Algorithms that scan a list of size n (e.g., sequential search) belong
to this class.

n log n (n-log-n): Many divide-and-conquer algorithms, including mergesort and
quicksort in the average case, fall into this category.

n² (quadratic): Typically characterizes the efficiency of algorithms with two
embedded loops (see the next section). Elementary sorting algorithms and certain
operations on n-by-n matrices are standard examples.

n³ (cubic): Typically characterizes the efficiency of algorithms with three
embedded loops. Several nontrivial algorithms from linear algebra fall into this
class.

2ⁿ (exponential): Typical for algorithms that generate all subsets of an n-element
set. Often the term "exponential" is used in a broader sense to include this and
faster orders of growth as well.

n! (factorial): Typical for algorithms that generate all permutations of an
n-element set.


Useful property involving the asymptotic notations:


If t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}).

Proof:

Since t₁(n) ∈ O(g₁(n)), there exist some constant c₁ and some nonnegative integer
n₁ such that

t₁(n) ≤ c₁·g₁(n) for all n ≥ n₁.

Since t₂(n) ∈ O(g₂(n)), there exist some constant c₂ and some nonnegative integer
n₂ such that

t₂(n) ≤ c₂·g₂(n) for all n ≥ n₂.

Let us denote c₃ = max(c₁, c₂) and consider n ≥ max(n₁, n₂) so that we can use both
inequalities. Adding the two inequalities above yields the following:

t₁(n) + t₂(n) ≤ c₁·g₁(n) + c₂·g₂(n)

≤ c₃·g₁(n) + c₃·g₂(n)

≤ 2c₃·max{g₁(n), g₂(n)}.

Hence t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}), with the constants c and n₀ required by
the definition being 2c₃ and max(n₁, n₂), respectively.

Using limits for comparing order of growth

Basic efficiency classes:


Classifying algorithms by their asymptotic order of growth neglects multiplicative
constants and small input sizes, so the classification should be applied with care.
Typical representatives of the basic efficiency classes:

1 — constant
log n — cutting the problem size (e.g. in half) at each step
n — linear scan
n log n — merge sort, quick sort
n² — two nested loops
n³ — three nested loops
2ⁿ — generating all subsets of a set
n! — generating all permutations

Mathematical Analysis of a Nonrecursive Algorithm


1. Decide on a parameter(s) indicating an input's size.

2. Identify the algorithm's basic operation.

3. Check whether the number of times the basic operation is executed


depends only on the input size. If it also depends on some additional
property, determine the worst-case, average-case, and best-case
complexities separately.

4. Find out C(n) [the number of times the algorithm's basic operation is
executed.]

5. Using standard formulas establish the order of growth.

Ex: Finding the maximum and minimum values in an array (MaxMin).

Algorithm MaxMin(A[0……n-1])

// Compares consecutive pairs of elements and then compares the larger one with
// the current maximum and the smaller one with the current minimum.


// Input: An array A[0…..n-1] of n real numbers.

// Output: Gives minimum and maximum value in the array.

minval ← A[0]; maxval ← A[0]

for i ← 0 to n-2 do

    if A[i] ≤ A[i+1]

        if A[i] < minval
            minval ← A[i]
        if A[i+1] > maxval
            maxval ← A[i+1]

    else

        if A[i+1] < minval
            minval ← A[i+1]
        if A[i] > maxval
            maxval ← A[i]

return minval, maxval
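A Python sketch of the same pairwise scan (the function name max_min is ours):

def max_min(a):
    # The smaller element of each consecutive pair is checked against the
    # current minimum, the larger one against the current maximum.
    minval = maxval = a[0]
    for i in range(len(a) - 1):
        if a[i] <= a[i + 1]:
            if a[i] < minval:
                minval = a[i]
            if a[i + 1] > maxval:
                maxval = a[i + 1]
        else:
            if a[i + 1] < minval:
                minval = a[i + 1]
            if a[i] > maxval:
                maxval = a[i]
    return minval, maxval

# Example: max_min([3, 8, 1, 5]) returns (1, 8).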

Analysis:
Number of inputs: n elements.

Basic operation: element comparison.

Number of comparisons required:

Each iteration makes one comparison of the pair A[i] and A[i+1], and then, in
whichever branch is taken, two more comparisons against the current minimum and
maximum, i.e. 3 comparisons per iteration. The loop is executed n-1 times (it runs
only up to the last-but-one element).

Therefore,

C(n) = (n-1) × 3 = 3n - 3

∈ Θ(n)


Element uniqueness problem:


ALGORITHM Distinctelements (A[0..n - 1])

//Checks whether all the elements in a given array are distinct

//Input: An array A [0..n - 1]

//Output: Returns "true" if all the elements in A are distinct and "false" otherwise

for i ← 0 to n - 2 do

for j ← i + 1 to n - 1 do

if A[i] = A [j]

return false

return true

In the worst case (all elements distinct), the comparison is executed
Cworst(n) = (n-1) + (n-2) + … + 1 = n(n-1)/2 ∈ Θ(n²) times.
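A Python sketch of the same brute-force check (the function name distinct_elements is ours):

def distinct_elements(a):
    # Compare every element with all the elements that follow it.
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:
                return False
    return True

# Example: distinct_elements([1, 4, 2]) returns True; distinct_elements([1, 4, 1]) returns False.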

Matrix multiplication:
ALGORITHM MatrixMultiplication(A[0..n-1, 0..n-1], B[0..n-1, 0..n-1])

//Multiplies two square matrices of order n by the definition-based

//algorithm

//Input: Two n-by-n matrices A and B

//Output: Matrix C = AB

for i ← 0 to n-1 do

for j ← 0 to n-1 do

C[i, j] ← 0.0

for k ← 0 to n-1 do

C[i, j] ← C[i, j] + A[i, k] * B[k, j]

return C
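A Python sketch of the definition-based multiplication (the function name matrix_multiply is ours):

def matrix_multiply(A, B):
    # Multiplies two n-by-n matrices given as lists of lists.
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

# Example: matrix_multiply([[1, 2], [3, 4]], [[1, 0], [0, 1]]) returns [[1.0, 2.0], [3.0, 4.0]].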


The total number of multiplications M(n) is expressed by the following triple sum:

M(n) = Σ_{i=0}^{n-1} Σ_{j=0}^{n-1} Σ_{k=0}^{n-1} 1 = Σ_{i=0}^{n-1} Σ_{j=0}^{n-1} n = Σ_{i=0}^{n-1} n² = n³

Thus M(n) = n³ ∈ Θ(n³), and this count is the same for every pair of input matrices of order n.

Mathematical Analysis of a Recursive Algorithm


 Decide on a parameter (or parameters) indicating an input's size.

 Identify the algorithm's basic operation.

 Check whether the number of times the basic operation is executed


can vary on different inputs of the same size; if it can, the worst-case,
average-case, and best-case efficiencies must be investigated
separately.

 Set up a recurrence relation, with an appropriate initial condition, for


the number of times the basic operation is executed.

 Solve the recurrence or at least ascertain the order of growth of its


solution.

The factorial function F(n)=n!


 The factorial function F(n) = n! is defined for an arbitrary nonnegative integer n.

 n! = 1 · 2 · … · (n-1) · n = (n-1)! · n for n ≥ 1, and 0! = 1.


ALGORITHM F(n)
//Computes n! recursively.

//Input: A non-negative integer.

//Output: The value of n!

If n=0 return 1

else return F(n-1) * n
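A Python sketch of the recursive definition (the function name factorial is ours). Counting the basic operation (multiplication), the recurrence is M(n) = M(n-1) + 1 for n > 0 with M(0) = 0, which gives M(n) = n:

def factorial(n):
    # F(n) = F(n-1) * n with the initial condition F(0) = 1.
    if n == 0:
        return 1
    return factorial(n - 1) * n

# Example: factorial(5) returns 120.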

Recurrence relations

Defn: A recurrence relation for a sequence a₀, a₁, …, aₙ is an equation that relates
aₙ to some of its previous terms a₀, a₁, …, aₙ₋₁, i.e. it gives a recursive formula
for the sequence.

Finding an explicit formula for the sequence defined by a recurrence relation is
done by backward substitution (also called the substitution method).

History of Fibonacci:
 It was introduced by Leonardo Fibonacci in 1202 as a solution to a problem
about the growth of a rabbit population.

 Fibonacci-based techniques have also been used in attempts to predict prices of
stocks and commodities.

 In computer science, the worst-case inputs for Euclid's algorithm happen to be
consecutive elements of the Fibonacci sequence.

Algorithms for computing Fibonacci numbers


Since the Fibonacci numbers grow so fast, it is the time efficiency of the method
used for computing them that should be of primary concern.

Algorithm F(n)

//computes the nth Fibonacci number recursively by using its definition.

//input: A nonnegative integer n.


//output: the nth Fibonacci number.

if n ≤ 1 return n

else return F(n-1) + F(n-2)

Analysis:

Input size: n(the number)

Basic operation : addition.

Let A(n) be the number of additions needed to compute F(n). The numbers of
additions needed to compute F(n-1) and F(n-2) are A(n-1) and A(n-2), and one more
addition is needed to compute their sum.

Thus we get the recurrence relation for A(n):

A(n) = A(n-1) + A(n-2) + 1 for n > 1,

A(0) = 0, A(1) = 0.

The recurrence A(n) - A(n-1) - A(n-2) = 1 is quite similar to the recurrence for
F(n), but its right-hand side is not equal to zero; such recurrences are called
inhomogeneous recurrences.

We can reduce our inhomogeneous recurrence to a homogeneous one by

rewriting it as

[A(n) + 1] - [A(n-1) + 1] - [A(n-2) + 1] = 0

and substituting B(n) = A(n) + 1:

B(n) - B(n-1) - B(n-2) = 0 for n > 1,

B(0) = 1, B(1) = 1.

Now we can solve this recurrence in the same way.

We can observe that B(n) satisfies the same recurrence as F(n) except that it
starts with two 1s.

So we can say that

B(n) = F(n+1),

and A(n) = B(n) - 1

= F(n+1) - 1

= (1/√5) · [((1+√5)/2)^(n+1) - ((1-√5)/2)^(n+1)] - 1.

This is still an exponential function of n.

Since the number of additions grows exponentially, the recursive algorithm is
suitable only for small input sizes.

We can obtain a faster algorithm by simply computing the successive


elements of Fibonacci sequence iteratively.

F(4)

F(3) F(2)

F(2) F(1) F(1) F(0)

F(1) F(0)

Tree of recursive calls for computing the Fibonacci number F(4).

Algorithm Fib(n)

//computes the nth Fibonacci number iteratively by using its definition.

//input: A nonnegative integer n

//output: The nth Fibonacci number

F[0]←0; F[1]←1

For i←2 to n do


F[i]←F[i-1]+F[i-2]

Return F[n]
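A Python sketch of this bottom-up computation (the function name fib is ours):

def fib(n):
    # Computes F(n) iteratively, with F(0) = 0 and F(1) = 1.
    f = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]

# Example: fib(7) returns 13.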

Analysis
Input size: n

Basic operation: addition.

A(n) = Σ_{i=2}^{n} 1 = n - 2 + 1 = n - 1 ∈ Θ(n).

Questions
1) Write an Algorithm for sequential search. Discuss Worst case, best
case and average case efficiencies.
2) Discuss the three Asymptotic Notation:
3) List the Basic Asymptotic Efficiency Classes
4) With example discuss the Mathematical Analysis of a Nonrecursive
Algorithm:
5) With example discuss the Mathematical Analysis of a Recursive
Algorithm:


Brute Force
Brute force method:
 It is a straightforward approach to solving a problem, usually directly based on
the problem's statement and the definitions of the concepts involved.

Ex:

1. Computing aⁿ (for a given number a and a nonnegative integer n)

2. Computing n!

3. Sequential search

Approaches in Brute force Technique


1) Unlike some other strategies, brute force is applicable to a very
wide variety of problems.

It is used for many elementary but important algorithmic tasks such


as computing sum of n numbers, finding largest element in a list and
so on.

2) For some important problems the brute force approach yields

reasonable algorithms of at least some practical value, with no
limitation on instance size.


3) The expense of designing a more efficient algorithm may be


unjustifiable if only a few instances of problem need to be solved
and a brute force algorithm can solve those instances with
acceptable speed.

4) Even if too inefficient in general, a brute-force algorithm can still be


useful for solving small-size instances of a problem.

Finally, a brute-force algorithm can serve an important theoretical or


educational purpose, ex: as a yardstick with which to judge more
efficient alternatives for solving a problem.

Selection Sort
1) We start by scanning the entire list to find the smallest element and
exchange with the first element.

2) We put the smallest element in its final position in the sorted list, i.e. A[0].

3) Then we scan the list starting with the second element, to find the smallest
among the remaining n-1 elements and exchange it with the second element.

4) We put the second smallest element in its final position, i.e. A[1].

5) On the ith pass, which we number from 0 to n-2, the algorithm searches for the
smallest item among the last n-i elements and swaps it with A[i].

Algorithm SelectionSort(A[0….n-1])

// Sorts given array using selection sort.

// Input: An array A[0..n-1] of orderable elements.

// Output: An array A[0…..n-1] sorted in ascending order.

for i←0 to n-2 do

min←i


for j←i+1 to n-1 do

if A[j]<A[min]

min←j

swap A[i] and A[min]
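A Python sketch of selection sort, sorting in place (the function name selection_sort is ours):

def selection_sort(a):
    # On pass i, find the smallest element among a[i..n-1] and swap it into a[i].
    n = len(a)
    for i in range(n - 1):
        min_index = i
        for j in range(i + 1, n):
            if a[j] < a[min_index]:
                min_index = j
        a[i], a[min_index] = a[min_index], a[i]

# Example:
values = [10, 5, 2, 0, 4]
selection_sort(values)
print(values)   # [0, 2, 4, 5, 10]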

Analysis:

The basic operation is the key comparison A[j] < A[min]; it is executed the same
number of times for every input of size n, namely
Σ_{i=0}^{n-2} Σ_{j=i+1}^{n-1} 1 = n(n-1)/2 ∈ Θ(n²).

Ex: To sort 10 5 2 0 4 (indices 0 to 4):

i = 0: the smallest of A[0..4] = 10 5 2 0 4 is 0 (index 3); swap it with A[0] → 0 5 2 10 4

i = 1: the smallest of A[1..4] = 5 2 10 4 is 2 (index 2); swap it with A[1] → 0 2 5 10 4

i = 2: the smallest of A[2..4] = 5 10 4 is 4 (index 4); swap it with A[2] → 0 2 4 10 5

i = 3: the smallest of A[3..4] = 10 5 is 5 (index 4); swap it with A[3] → 0 2 4 5 10

The array is now sorted.
Bubble Sort
1) In this technique, two successive items A[j] and A[j+1] are exchanged whenever
A[j] > A[j+1].

2) By doing it repeatedly we end up “bubbling up” the largest element to


the last position on the list.

3) The next pass bubbles up the second largest element, and so on until,
after n-1 passes, the list is sorted.

ALGORITHM BubbleSort(A[0…n-1])

//Sorts the array using bubble sort.

//Input: An array A[0….n-1] of orderable elements.

//Output: An Array A[0….n-1] in ascending order.

for i←0 to n-2 do

for j←0 to n-2-i do

if A[j+1]<A[j]

swap A[j+1] and A[j]
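A Python sketch of the same algorithm (the function name bubble_sort is ours):

def bubble_sort(a):
    # Repeatedly swap adjacent out-of-order items; after pass i the i+1
    # largest elements are already in their final positions.
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j + 1] < a[j]:
                a[j], a[j + 1] = a[j + 1], a[j]

# Example:
values = [40, 50, 30, 20, 10]
bubble_sort(values)
print(values)   # [10, 20, 30, 40, 50]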

Ex: Consider 40, 50, 30, 20, 10.

First pass:  40 50 30 20 10 → 40 30 50 20 10 → 40 30 20 50 10 → 40 30 20 10 50

Second pass: 40 30 20 10 50 → 30 40 20 10 50 → 30 20 40 10 50 → 30 20 10 40 50

Third pass:  30 20 10 40 50 → 20 30 10 40 50 → 20 10 30 40 50

Fourth pass: 20 10 30 40 50 → 10 20 30 40 50

The number of key comparisons for bubble sort is the same for all arrays of size n:

C(n) = Σ_{i=0}^{n-2} Σ_{j=0}^{n-2-i} 1 = Σ_{i=0}^{n-2} [(n-2-i) - 0 + 1] = Σ_{i=0}^{n-2} (n-1-i) = n(n-1)/2 ∈ Θ(n²)
This version of the algorithm can be improved by exploiting the following
observation:

If a pass through the list makes no exchanges, the list has been sorted and we can
stop the algorithm.

Though the new version runs faster on some inputs, it is still Θ(n²) in the worst
and average cases. In the best case (an already sorted list) it makes only n-1
comparisons.

ALGORITHM BubbleSort(A[0…n-1])

//Sorts the array using bubble sort.

//Input: An array A[0….n-1] of orderable elements.

//Output: An Array A[0….n-1] in ascending order.


for i ← 0 to n-2 do

count ← 0

for j ← 0 to n-2-i do

if A[j+1] < A[j]

swap A[j+1] and A[j]

count ← count + 1

if count = 0

return
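A Python sketch of the improved version; it uses a boolean flag instead of the counter, but stops under the same condition, namely that a full pass made no swaps (the function name bubble_sort_improved is ours):

def bubble_sort_improved(a):
    # Stop as soon as a complete pass makes no exchanges.
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j + 1] < a[j]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:
            return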

Sequential Search
The algorithm simply compares successive elements of a given list
with a given search key until either a match is encountered(successful
search) or the list is exhausted without finding a match(unsuccessful search).

A simple extra trick is often employed in implementing sequential


search

If we append the search key to the end of the list, the search for the
key will have to be successful, and therefore we can eliminate a check for
the list’s end on each iteration of the algorithm.

ALGORITHM Sequential Search(A[0….n-1],k)

// Searches the array using Sequential Search method.

//Input: An array A[0..n-1] of elements and a key element k which is to

//be searched for.

//Output: if found, returns the position where the element found else returns
-1.

A[n] ← k

i ← 0

while A[i] ≠ k do

i ← i + 1

if i < n return i


else return -1
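A Python sketch of the sentinel version; it works on a copy so the caller's list is not modified (the function name sequential_search_sentinel is ours):

def sequential_search_sentinel(a, key):
    # Append the key as a sentinel so the loop needs no explicit bounds check.
    a = a + [key]
    i = 0
    while a[i] != key:
        i += 1
    return i if i < len(a) - 1 else -1

# Example: sequential_search_sentinel([9, 4, 7], 4) returns 1; searching for 5 returns -1.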

Another improvement can be made if the list is sorted.

Searching in such a list can be stopped as soon as an element greater


than or equal to the search key is encountered.

Matrix Multiplication
//Multiplication of 2 nxn matrices

//Input :Matrices A and B.

//Output: C=A*B

for i←0 to n-1 do

for j←0 to n-1 do

C[i,j] ←0

for k←0 to n-1 do

C[i,j] ← C[i,j] + A[i,k] * B[k,j]

return C.

String Matching
Here we are given a string of n characters called the text and a string of m
characters (m ≤ n) called the pattern; we want to find a substring of the text that
matches the pattern.

ALGORITHM BruteForceStringMatching (T[0…n-1],p[0…m-1])

//Implements String matching

//Input: text array T of n characters, and pattern array p of m characters.

//Output: Position of first character of pattern if successful otherwise -1

for i←0 to n-m do

j←0


while j<m and P[j]=T[i+j]

j←j+1

if j=m

return i

return -1
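A Python sketch of the brute-force matcher (the function name brute_force_string_match is ours):

def brute_force_string_match(text, pattern):
    # Try each of the n - m + 1 alignments of the pattern, left to right.
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        j = 0
        while j < m and pattern[j] == text[i + j]:
            j += 1
        if j == m:
            return i          # index of the first matching character in the text
    return -1

# Example: brute_force_string_match("NOBODY_NOTICED", "NOT") returns 7.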

Solving problem
1. Align the pattern against the first m characters of the text and start
matching the corresponding pairs of characters from left to right until
either all the m pairs of the characters match( then the algorithm can
stop) or a mismatching pair is encountered.

2. In the latter case the pattern is shifted one position to the right and
character comparisons are resumed,

3. Starting again with the first character of the pattern and its
counterpart in the text.

4. We do the comparison up to n-m position beyond that position there


are not enough characters that match the entire pattern.

Tracing of String Matching


In typical examples of searching in natural-language text, the algorithm shifts the
pattern almost always after a single character comparison.

The worst case occurs when the algorithm has to make all m comparisons before
shifting the pattern, and this can happen for each of the n-m+1 tries.

Thus in the worst case the efficiency is Θ(nm):


Cworst(n, m) = Σ_{i=0}^{n-m} Σ_{j=0}^{m-1} 1 = Σ_{i=0}^{n-m} m = m(n-m+1) ∈ Θ(nm).

For searching in random natural-language text, however, the average-case efficiency
has been shown to be linear, i.e. Θ(n + m) = Θ(n).

Questions
1. Explain the brute force method. Write an algorithm to sort an array using the
brute force method. Analyze its efficiency.

2. Discuss the travelling salesman problem and the knapsack problem using the
exhaustive search technique.
