
Unit –I

Introduction:- What is an algorithm, Algorithm Specification,


Performance Analysis, Randomized Algorithms
Elementary Data Structures: - Stacks and Queues, Trees,
Dictionaries, Priority Queues, Sets and Disjoint Set Union, Graphs.

Introduction
Q) Define an Algorithm. What are the properties of algorithms?
Ans
Algorithm :(Defn.):-

An algorithm is a finite set of instructions which when followed accomplishes


a particular task.

Characteristics of an algorithm(properties):-

 Input:- Zero or more quantities are to be applied as inputs. An algorithm has


zero or more input quantities that are given to it initially before the algorithm
begins, or dynamically as the algorithm runs. These inputs are taken from a
specified set of objects. These inputs can also be applied externally to the
algorithm.

 Output:- At least one quantity must be produced as output. An algorithm


has one or more output quantities that have a specified relation with the
inputs.

 Definiteness:- Each instruction must be clear and there is no ambiguity.


Each operation specified must have a definite meaning that it must be
perfectly clear. Each step of an algorithm precisely defined. The actions to be
carried out must be rigorously and unambiguously specified for each case.
Ex:- “compute 10/0” and “add 2 or 6 to a” are not definite.

 Finiteness:- An algorithm must always terminate after a finite no. of steps. If
we trace the instructions of the algorithm, then for all cases of inputs the
algorithm must terminate after a finite no. of steps.

 Effectiveness:- Each operation should be effective, i.e., the operation must be
capable of being carried out in a finite amount of time. An algorithm is generally
expected to be effective in the sense that its operations must all be
sufficiently basic that they can in principle be done exactly and in a finite
length of time by someone using pencil and paper.

Q) What do you mean by algorithm analysis?


Q) Write about debugging and profiling?
Ans:-
Study of Algorithms:-

In the study of algorithms there are four distinct areas –

How to design algorithms?


How to validate algorithms?
How to analyze algorithms?
How to test the programs?

Designing Algorithms:-

Creating an algorithm is an art which may never be fully automated. There


are different types of designing techniques depending upon the task to be
performed. The various designing techniques available are

1. Greedy Method:- It is useful to solve problems with ‘n’ inputs.


2. Divide & Conquer:- Useful to solve problems with independent sub
problems.
3. Dynamic Programming:- Useful to solve problems with dependent sub
problems.
4. Back Tracking: - It is used to obtain the solution by imposing implicit &
explicit constraints on the entire solution set.
5. Branch & Bound: - The back tracking algorithm is effective for decision
Problems but it is not designed for optimization Problems. This drawback is
rectified in branch and bound technique.

Validation of algorithms:-

Verification means are we doing the process right


Validation means are we doing the right process.
Once an algorithm is designed it is necessary to show that it computes
correct answers for all possible legal inputs. This process is known as validation of
algorithms. At this stage there is no need to express the algorithm as a program.
Purpose of validations is to ensure that the algorithm will work correctly
independent of Programming Languages.

Analyzing Algorithms:-

As an Algorithm is executed it uses resources like processor and memory.


Analysis of algorithms is used for the performance evaluation of algorithms. i.e., the
time and space complexities must be determined.
Time complexity refers to the amount of processor time required to
execute a program.
Space complexity refers to the amount of memory occupied by the
program.
Testing Programs:-

Testing of Programs consist of two Phases-


1. Debugging
2. Profiling
Debugging is a process of executing a program on sample data to determine
whether faulty results will occur and if at all any errors occur how to correct them.
Debugging is concerned with conducting tests to uncover errors and ensure that
the defined input will produce the actual results that agree with the desired results.
Debugging can only point to presence of errors, but not to their absence.
Debugging is not testing but always occurs as a consequence of testing. Debugging
begins with the execution of a test case. The debugging process attempts to match
symptom with cause, thereby leading to error correction. Debugging has two
outcomes- Either the error is detected and corrected or the error is not found.

Profiling is the process of executing a correct program on data sets with real
time data and measuring the time and space complexities. This is also called as
performance profile. These timing figures are useful as they may confirm and point
out logical places to perform useful optimization. Profiling is done on programs that
are devised, coded, proved correct and debugged on a computer.

Q) Define Pseudo Program. Give the general procedure for writing a


pseudocode.
Ans:-
Computational Procedures/Pseudo Programs:-
Algorithms which are finite and effective are also called as Computational
procedures and are also treated as Pseudo Programs.

Algorithm Specification:-
PseudoCode Conventions:-

The manner in which we describe an algorithm is specified as ‘Algorithm


Specification’. Algorithms are generally specified in a natural language such as
English called as Pseudo code. Given below are some of the conventions to be
followed while writing a Pseudo code Program.

1. Comments begin with // and continue until the end of line.


2. Blocks are indicated with matching braces {and}. In general statement blocks
and body of procedures are enclosed within braces.
3. Statements are delimited by semicolon (;).
4. An identifier begins with a letter. The data types of variables are not explicitly
declared. The types will be clear from the context of usage.
5. Simple data types such as integer, float, char, Boolean can be used. In order
to form a compound data type records are used. Records are written as given
below:

Node = record
{
    Datatype-1 data1;
    ...
    Datatype-n datan;
    Node *link;
}
The individual data elements are accessed using ->
6. Assignment of values to variables is done using the assignment statement
(variable):=(Expression);
7. There are two Boolean values true and false. Logical operators supported are
‘AND’, ‘OR’, ‘NOT’. Relational Operators Provided are <, ≤, >, ≥, =, ≠ .
8. Array indices start at Zero. The elements of multidimensional arrays are
accessed using square braces i.e.,[ and ]. If A is a two dimensional array then
the (i, j)th element is denoted as A[i, j].
9. There are 3 looping constructs available.
a. While loop:-
While (condition) do
{
<statement 1> ……
<statement n>
}
b. For Loop:-
For variable:=value1 to value2 step step do
{
<statement 1> ……
<statement n>
}
c. Repeat until
repeat
<statement 1> ……
<statement n>
until(condition)
10. There are 3 conditional checking statements.
a. If(condition) then (statement)
b. If(condition) then (statement 1) else (statement 2)
c. Case
{
:(condition 1): (statement 1) …..
:(condition n): (statement n)
:else : (statement n+1)
}

11. Read is used as the input statement and Write as the output statement.

12. There is only one type of procedure: Algorithm. An algorithm consists of a heading and a body.
13. The heading is of the form
    Algorithm name(<parameter list>)
14. The body consists of one or more statements enclosed within braces.

Q) What are priori and posterior analysis of algorithms? Explain.


Q) Distinguish between a priori analysis and a posteriori testing.
Ans:-
Performance Evaluation:-

The basic criterion on which an algorithm can be evaluated is

1. Does it do what we want it to do?


2. Does it work correctly according to the original specifications of the task?
3. Is there documentation that describes how to use it and how it works?
4. Are procedures created in such a way that they perform logical sub
functions?
5. Is the code readable?
Performance evaluation is generally classified into two major phases –

1. Performance analysis(Priori estimates) and


2. Performance measurement(Posteriori testing)

Priori analysis is basically machine and programming language


independent. In this analysis we basically determine the order of magnitude/
frequency count of the steps/ statements. This can be determined directly from the
algorithm, independent of the machine on which it is executed and the
programming language in which it is written.

Example:
A: For i:=1 to n step 1 do
X := n+y;

B: for i:=1 to n do
For j:=1 to n do
X := x+y;

In program segment 'A' the step count is one and the frequency count
is 'n'. Similarly, for program segment 'B' the step count is one and the frequency count is
(n*n) = n².

Posteriori testing is concerned with collecting the actual statistics


about the algorithms consumption of time and space while it is executing. Therefore
it is machine dependent and programming language dependent.

Q) What is the time complexity of an algorithm? Explain with an example.

Q) Differentiate between space complexity and time complexity?

Performance Analysis:-

During the performance analysis the following concepts will be considered.


1. Memory and time requirements of a problem.
2. Asymptotic Notations (O, o, Ω, ω, Θ).
3. Measuring the actual runtime of a problem by using the clocking
functions.

The space complexity of the program is the amount of free memory the
program needs to run to completion. The space complexity of a program can be
used to make decisions about memory such as whether sufficient memory is
available to run the program or not. If the system is a multiprocessing system then
the space complexity becomes an important aspect.
The time complexity of the program is the amount of time needed by the
program to run to completion. The time complexity of the program is calculated for
the following reasons.

1. Some computers require users to provide an upper limit on the amount of


processing time.
2. To select an alternative solution to the same problem.
3. The program we are developing might need to provide a satisfactory real
time response.

Space Complexity:-
The memory requirements for most of the programs will be as follows –
Instruction Memory/Space
Data Memory/Space
Environmental Stack

Instruction Memory:- It is the space needed to store the compiled version of the
program instruction. This depends on factors such as compilers, computer options
etc. Nature of this space is static.
Data Memory:- It is the memory needed to store all the constants and variable
values. It also includes the memory allocated for dynamic memory allocation
process. The amount of space required for a structure variable can be obtained by
adding the space requirements of all its components.

Ex:- struct stack


{
int a[20];
int top;
}; struct stack s;
Memory occupied by s would be 20*2 + 2 = 42 bytes (assuming 2-byte integers).
Data space can be organized as either static or dynamic.
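The figure above assumes 2-byte integers; on most present-day compilers an int occupies 4 bytes, and the exact data space of a record can be checked with the sizeof operator. A minimal, purely illustrative C sketch:

#include <stdio.h>

struct stack {
    int a[20];   /* array part of the data space              */
    int top;     /* index of the topmost element               */
};

int main(void) {
    struct stack s;
    /* Prints 20*sizeof(int) + sizeof(int), possibly with padding:
       84 bytes with 4-byte ints, 42 bytes with 2-byte ints.    */
    printf("data space of s = %zu bytes\n", sizeof s);
    return 0;
}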

Environmental Stack:- It is used to save the information needed to resume the


execution of partially completed functions. It completely stores and deals with
functions and temporary results. Each time a function is invoked, the following data
is stored in the environment stack.

1. Return address of next instruction.


2. Values of local variables and formal parameters.
3. While dealing with recursive functions, the intermediate (temporary)
results are also stored in the environmental stack.
Based on the above facts we can conclude that the space requirement is the
sum of the following components – A fixed part and a variable part.
The space requirement S(P) of any algorithm P is represented as
S (P) = c + Sp
Where c is a constant and Sp is the variable part.

Time Complexity:-

The time complexity of a program mainly depends upon speed of the system
and the number of instructions present in the program. It also depends on the
length of each program instruction.
Let T(P) be the time taken for program ‘P’. It will be both compilation time and
runtime. But the compilation of a program is needed only at the beginning of the
program. Once the program is compiled there is no need to recompile the program
unless any changes are made to the program. So, after the first time generally, the
program is executed without compilation. Hence for the first time

T (P) = c(P) + r(P)

From 2nd time onwards T (P) = r(P)


r(P) is the runtime for the program.

It depends upon no. of factors and all of those are to be considered before
calculating the runtime. For this purpose two techniques are adopted.
A Program step is defined as a syntactically or semantically meaningful
segment of a program that has an execution time independent of the instance
characteristics.

1. Operation counts
2. Step counts

Operation Counts:-

The time complexity of a program can be calculated by using counts for the
same type of operations like no. of additions, subtractions, multiplications etc.,
Step Counts:-

The drawback in operation counts method is that the time complexity of only
the selected operations is considered and the time spent on other instructions and
operations is omitted. In step count method we can calculate the time spent on all
parts of the program. The step count is the function of instance characteristics
(variables). So during this process we have to calculate counts per variable. After
relevant instance characteristics have been selected we can define a step. A Step
is any computation unit that is independent of the characteristics. When analyzing a
recursive program step counts are calculated using recursive formulas.

When the chosen parameters are not sufficient to determine the step counts
then we make use of three kinds of step counts.
1. The best case step count is the minimum no. of steps that can be executed
for the given parameters.
2. The worst case step count is the maximum no. of steps that can be executed
for the given parameters.
3. The average step count is the average no. of steps executed on instances
with the given parameters.
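As an illustration of best-case and worst-case step counts, the hypothetical C sketch below instruments a sequential search with a global counter (the counter variable and function names are assumptions, not from the text); the count it reports is a measured step count for one run, not a formal complexity proof.

#include <stdio.h>

static long steps = 0;                      /* global step counter */

/* Sequential search instrumented with a step count per comparison. */
int seq_search(const int a[], int n, int x) {
    for (int i = 0; i < n; i++) {
        steps++;                            /* one comparison step */
        if (a[i] == x) return i;
    }
    return -1;
}

int main(void) {
    int a[] = {10, 20, 30, 40, 50};
    steps = 0; seq_search(a, 5, 10);        /* best case: element in the first position */
    printf("best case steps  = %ld\n", steps);
    steps = 0; seq_search(a, 5, 99);        /* worst case: element not present, n steps */
    printf("worst case steps = %ld\n", steps);
    return 0;
}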

Q) What is a priori analysis? Explain asymptotic notations used for


determining the timing complexities of the algorithm.
Q) Explain the mathematical notation and definitions used for analyzing
algorithms.
Q) Explain the asymptotic notation used to analyze the algorithms.

Q) What is asymptotic notation? Briefly explain how timing complexity of


an algorithm is derived at using priori analysis.
Q) Define the following asymptotic notations. {i} Big ‘oh’ {ii} Omega {iii}
Theta
Q) Define the Big –O notation used for expressing the complexity of an
algorithm and briefly explain its properties.

Asymptotic Notations:-

Important reasons for determining the operation and step counts are
1. To compare the time complexities.
2. To predict the growth in runtime by changing inputs.
Neither of these counts yield to an accurate measure of time complexity
because when we use operation counts, we focus only on certain key operations
and ignore all of the other operations. Similarly while using step counts we only
consider certain variables and ignore other aspects of the program.
There is a notation that will enable us to make meaningful statements about
time and space complexities. These notations are called as Asymptotic Notations.
These notations are used to describe the behavior of the time and space
complexities.
The most commonly used notations are O (Big "oh"), o (Small "oh"), Ω (Omega), ω (Little omega) and Θ (Theta).

O(Big “oh”):- This notation provides an upper bound for the functions. It can be
defined as
The function f(n) = O(g(n)) (read as "f of n is big oh of g of n") iff there exist
positive constants c and n0 such that f(n) ≤ c*g(n) for all n, n ≥ n0.
For example, 3n+2 = O(n) because 3n+2 ≤ 4n for all n ≥ 2 (take c = 4 and n0 = 2).

f(n) is the function of time and space complexities and g(n) is the function of step
counts or operation counts. Some of the most commonly used notations are –

O(1) – When the computing time is a constant.


O(n) – When the no. of comparisons increases linearly with the no. of inputs.
O (n2) – Quadratic
O (n3) - Cubic
O (2n) – Exponential
O (log n) – Logarithmic
O (n!) – Factorial

Ω (Omega):- This notation is used to provide a lower bound for the functions. It can
be defined as –
The function f(n) = Ω(g(n)) (read as "f of n is omega of g of n") iff there exist
positive constants c and n0 such that f(n) ≥ c*g(n) for all n, n ≥ n0.

Θ (Theta):-
This notation is used when the function f is bounded by both upper and lower limits.
The function is defined as –
The function f(n) = Θ(g(n)) (read as "f of n is theta of g of n") iff there exist
positive constants c1, c2 and n0 such that c1*g(n) ≤ f(n) ≤ c2*g(n) for all n, n ≥ n0.

o (Small oh), ω (Little omega):- Left for the purpose of students' study.

Q) What is a randomized algorithm? Classify the randomized algorithms.


Randomized Algorithms:-
An algorithm that uses a randomizer (like a random number generator) is
called a randomized algorithm. The output of a randomizer is used in making
decisions in the algorithm. For the same input, the output of a randomizer and the
execution time of the randomized algorithm differ from run to run.
Randomized algorithms can be classified into two categories:
1. Las Vegas algorithms
2. Monte Carlo algorithms
Las Vegas algorithms:- These algorithms always produce the same (correct)
output for the same input. The execution time of these algorithms depends upon
the randomizer output and is characterized as a random variable.
Monte Carlo algorithms:- These algorithms may produce different outputs for the
same input. In this type of algorithm there is a chance of getting an incorrect
answer, which is not desirable. However, for a fixed input there is less
variation in the execution time when compared to Las Vegas algorithms.
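As a small hedged illustration (not from the text), the C sketch below shows a Las Vegas style routine: it keeps probing random positions until it finds an occurrence of a value that is assumed to fill at least half of the array, so the answer is always correct but the number of probes varies from run to run.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Las Vegas: always returns a correct index of x; the running time is random.
   Assumption: x occupies at least half the positions of a[0..n-1].          */
int random_find(const int a[], int n, int x) {
    for (;;) {
        int i = rand() % n;          /* the randomizer output drives the decision */
        if (a[i] == x) return i;     /* whenever it stops, the answer is correct  */
    }
}

int main(void) {
    int a[] = {7, 3, 7, 7, 1, 7, 7, 2};
    srand((unsigned)time(NULL));
    printf("found 7 at index %d\n", random_find(a, 8, 7));
    return 0;
}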

Q) What are the advantages and disadvantages of randomized algorithms?


Advantages:-
Simple and efficient to use.
Yields better complexity bounds.
Disadvantages:-
There is a probability of failure (an incorrect answer or a long running time).
The repeated use of the randomizer inside the algorithm adds overhead, which can
reduce performance.

Elementary Data Structures:-


Q) What is a stack?

Stack:-

Stacks are used to implement LIFO mechanisms. Insertions and deletions can
be done only from one end called the top. To implement a stack we need an array
and a variable top.
[Figure: a stack implemented in an array of size 5 with positions 0 to 4; the variable top indexes the topmost element.]

When the stack is empty top will be < 0.


When the stack is full top will be ≥ size-1.
The two operations that can be performed on stack are push and pop.
Q) Write an algorithm for inserting and deleting elements into a stack.
Push Operation:

Push operation is performed by performing the following steps.


1. Check for stack overflow.
2. Increment top
3. Insert the item.
Design:-
Algorithm Push(Item)
//Push is used to insert an element into stack.
//n is the size of stack and item is the element to be inserted into stack.
//algorithm returns true if successful, else returns false.

{
if (top ≥ n-1) then
{
write (“ stack is full”);
return false;
}
else
{
top:=top+1;
stack[top]:=item;
return true;
}
}

Analysis:-
The time taken for inserting an element into the stack will be constant
because the same set of instructions (top=top+1 and stack [top] =item) are
executed every time we insert an element. Also the memory requirement for each
and every element would be the same. Therefore the complexity of push
operation can be given as O(1).

(calculation of space complexity given for students as a workout)

Pop Operation:-

Pop operation is performed by performing the following steps.


1. Check for stack underflow.
2. Delete the topmost element by copying it into item.
3. Decrement top.
Design:-
Algorithm Pop(Item)
//Pop is used to delete an element from stack.
//n is the size of stack
//item is used as output to hold the element deleted from stack.
//algorithm returns true if successful, else returns false.

{
if (top < 0) then
{
write (“ stack is Empty”);
return false;
}
else
{
item:=stack[top];
top:=top-1;
return true;
}
}
Analysis:-
The time taken for deleting an element from the stack will be constant
because the same set of instructions is executed every time we delete an
element. Also the memory requirement for each and every element would be the
same. Therefore the complexity of pop operation can be given as O(1).

(Calculation of space complexity pending)
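A hedged C translation of the push and pop algorithms above (an array stack of fixed size; the name STACK_SIZE and the use of a global top are assumptions made for the sketch):

#include <stdio.h>
#include <stdbool.h>

#define STACK_SIZE 100                /* n, the size of the stack (assumed) */

static int stack[STACK_SIZE];
static int top = -1;                  /* stack is empty when top < 0 */

bool push(int item) {
    if (top >= STACK_SIZE - 1) {      /* stack overflow */
        printf("stack is full\n");
        return false;
    }
    stack[++top] = item;              /* increment top, then insert the item */
    return true;
}

bool pop(int *item) {
    if (top < 0) {                    /* stack underflow */
        printf("stack is empty\n");
        return false;
    }
    *item = stack[top--];             /* copy the topmost element, decrement top */
    return true;
}

int main(void) {
    int x;
    push(10); push(20);
    if (pop(&x)) printf("popped %d\n", x);   /* prints 20 */
    return 0;
}

Both operations execute a constant number of statements, which is consistent with the O(1) bounds stated above.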

Q) What is a Queue?
Queues:-

Queue is a FIFO structure in which insertions are done from one end (Rear)
and deletions from the other end (Front). An array is taken to implement a Queue
and two integers to represent Front and rear.
When the queue is empty, F<0 and R<0.
When the Queue is full, R ≥ size-1.
Both insertion and deletion can be done on Queues.

Q) Write algorithms for inserting and deleting elements into Queue.


Insert Operation:-

Insert operation is performed by performing the following steps.


1. Check for Queue overflow.
2. Increment Rear.
3. insert item into queue at rear.

Design:-
Algorithm QInsert (Q, Front, Rear, n, Item)
//Insert is used to insert an element into Queue.
//n is the size of Queue
//item is used as input to hold the element to be inserted into queue.
//Front and Rear have been set to -1 prior to the first invocation.
//algorithm returns true if successful, else returns false.
{
  if (Rear = n-1) then
  {
    write ("Queue is Full");
    return false;
  }
  else
  {
    Rear := Rear + 1;
    Q[Rear] := Item;
    return true;
  }
}
Analysis:-

( calculation of space complexity)
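A hedged C sketch of the linear-queue insert, together with a matching delete (the name QSIZE is an assumption; front and rear start at -1, and front points to the position just before the first element, so the queue is empty exactly when front == rear):

#include <stdio.h>
#include <stdbool.h>

#define QSIZE 100

static int q[QSIZE];
static int front = -1, rear = -1;   /* set to -1 prior to the first insertion */

bool q_insert(int item) {
    if (rear >= QSIZE - 1) {        /* queue overflow */
        printf("queue is full\n");
        return false;
    }
    q[++rear] = item;               /* increment rear and insert at the rear end */
    return true;
}

bool q_delete(int *item) {
    if (front == rear) {            /* no elements between front and rear */
        printf("queue is empty\n");
        return false;
    }
    *item = q[++front];             /* delete from the front end */
    return true;
}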

Q) Write an algorithm to insert an element into a circular queue.


Circular Queue:- The drawback of the linear queue is that even though there are empty
locations at the front end, insertion is not possible once r = size-1.
This drawback can be overcome by using a circular queue, in which the last and first
positions are assumed to be adjacent to each other.
For example, a circular queue of size 6 holding 30, 40, 50, 60 in positions 2 to 5
(f = 2, r = 5) can still accept elements even though r = size-1: r is reinitialized
to 0 and the next elements (say 60 and 70) are placed in positions 0 and 1, giving
r = 1 while f remains 2.

Initially r = -1 and f = 0, so (r+1) % size = 0 % 6 = 0 = f.

Thus (r+1) % size = f in two cases:
(i) when the circular queue is full, and
(ii) when the circular queue is empty.
To distinguish them:
* If (((r+1) % size == f) && (r != -1)) then the circular queue is full.
* If (((r+1) % size == f) && (r == -1)) then the circular queue is empty.

Circular queue also follows FIFO and we need an array and two variables. The
operations which can be performed on circular queue are insertions and deletions.

Insertion:- If (r+1) % size == f and r != -1, the queue is full and insertion is not
possible. Else if r == size-1, reinitialize r = 0 and insert the element at Q[r];
else increment r by 1 and insert the element at Q[r].

Deletion:- If (r+1) % size == f and r == -1, the queue is empty and deletion is not
possible. Else if f == r, the queue contains only one element: copy Q[f] into item
and reinitialize f = 0 and r = -1. Else if f == size-1, copy Q[f] into item and
reinitialize f = 0. Else copy Q[f] into item and increment f by 1.

Design:-
Algorithm Insert(item)
// insert an item into the circular queue
// return true if successful, else return false
// item is used as input
{
  if ((((r+1) mod size) = f) and (r ≠ -1)) then
  {
    write ("circular queue is full");
    return false;
  }
  else if (r = size-1) then
  {
    r := 0;
    queue[r] := item;
    return true;
  }
  else
  {
    r := r+1;
    queue[r] := item;
    return true;
  }
}
Analysis:- Since the time and memory requirements for the insertion of an element
into the circular queue are constant, the time and space complexities of the insert
algorithm are O(1).

Design:-
Algorithm Delete(item)
// delete an element from the circular queue
// return true if successful, else return false
// item is used as output
{
  if ((((r+1) mod size) = f) and (r = -1)) then
  {
    write ("circular queue is empty");
    return false;
  }
  else if (f = r) then
  {
    item := queue[f];
    f := 0; r := -1;
    return true;
  }
  else if (f = size-1) then
  {
    item := queue[f];
    f := 0;
    return true;
  }
  else
  {
    item := queue[f];
    f := f+1;
    return true;
  }
}
Analysis:- Since the time requirement for the deletion of an element is constant,
the time complexity of the delete algorithm is O(1).
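A hedged C sketch of the circular-queue insert and delete using the same full/empty test, (r+1) % size == f, with r == -1 distinguishing the empty case (SIZE and the function names are assumptions for the sketch):

#include <stdio.h>
#include <stdbool.h>

#define SIZE 6

static int cq[SIZE];
static int f = 0, r = -1;                   /* f = front, r = rear; empty when r == -1 */

bool cq_insert(int item) {
    if ((r + 1) % SIZE == f && r != -1) {   /* circular queue is full */
        printf("circular queue is full\n");
        return false;
    }
    if (r == SIZE - 1) r = 0;               /* wrap rear around to the front of the array */
    else r = r + 1;
    cq[r] = item;
    return true;
}

bool cq_delete(int *item) {
    if ((r + 1) % SIZE == f && r == -1) {   /* circular queue is empty */
        printf("circular queue is empty\n");
        return false;
    }
    *item = cq[f];
    if (f == r) { f = 0; r = -1; }          /* only one element: reset to empty */
    else if (f == SIZE - 1) f = 0;          /* wrap front around */
    else f = f + 1;
    return true;
}

int main(void) {
    int x;
    cq_insert(30); cq_insert(40); cq_insert(50);
    cq_delete(&x); printf("deleted %d\n", x);   /* prints 30 */
    return 0;
}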

Q) Show how to represent a Dequeue in a one dimensional array and write


algorithms which insert and delete at either end.
Ans:- Left for student exercise.

Trees and their terminology:-

1. Tree:- A tree is defined as a nonlinear container that models in a hierarchical


relationship in which all but one element has a unique predecessor(parent)
but may have many successors(children). The unique parentless element is
called the “root” of the tree. (or) A tree ‘T’ is a finite nonempty set of
elements. One of these elements is called the “root”, and the remaining
elements (if any) are partitioned into trees which are called the sub trees of
‘T’.
2. Node:- The elements of a tree are called nodes. Every node has a unique
path connecting it to the root of the tree.
3. Path:- Path is a sequence of adjacent elements.
4. Length of a path:- The length of a path is the number of its adjacent
connections, which is one less than the number of nodes that it connects. In
the tree shown below, the path(M,H,C,A) connecting node M to root node A
has length 3.

[Example tree: root A with children B, C and D; B has a single child E; C has children F, G and H; D is a leaf. The leaves I through P form level 3, with I and J being children of E and M being one of the five children of H.]

5. Depth of a node:- The depth of a node is the length of its path up to the root.
In the tree shown above, node E has depth 2 and node M has depth 3. The root
itself has depth 0.
6. Level of a tree:- The level of a tree means all the nodes at a given depth. In
the tree with root A shown above, level 2 consists of the set of node {E, F, G, H}.
7. Height of a tree:- The height of a tree is the greatest depth among all its
nodes. The tree shown above has height 3.
8. Singleton Tree:- The tree whose root is its only node is called a singleton tree.
The height of a singleton tree is Zero.
9. Empty tree:- The tree with zero nodes is called the empty tree. The height of
an empty tree is defined to be -1.
10. Degree of a node:- The degree of a node is the number of its children. In the
above example, B has degree 1, D has degree 0, and H has degree 5.
11. Degree of a tree:- The degree of a tree is the maximum of its element
degree.
12. Ancestor:- For each node x, let P(x) denote the path from x to the root of the
tree. For example in the tree shown here, P(M)=(M,H,C,A). Except for x itself, the
nodes in P(x) are called the ancestors of x. For example, H, C, and A are the
ancestors of M. The root of a tree is the ancestor of all other nodes, and it is the
only node that has no ancestors.
13. Descendant:- We say that x is a descendant of y if y is an ancestor of x. In
this example, M is a descendant of C, so are F, G, H, K, L, N, O and P. All nodes
except the root itself are descendants of the root node.
Q) Briefly explain about Binary Trees.
Q) What is a Binary Tree?
Binary Tree:-
A binary tree T is defined as a finite set of elements, called nodes such that
T is empty(called the null tree or empty tree) or
T contains a distinguished node R, called the root of T, and the
remaining nodes of T form an ordered pair of disjoint binary trees T 1 and
T2.
(or)
A Binary tree is a finite (possibly empty) collection of elements. When the
binary tree is not empty, it has a root element and the remaining element(if any)
are partitioned into two binary trees, which are called the left and right sub trees
of T.

Q) Distinguish between Binary tree and Tree?

Difference between a Binary tree and a Tree:-

Binary Tree:
- A binary tree can be empty.
- Each element in a binary tree has exactly two sub trees (either of which may be empty).
- The sub trees of each element in a binary tree are ordered as left and right sub trees.

Tree:
- A tree cannot be empty.
- Each element in a tree can have any number of sub trees.
- The sub trees in a tree are not ordered.

Q) Define full binary tree and complete binary tree?

Full Binary Tree:-

A binary tree that contains the maximum possible number of nodes at every level,
including the last, is called a full binary tree; a full binary tree with h levels
contains exactly 2^h - 1 elements.
Complete Binary Tree:-
A binary tree T with n elements is called complete if its elements occupy the first
n positions of the full binary tree of the same height, i.e., every level except
possibly the last is completely filled and the last level is filled from left to
right. Level L of a binary tree can have at most 2^L nodes.

Q) Discuss about formula based (array) representation of binary trees.


Discuss where this representation is useful and where it is convenient?
Q) Explain about linked representation of a binary tree?
Ans:- Notes given in class.

Q) Define Binary Search Tree? Explain the procedure for constructing a


binary search Tree?
Ans:-
Binary Search Tree: -
A binary search tree is a binary tree it may be empty. If it is not empty it satisfies
the following properties.
(1) Every element has a key and no two elements have the same key
(2) The keys in the left sub-tree are smaller than the key in the root
(3) The keys in the right sub-tree are larger than the key in the root
(4) The left and right sub-trees are also binary search trees
CONSTRUCT A BINARY SEARCH TREE FOR THE FOLLOWING KEYS: -
3, 7, 11, 1, 25, 15, 27, 35, 20, 43

[Figure: the binary search tree obtained by inserting the keys in the given order.
3 is the root with left child 1 and right child 7; 7 has right child 11; 11 has
right child 25; 25 has left child 15 and right child 27; 15 has right child 20;
27 has right child 35; 35 has right child 43.]

LEFT SIZE OF A NODE:-

In a binary search tree the left size of a node is defined as the number of
elements in its left sub-tree + 1.
Ex:- In the tree above, the left size of node 3 is 2 (one element, 1, in its left
sub-tree) and the left size of node 25 is 3 (two elements, 15 and 20, in its left
sub-tree).

Q) Give the algorithm for binary search and determine its time complexity
by the step count method.
Ans:-
SEARCHING OF A BINARY SEARCH TREE: -

Recursive search of a binary search tree

Algorithm Search(t, x)
// t is the root of the (sub)tree
// x is the element to be searched for
{
  if (t = 0) then return 0;
  else if (x = (t → data)) then return t;
  else if (x < (t → data)) then return Search((t → lchild), x);
  else return Search((t → rchild), x);
}
Ex:- Searching for 5 in the tree with root 20 (node 1000), whose left child is 15
(node 2000, with children 10 at node 3000 and 18 at node 4000) and whose right
child is 25 (node 5000, with right child 30 at node 6000); 10 has a left child 2
(node n):
Search(1000, 5) calls Search(2000, 5), which calls Search(3000, 5), which calls
Search(n, 5) on the node holding 2; since 5 > 2 and 2 has no right child, the
search returns 0, i.e., 5 is not present in the tree.

Iterative Algorithm
Algorithm Search(x)

{
  found := false;
  t := tree;
  while ((t ≠ 0) and (not found)) do
  {
    if (x = (t → data)) then found := true;
    else if (x < (t → data)) then t := (t → lchild);
    else t := (t → rchild);
  }
  if (not found) then return 0;
  else return t;
}

Searching According to Rank: -

Algorithm Search(k)

{
  found := false; t := tree;
  while ((t ≠ 0) and (not found)) do
  {
    if (k = (t → leftsize)) then found := true;
    else if (k < (t → leftsize)) then t := (t → lchild);
    else
    {
      k := k - (t → leftsize);
      t := (t → rchild);
    }
  }

  if (not found) then return 0;
  else return t;
}

 The left size field of a node holds a value equal to the number of elements in its left sub-tree + 1.

 The argument k denotes that the kth smallest element is to be determined.

Ex:- Consider the tree with root 20 (node 1000, left size 4), left child 15
(node 2000, with children 10 at node 3000 and 18 at node 4000) and right child 25
(node 5000, left size 1, with right child 30 at node 6000). To find the k = 5th
smallest element:
Initially t = 1000. k = 5 > leftsize(20) = 4, so k := 5 - 4 = 1 and t := 5000.
Now k = 1 = leftsize(25), so found becomes true and the algorithm returns node
5000; its key 25 is the 5th smallest element (the sorted order is 10, 15, 18, 20,
25, 30).

Determine the frequency counts for all the statements in the following
two algorithm segments: -

(1) for i := 1 to n do
(2)   for j := 1 to n do
(3)     for k := 1 to n do
(4)       x := x + 1;

The statement 'for i' is executed n+1 times (n times with the condition true and
once more when it becomes false). For each value of i the statement 'for j' is
executed n+1 times, so its total frequency count is n(n+1); similarly the statement
'for k' is executed n²(n+1) times. The innermost statement x := x+1 is executed
n × n × n = n³ times, once for every combination of i, j and k.

Therefore the total number of steps is (n+1) + n(n+1) + n²(n+1) + n³ = 2n³ + 2n² + 2n + 1.

Q) Write an algorithm to insert an element into a binary search tree?


Ans:-
INSERTION IN TO A BINARY SEARCH TREE: -
To insert a new element we must first search the tree to check whether an
element with the key to be inserted already exists. If the search is successful,
the key is already present and insertion is not done. Otherwise, the new key is
inserted at the point where the search terminated.

Consider a binary search tree in which the root 30 has a right child 40.

Suppose we want to insert the element 80: the search for 80 is carried out and
terminates unsuccessfully at the node 40. As 80 is greater than 40, it is inserted
as the right child of 40.

[Figure: the tree before and after the insertion of 80.]
Algorithm Insert(x)

// insert x into the binary search tree

{
  found := false;
  p := tree;
  // search for x; q is the parent of p
  while ((p ≠ 0) and (not found)) do
  {
    q := p;  // save p
    if (x = (p → data)) then found := true;
    else if (x < (p → data)) then p := (p → lchild);
    else p := (p → rchild);
  }
  // perform insertion
  if (not found) then
  {
    p := new TreeNode;
    (p → lchild) := 0; (p → rchild) := 0;
    (p → data) := x;
    if (tree ≠ 0) then
    {
      if (x < (q → data)) then (q → lchild) := p;
      else (q → rchild) := p;
    }
    else tree := p;
  }
}
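A hedged C sketch of a binary search tree node together with search and insert (written recursively for brevity, unlike the iterative pseudocode above; the node and function names are illustrative assumptions):

#include <stdio.h>
#include <stdlib.h>

struct node {
    int data;
    struct node *lchild, *rchild;
};

/* Recursive search: returns the node holding x, or NULL if x is absent. */
struct node *search(struct node *t, int x) {
    if (t == NULL) return NULL;
    if (x == t->data) return t;
    if (x < t->data) return search(t->lchild, x);
    return search(t->rchild, x);
}

/* Insert x; duplicates are ignored. Returns the (possibly new) root. */
struct node *insert(struct node *t, int x) {
    if (t == NULL) {                             /* point where the search terminates */
        struct node *p = malloc(sizeof *p);
        p->data = x; p->lchild = p->rchild = NULL;
        return p;
    }
    if (x < t->data)      t->lchild = insert(t->lchild, x);
    else if (x > t->data) t->rchild = insert(t->rchild, x);
    return t;                                    /* x already present: no change */
}

int main(void) {
    int keys[] = {3, 7, 11, 1, 25, 15, 27, 35, 20, 43};
    struct node *root = NULL;
    for (int i = 0; i < 10; i++) root = insert(root, keys[i]);
    printf("25 %s\n", search(root, 25) ? "found" : "not found");
    return 0;
}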

Q)Write the algorithm for deletion into a binary search tree with an
example.
Ans:-
DELETION FROM A BINARY SEARCH TREE: -
When deleting a node from a binary search tree, 3 cases arise.
CASE-1: - If the node to be deleted is a leaf node, it can be deleted very easily
by setting the link field for that child to null in the parent node.

If we want to delete 35:

[Figure: the tree with root 30 (node 1000), left child 5 (node 2000, which has a
left child 2 at node 3000) and right child 40 (node 4000, which has left child 35
at node 5000 and right child 80 at node 6000). Deleting the leaf 35 simply sets the
lchild field of its parent 40 to null.]

3000 6000
CASE-2: - Deleting a node which has only one child. This is also quite
straightforward: the node can be deleted by assigning the address of the only child
of the node to be deleted to the parent of the node to be deleted.
If we want to delete the node 5, assign the address of the only child of 5, i.e.
node 3000, to the parent of 5, i.e. node 1000.

[Figure: node 5 (node 2000) has a single child 2 (node 3000), which in turn has a
left child 1 (node 7000). After deleting 5, the lchild field of the root 30
(node 1000) points to node 3000, so 2 takes the place of 5.]

CASE-3: - When the node to be deleted contains 2 children.

Step (i): - Copy the value of the largest element in the left sub-tree (or the
smallest element in the right sub-tree) into the node which is to be deleted.

STEP (ii): - Delete that largest (or smallest) element from its original position;
since it has at most one child, this deletion reduces to case 1 or case 2 (its
child, if any, is linked to its parent).

Suppose we want to delete 30 from the tree obtained above (root 30 with left child
2, which has left child 1, and right child 40, which has right child 80):

Step (i): the largest element in the left sub-tree, 2, is copied into the root node.
Step (ii): the old node holding 2 is deleted by linking its child 1 directly to the
root, giving a tree with root 2, left child 1 and right child 40 (which still has
right child 80).

Analysis: - If the height of the binary search tree is h, then search by key, search
by rank, insertion and deletion each take O(h) time.
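A hedged C sketch of deletion handling the three cases above; it uses the "smallest element in the right sub-tree" option for case 3 and relies on the struct node (and the stdio/stdlib headers) from the earlier sketch.

/* Delete x from the tree rooted at t; returns the new root. */
struct node *del(struct node *t, int x) {
    if (t == NULL) return NULL;                  /* x not present */
    if (x < t->data)       t->lchild = del(t->lchild, x);
    else if (x > t->data)  t->rchild = del(t->rchild, x);
    else {
        /* Case 1 and Case 2: at most one child */
        if (t->lchild == NULL || t->rchild == NULL) {
            struct node *child = (t->lchild != NULL) ? t->lchild : t->rchild;
            free(t);
            return child;                        /* parent link is re-assigned */
        }
        /* Case 3: two children - copy the smallest key of the right sub-tree */
        struct node *succ = t->rchild;
        while (succ->lchild != NULL) succ = succ->lchild;
        t->data = succ->data;
        t->rchild = del(t->rchild, succ->data);  /* delete the copied element */
    }
    return t;
}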

Q) What is a graph? Explain various concepts related to graph?


Ans:-
GRAPHS:
A graph is a collection of vertices and edges in which the set of vertices is non-empty.

[Figure: a graph on the four vertices V1, V2, V3, V4 with the six edges C1..C6.]
V(G) = {V1, V2, V3, V4} → vertex set
E(G) = {C1, C2, C3, C4, C5, C6} → edge set
|V(G)| = 4 → order of the graph
|E(G)| = 6 → size of the graph
A graph may also have an empty edge set: V(G) = {V1, V2, V3, V4}, E(G) = { },
|V(G)| = 4, |E(G)| = 0.

Directed Graph: - A graph in which there is a specific direction for each edge is
known as a directed graph.
Self Loop: - An edge connecting a vertex to itself is known as a self loop.

Parallel Edges: - If between any pair of vertices there is more than one edge, then
such edges are called parallel edges.

deg (V1) = 4, deg (V2) = 3, deg (V3) = 5, deg (V4) = 2

∑ deg V(G) = 4 + 3 + 5 + 2 = 14 = 2 × 7 = 2 × |E(G)|
(for a multigraph example with 7 edges)

Simple Graph: - A graph in which there are no self loops or parallel edges is called
a simple graph.

[Figure: a simple graph on the vertices V1..V5.]

Multi Graph: - A graph which contains either self loops or parallel edges or both is
called a multi graph.

Degree of a Vertex (non-directed graph): - The no. of edges incident on a vertex is
called the degree of the vertex.

Note: - (1) The degree of a self loop is counted as 2.
(2) The sum of the degrees of all the vertices in a graph is equal to 2 times the
no. of edges.
Degree of a Vertex (directed graph): - The no. of edges incident to (entering) a
particular vertex is called its in-degree, denoted deg+ G(V). The no. of edges
incident from (leaving) a particular vertex is called its out-degree, denoted
deg- G(V).

Ex: - For a directed graph on V1..V4 with the seven edges C1..C7:
deg+ (V1) = 2    deg- (V1) = 2
deg+ (V2) = 2    deg- (V2) = 1
deg+ (V3) = 2    deg- (V3) = 3
deg+ (V4) = 1    deg- (V4) = 1
Sum = 7 = |E|    Sum = 7 = |E|

In a directed graph the sum of the in-degrees of all the vertices equals the sum of
the out-degrees of all the vertices, and both are equal to the no. of edges.

Path: - A sequence of adjacent edges is known as a path.

Simple Path: - A path which does not contain a repetition of either edges or
vertices, except possibly for the end vertices; the length of the path must be at
least one.

Closed Path: - A path in which the initial and final vertices are the same is called
a closed path.

Cycle: - A closed path in which there is no repetition of either vertices or edges
is called a cycle.

Circuit: - A closed path in which there is no repetition of edges is called a circuit.

[Figure: an example graph on the vertices a, b, c, d, e, f, g, h, i, j used for the
classification below.]

Path              Simple Path   Closed Path   Circuit   Cycle
a-d-c-e-f-j-d-a   No            Yes           No        No
b-c-e-f-g-j-f-b   No            Yes           Yes       No
a-b-a             No            Yes           No        No
a-d-c-b-a         Yes           Yes           Yes       Yes
i-i               Yes           Yes           No        No
e-f-g-j-f-b       No            No            No        No

CONNECTED: - Two vertices are said to be connected if there is a path between the
two vertices.

Ex: - [Figure: a graph in which V1..V7 lie in one component and V8, V9 in another.]

V1 is connected to V2
V1 is connected to V3
V1 is connected to V6
V3 is connected to V7 and so on
V3 is not connected to V8
V4 is not connected to V8 and so on

CONNECTED GRAPH: - A graph in which every vertex is connected to every other vertex
is called a connected graph.

Ex: - [Figure: a connected graph on the vertices V1..V7.]

ADJACENCY: - Two vertices are said to be adjacent to each other if there is an edge
between them.

[Figure: the same graph on V1..V7, in which V1 has edges only to V4 and V2.]

V1 is adjacent to V4 and V2.
V1 is not adjacent to V3, V5, V6, V7 and so on.

Note: - If two vertices are adjacent then they will be connected the converse need
not be true.

SUB GRAPH: - A graph H is called a sub graph of another graph G if V(H) ⊆ V(G) and
E(H) ⊆ E(G).

Ex: - [Figure: a graph G on the vertices V1..V7 and a sub graph H1 of G containing a
subset of its vertices and edges.]

Not a Sub Graph: - [Figure: a graph on the same vertices that is not a sub graph of
G because it contains an edge not present in G.]

TREE: - A connected simple graph without any cycle is called a tree.

(Or)
A graph in which there is a unique path between any pair of vertices is called
a tree.

SPANNING TREE: -

A sub graph H of a graph G is called a spanning tree if (i) H includes all the
vertices of G and (ii) H is a tree.

Ex: - [Figure: a graph G on the vertices V1..V8 and one of its spanning trees.]

Q) Define graph. Explain the representation mechanisms of a graph?


Ans:-

REPRESENTATION OF GRAPH: -

(1) Adjacency Matrix


(2) Adjacency Lists
(3) Sequential (or) Array representation
(4) Adjacency multi Lists

[Figures: three example graphs. G1 is the complete undirected graph on the vertices
1, 2, 3, 4. G2 is the undirected tree on the vertices 1..7 in which 1 is adjacent to
2 and 3, 2 is adjacent to 4 and 5, and 3 is adjacent to 6 and 7. G3 is a directed
graph on the vertices 1, 2, 3 with the edges 1→2, 2→1 and 2→3.]
(1) Adjacency Matrix

G1:      1 2 3 4
      1  0 1 1 1
      2  1 0 1 1
      3  1 1 0 1
      4  1 1 1 0

G2:      1 2 3 4 5 6 7
      1  0 1 1 0 0 0 0
      2  1 0 0 1 1 0 0
      3  1 0 0 0 0 1 1
      4  0 1 0 0 0 0 0
      5  0 1 0 0 0 0 0
      6  0 0 1 0 0 0 0
      7  0 0 1 0 0 0 0

G3:      1 2 3
      1  0 1 0
      2  1 0 1
      3  0 0 0
(2) Adjacency Lists: -
Adjacency lists are represented using nodes of the form (data, link). There is one
head node per vertex, so the number of head nodes equals the number of vertices;
the chain attached to head node i lists the vertices adjacent to (or, for a directed
graph, adjacent from) vertex i.

G1:  1 → 2 → 3 → 4          G3:  1 → 2
     2 → 1 → 3 → 4               2 → 1 → 3
     3 → 1 → 2 → 4               3 → (empty)
     4 → 1 → 2 → 3

G2:  1 → 2 → 3               4 → 2
     2 → 1 → 4 → 5           5 → 2
     3 → 1 → 6 → 7           6 → 3
                             7 → 3
(3) Sequential (Or) Array representation: - (only for non-directed graphs) To
represent a graph using an array we need an array of size n + 2e + 1, where
n = no. of vertices and e = no. of edges. The first n+1 positions hold, for each
vertex, the starting position of its adjacency list within the array (the (n+1)th
entry marks the end); the remaining 2e positions hold the adjacency lists themselves.

G1: size = 4 + 2(6) + 1 = 17

position: 1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17
value:    6  9 12 15 18  2  3  4  1  3  4  1  2  4  1  2  3
          (index part)   adj. of 1 adj. of 2 adj. of 3 adj. of 4

G2: size = 7 + 2(6) + 1 = 20

position: 1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
value:    9 11 14 17 18 19 20 21  2  3  1  4  5  1  6  7  2  2  3  3
          (index part)            adj.1 adj. of 2 adj. of 3 and of 4, 5, 6, 7

ADJACENCY MULTI LISTS: -

Each edge is represented by one node of the form (vertex1, vertex2, link1, link2),
where link1 points to the next edge incident on vertex1 and link2 to the next edge
incident on vertex2. The no. of head nodes equals the no. of vertices and the no.
of edge nodes equals the no. of edges.

G1:
N1: (1, 2, N2, N4)
N2: (1, 3, N3, N4)
N3: (1, 4, 0,  N5)
N4: (2, 3, N5, N6)
N5: (2, 4, 0,  N6)
N6: (3, 4, 0,  0)
The edge lists per vertex are:
1. N1 → N2 → N3
2. N1 → N4 → N5
3. N2 → N4 → N6
4. N3 → N5 → N6

G2:
N1: (1, 2, N2, N3)
N2: (1, 3, 0,  N5)
N3: (2, 4, N4, 0)
N4: (2, 5, 0,  0)
N5: (3, 6, N6, 0)
N6: (3, 7, 0,  0)

Vertex Lists:

1. N1 → N2
2. N1 → N3 → N4
3. N2 → N5 → N6
4. N3
5. N4
6. N5
7. N6
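A hedged C sketch showing the two most common representations for the undirected graph G1 (the complete graph on four vertices): an adjacency matrix and adjacency lists built from an edge list. The array sizes and names are illustrative assumptions.

#include <stdio.h>
#include <stdlib.h>

#define N 4                                   /* number of vertices in G1 */

struct lnode { int vertex; struct lnode *link; };

int adj[N][N];                                /* adjacency matrix */
struct lnode *head[N];                        /* adjacency lists  */

void add_edge(int u, int v) {                 /* undirected edge (u, v), 0-based */
    adj[u][v] = adj[v][u] = 1;
    struct lnode *a = malloc(sizeof *a), *b = malloc(sizeof *b);
    a->vertex = v; a->link = head[u]; head[u] = a;
    b->vertex = u; b->link = head[v]; head[v] = b;
}

int main(void) {
    /* the six edges of G1 (vertices numbered 0..3 internally, printed as 1..4) */
    int e[6][2] = {{0,1},{0,2},{0,3},{1,2},{1,3},{2,3}};
    for (int i = 0; i < 6; i++) add_edge(e[i][0], e[i][1]);

    for (int u = 0; u < N; u++) {             /* print one matrix row per vertex */
        for (int v = 0; v < N; v++) printf("%d ", adj[u][v]);
        printf("\n");
    }
    for (int u = 0; u < N; u++) {             /* print the adjacency list of each vertex */
        printf("%d:", u + 1);
        for (struct lnode *p = head[u]; p; p = p->link) printf(" %d", p->vertex + 1);
        printf("\n");
    }
    return 0;
}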

Q) Define Priority queue and give an example?


Ans:-
PRIORITY QUEUE: -
A data structure that supports the operations of search minimum (or maximum), insert
and delete is called a priority queue. Ex: - Suppose we have a machine whose
services are to be sold and the same amount is charged from each user irrespective
of the time of usage. Then we maintain a priority queue of users in which the users
with the minimum service time are given higher priority.

Conversely, suppose we have a machine which provides a constant service time for
each user, whereas every user is willing to pay a different amount; then the queue
of users is maintained with the maximum amount as the priority.

Q) What is a heap? What are its applications?


Q) Define the data structure ‘Heap’?
Heaps: - A heap is a complete binary tree with the property that the value at each
node is at least as large as (or at least as small as) the values of its children.

If the parent is ≥ its children at each node, then it is called a max heap.

If the parent is ≤ its children at each node, then it is called a min heap.

Q) Write an algorithm for insertion of an element


into a heap.
Q) Describe the procedure to insert an element into a heap and explain
with an example.
Q) Describe the procedure to insert an element into a heap with the help
of algorithm.
Ans:-
Construction of Heaps: -
Heaps can be constructed in 2 ways

(1) Incremental Process


(2) Adjusting the complete binary tree
In the incremental process a complete binary tree satisfying the heap property is
constructed element by element, i.e., the nth element is always inserted at the end
of an array whose first (n-1) elements already represent a heap, and the entire
array is then readjusted to represent a heap with n elements.

Ex: - {40, 80, 35, 90, 45, 50, 70}

(i)   40
(ii)  insert 80: 80 > its parent 40, so they exchange → [80, 40]
(iii) insert 35: 35 < 80 → [80, 40, 35]
(iv)  insert 90: 90 > 40 and 90 > 80 → [90, 80, 35, 40]
(v)   insert 45: 45 < 80 → [90, 80, 35, 40, 45]
(vi)  insert 50: 50 > 35, 50 < 90 → [90, 80, 50, 40, 45, 35]
(vii) insert 70: 70 > 50, 70 < 90 → [90, 80, 70, 40, 45, 35, 50]

The final heap has 90 at the root, 80 and 70 as its children, 40 and 45 as the
children of 80, and 35 and 50 as the children of 70.

Algorithm Insert(a, n)

{ // insert a[n] into the heap stored in a[1 : n-1]
  i := n; item := a[n];
  while ((i > 1) and (a[i/2] < item)) do
  {
    a[i] := a[i/2];
    i := i/2;
  }
  a[i] := item;
  return true;
}

Observation: insert 90 into the heap a = [80, 45, 70, 40, 35, 50].

i = 7, item = 90. (i > 1) and (a[3] = 70 < 90) → true, so a[7] := 70, i := 3.
(i > 1) and (a[1] = 80 < 90) → true, so a[3] := 80, i := 1.
i > 1 is now false, so a[1] := 90.

Result: a = [90, 45, 80, 40, 35, 50, 70].

Analysis: -
The best case occurs when the new element to be inserted is smaller than its parent;
then the number of comparisons is only one.

Sample: - Insert 60 into a = [80, 45, 70, 40, 35, 50]. 60 is compared once with its
parent a[3] = 70 and stays in position 7, giving a = [80, 45, 70, 40, 35, 50, 60].

In the worst case the number of comparisons is proportional to the height of the
tree, i.e., the insertion of a new element takes O(log₂ n) comparisons.

Insert 100 into a = [80, 45, 70, 40, 35, 50, 60]: 100 is compared with 40, 45 and 80
in turn and moves up to the root.
No. of comparisons = 3 = log₂ 8.

→ Adjusting the complete binary tree: -

In the second process of constructing a heap we first construct a complete binary


tree with the given data and then we convert it in to heap using the following
algorithms.

Algorithm Heapify(a, n)

// Readjust the elements in a[1 : n] to form a heap.
{
  for i := [n/2] to 1 step -1 do Adjust(a, i, n);
}

Algorithm Adjust(a, i, n)

// The complete binary trees with roots 2i and 2i+1 are combined with node i
// to form a heap rooted at i. No node has an address greater than n or less than 1.
{
  j := 2i; item := a[i];
  while (j ≤ n) do
  {
    if ((j < n) and (a[j] < a[j+1])) then j := j+1;
    // compare left and right children and let j be the larger child
    if (item ≥ a[j]) then break;
    // a position for item is found
    a[j/2] := a[j]; j := 2j;
  }
  a[j/2] := item;
}

Ex: - a = [100, 119, 118, 171, 112, 151, 132];  Heapify(a, 7)

i = 3: Adjust(a, 3, 7): j = 6, item = 118. a[6] = 151 is the larger child
      (151 > a[7] = 132) and 118 < 151, so a[3] := 151, j := 12 > 7, and a[6] := 118.
      a = [100, 119, 151, 171, 112, 118, 132]
i = 2: Adjust(a, 2, 7): j = 4, item = 119. a[4] = 171 is the larger child and
      119 < 171, so a[2] := 171, j := 8 > 7, and a[4] := 119.
      a = [100, 171, 151, 119, 112, 118, 132]
i = 1: Adjust(a, 1, 7): j = 2, item = 100. a[2] = 171 is the larger child and
      100 < 171, so a[1] := 171, j := 4. Now a[4] = 119 is the larger of a[4], a[5]
      and 100 < 119, so a[2] := 119, j := 8 > 7, and a[4] := 100.
      a = [171, 119, 151, 100, 112, 118, 132]  (a max heap)

Analysis for Adjust: -

The Adjust algorithm takes three inputs: the array a, the position i of the parent
node, and the size n of the array.

Adjust converts the complete binary trees with roots 2i and 2i+1 into a heap rooted
at i by combining them with node i. The variable j first points to the left child of
the node to be adjusted (j = 2i); it is compared with the right child so that j
finally points to the larger of the two children. That larger child is compared with
the parent, and if the parent is smaller they are (logically) swapped. The process
continues until the entire sub-tree rooted at i has been converted to a heap.

The worst-case time for the Heapify algorithm is O(n). Heapify is therefore more
efficient for constructing a heap than repeated insertion: Insert takes O(log n)
comparisons in the worst case, so inserting all n elements takes O(n log n)
comparisons, whereas Heapify requires only O(n) element comparisons in total. The
worst-case time for Adjust is proportional to the height of the tree, i.e., in the
worst case Adjust takes O(log₂ n) element comparisons.

Q) Describe the procedure to delete an element from a heap and explain


with an example.
Q) What is a heap? Explain how to delete an element from the heap.

Ans:- Notes given in class.

Q) Write and explain the algorithm for creation of heap and fins its time
complexity in the worst case.
Q) Write and explain the algorithm for heap sort.
Q) Write an algorithm for heap sort.
Q) Describe about heapsort algorithm.
Q) Give the heap sort algorithm and trace it for an example sorting of ten
items. Q) Given ‘n’ elements stored in an array, it is required to sort them
in non-decreasing order. Write heapsort algorithm and illustrate with the
data {20,30,5,10,25,40,8}.
Q) Develop an algorithm for creating heap and hence explain heapsort
with an example.
Ans:-
Algorithm HeapSort(a, n)
// a[1 : n] contains n elements to be sorted.
// HeapSort rearranges them into non-decreasing order.

{
  Heapify(a, n);  // transform the array into a heap
  // repeatedly interchange the maximum element with the element
  // at the end of the remaining array and readjust the heap
  for i := n to 2 step -1 do
  {
    t := a[i]; a[i] := a[1]; a[1] := t;
    Adjust(a, 1, i-1);
  }
}

Simulate the action of heap sort on the following: -
100, 119, 118, 171, 112, 151, 132

Heapify(a, 7) gives the max heap a = [171, 119, 151, 100, 112, 118, 132].

i = 7: exchange a[1] and a[7] → [132, 119, 151, 100, 112, 118, 171];
       Adjust(a, 1, 6) → [151, 119, 132, 100, 112, 118, 171]
i = 6: exchange a[1] and a[6] → [118, 119, 132, 100, 112, 151, 171];
       Adjust(a, 1, 5) → [132, 119, 118, 100, 112, 151, 171]
i = 5: exchange a[1] and a[5] → [112, 119, 118, 100, 132, 151, 171];
       Adjust(a, 1, 4) → [119, 112, 118, 100, 132, 151, 171]
i = 4: exchange a[1] and a[4] → [100, 112, 118, 119, 132, 151, 171];
       Adjust(a, 1, 3) → [118, 112, 100, 119, 132, 151, 171]
i = 3: exchange a[1] and a[3] → [100, 112, 118, 119, 132, 151, 171];
       Adjust(a, 1, 2) → [112, 100, 118, 119, 132, 151, 171]
i = 2: exchange a[1] and a[2] → [100, 112, 118, 119, 132, 151, 171]

The heap sort algorithm uses Heapify to convert the given array of elements into a
max heap, and then repeatedly readjusts the remaining heap so that the elements of
the array end up in sorted (non-decreasing) order.

Analysis: - The worst-case time for Heapify is O(n), and each invocation of Adjust
requires O(log n) comparisons in the worst case. Since Adjust is invoked n-1 times
from the sorting loop, the worst-case time for heap sort is O(n log n).
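A hedged C translation of Adjust, Heapify and HeapSort (1-based indexing is kept by leaving a[0] unused; the function names are illustrative):

#include <stdio.h>

/* Combine the heaps rooted at 2i and 2i+1 with node i (1-based array a[1..n]). */
void adjust(int a[], int i, int n) {
    int j = 2 * i, item = a[i];
    while (j <= n) {
        if (j < n && a[j] < a[j + 1]) j++;   /* j points to the larger child */
        if (item >= a[j]) break;             /* position for item found      */
        a[j / 2] = a[j];                     /* move the larger child up     */
        j = 2 * j;
    }
    a[j / 2] = item;
}

void heapify(int a[], int n) {               /* build a max heap in a[1..n] */
    for (int i = n / 2; i >= 1; i--) adjust(a, i, n);
}

void heap_sort(int a[], int n) {
    heapify(a, n);
    for (int i = n; i >= 2; i--) {           /* move the current maximum to the end */
        int t = a[i]; a[i] = a[1]; a[1] = t;
        adjust(a, 1, i - 1);
    }
}

int main(void) {
    int a[] = {0, 100, 119, 118, 171, 112, 151, 132};   /* a[0] unused */
    heap_sort(a, 7);
    for (int i = 1; i <= 7; i++) printf("%d ", a[i]);   /* 100 112 118 119 132 151 171 */
    printf("\n");
    return 0;
}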

Determine the no. of step counts in the execution of the following


algorithm. used to compute the nth Fibonacci number

→ Algorithm Fibonacci (n)

// Compute the nth Fibonacci no/:


1,2   { if (n ≤ 1) then
3         write (n);
4       else
5,6     { f2 := 0; f1 := 1;
7         for i := 2 to n do
8,9       { f := f1 + f2;
10          f2 := f1; f1 := f;
11        }
12        write (f);
13      }
14    }

Case – 1: - The above algorithm has two execution parts. When n= 0 or 1 then the
line 2 and line 3 are executed each requires one step so the no/: of steps will be 2.

Case – 2: - When n > 1. Line 2 takes one step. Line 6 has 2 statements per execution
and is executed once, so its step count is 2. The for loop at line 7 is executed n-1
times (for i = 2 to n) plus one more time when the condition becomes false, so the
number of steps at line 7 is n. The statement at line 9 is executed (n-1) times,
requiring (n-1) steps. Line 10 is executed (n-1) times with 2 statements per
execution, requiring 2(n-1) steps. Line 12 requires 1 step.

Therefore the total no. of steps is

1 + 2 + n + (n-1) + 2(n-1) + 1 = 4 + n + (n-1) + (2n-2) = 4n + 1

→ Algorithm Fibonacci

// print the Fibonacci series


1 {
2 f1: = 0:f2: = 1
3 write (f1) ; write (f2);
4 for i := 3 to n do
5,6 { f3:= f1+f2;
7 write (f3);
8 f1:= f2; f2:=f3;
9 }
10 }

Analysis: -
Line 2: 2 steps
Line 3: 2 steps
Line 4: n-1 steps
Line 6: n-2 steps
Line 7: n-2 steps
Line 8: 2(n-2) steps
________
Total: 5n - 5 steps
Q) Define Hashing and explain several hashing techniques?
Ans:-
HASHING: -
Hashing uses a hash function to map keys in to positions in a table called hash
tables. The ideal hash table data structure is an array of some fixed size containing
the keys. When element ‘e’ has the key ‘k’ and if ‘f’ is the hash function then ‘e’ is
stored in the position f(k) of the table. To search for an element with key ‘k’ we
compute f(k) and see it there is an element at the position f(k) of the table. If so the
element is found. Otherwise the table does not contain an element with a given
key.

Each key is mapped into some number in the range 0 to tablesize-1 and is placed in
the appropriate cell. The mapping is called a hash function, which ideally should be
simple to compute and should ensure that any two distinct keys get different cells.
Since there are a finite number of cells and a virtually inexhaustible supply of
keys, this is clearly impossible, and thus a hash function is needed which
distributes the keys evenly among the cells.

COLLISION: - When an element is inserted, if it hashes to the same value as an


already inserted element then we have a collision. The two methods used for
resolving the collision are…

(i) Linear Open addressing (Linear probing)


(ii) Separate chaining (Linked probing)

LINEAR OPEN ADDRESSING: -

When the key range is too large we use a hash table whose size is smaller than the
range and a hash function that maps several different keys into the same position
of the hash table.

If hashing is done using division method then hash function will be of the form f(k)
= k % D where ‘k’ is the key, ‘D’ is the size of the hash table. The positions in the
hash table are indexed from 0 to d-1. Each position is called as a bucket. F(k) is
called as the home bucket for the element with key value ‘k’. Under favorable
circumstances the home bucket is the location of the element with key value ‘k’.
Ex: -

ht:  buckets 0..10, with 80 in bucket 3, 40 in bucket 7 and 65 in bucket 10.

The above example shows a hash table ht with eleven buckets numbered 0-10. The
divisor D here is 11. 80 is in position 3 because 80 % 11 = 3; similarly 40 % 11 = 7
and 65 % 11 = 10. Each element is in its home bucket. The remaining buckets in the
hash table are empty.

If we wish to enter 58 into the table, the home bucket will be f(58) = 58 % 11 = 3.
As bucket 3 is occupied by 80, we say a collision has occurred. In general a bucket
may contain space for more than one element, in which case a collision may not
create any difficulty; an overflow occurs only when there is no room in the home
bucket for the new element. Since in the above example each bucket has space for
only one element, collisions and overflows occur at the same time. Hence to insert
58 we search the table for the next available bucket (bucket 4) and place it there.
This method of handling overflows is called linear open addressing.

ht:  buckets 0..10, with 80 in bucket 3, 58 in bucket 4, 40 in bucket 7 and 65 in bucket 10.

The search for an element with key k begins at the home bucket f(k). If the element
is not found there, the search continues through successive buckets until one of the
following situations is encountered:

(i) a bucket containing an element with key k is reached, in which case the element
we are looking for is found;
(ii) an empty bucket is reached;
(iii) we return back to the home bucket.

A C sketch of this scheme is given below.
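A hedged C sketch of linear open addressing with the divisor D = 11; the value -1 marks an empty bucket, and the function names are assumptions made for the sketch.

#include <stdio.h>
#include <stdbool.h>

#define D 11                         /* number of buckets / divisor */
static int ht[D];                    /* -1 marks an empty bucket    */

void ht_init(void) { for (int i = 0; i < D; i++) ht[i] = -1; }

bool ht_insert(int k) {              /* returns false when the table is full */
    int home = k % D, b = home;
    do {
        if (ht[b] == -1) { ht[b] = k; return true; }
        b = (b + 1) % D;             /* probe the next bucket */
    } while (b != home);             /* stopped: back at the home bucket */
    return false;
}

int ht_search(int k) {               /* returns the bucket index or -1 */
    int home = k % D, b = home;
    do {
        if (ht[b] == -1) return -1;  /* empty bucket reached: not present */
        if (ht[b] == k) return b;
        b = (b + 1) % D;
    } while (b != home);
    return -1;
}

int main(void) {
    ht_init();
    ht_insert(80); ht_insert(40); ht_insert(65); ht_insert(58);
    printf("58 is in bucket %d\n", ht_search(58));   /* bucket 4 */
    return 0;
}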

SEPARATE CHAINING: -

In this hash table the keys are the first ten perfect squares and the hashing
function is hash(x) = x mod 10:

bucket 0: 0
bucket 1: 1 → 81
bucket 4: 4 → 64
bucket 5: 25
bucket 6: 16 → 36
bucket 9: 9 → 49
(buckets 2, 3, 7 and 8 are empty)

Whenever a collision occurs the key is placed in a new node and inserted into the
list of nodes that collide at the same home bucket, i.e., a linked list is
maintained for each bucket containing the keys whose addresses collide with that
home bucket.
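A hedged C sketch of separate chaining for the same example, hash(x) = x mod 10, with each bucket holding a singly linked list (names such as chain_insert are assumptions):

#include <stdio.h>
#include <stdlib.h>

#define B 10                               /* number of buckets */

struct chain { int key; struct chain *next; };
static struct chain *bucket[B];            /* all NULL initially */

void chain_insert(int key) {
    int h = key % B;                       /* home bucket */
    struct chain *p = malloc(sizeof *p);
    p->key = key;
    p->next = bucket[h];                   /* colliding keys share one list */
    bucket[h] = p;
}

int main(void) {
    for (int i = 0; i <= 9; i++) chain_insert(i * i);   /* first ten perfect squares */
    for (int h = 0; h < B; h++) {
        printf("%d:", h);
        for (struct chain *p = bucket[h]; p; p = p->next) printf(" %d", p->key);
        printf("\n");
    }
    return 0;
}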

Q) Briefly explain various hashing functions?


Ans:-
TYPES OF HASHING FUNCTIONS:-

1) The Division Method


2) The Mid-square Method
3) The Folding Method
4) Digit Analysis Method
5) The Length Depended Method
6) Multiplicative Hashing

1) Division Method: - H(x) = (x mod M) + 1

In mapping keys to addresses, the division method preserves, to a certain extent,
the uniformity that exists in the key set: keys which are close to each other or
clustered are mapped to distinct addresses.

In general it is uncommon for many keys to yield the same remainder when M is a
large prime number.

2) The Mid-square Method: - In this method a key is multiplied by itself, and the
address is obtained by selecting an appropriate number of bits or digits from the
middle of the square. Usually the number of bits or digits chosen depends on the
table size and consequently must fit into one computer word of memory. The same
positions in the square must be used for all products.

3) Folding Method: - In the folding method a key is partitioned into a number of
parts, each of which has the same length as the required address, with the possible
exception of the last part. The parts are then added together, ignoring the final
carry, to form the address.

For example, if the key 356942781 is to be transformed into a 3-digit address, the
key is divided into the parts 356, 942, 781:

  356
  942
  781
 -----
  079   (the carry 2 is ignored)
4) Digit Analysis Method: - A hashing function referred as digit analysis forms
addresses by selecting and shifting digits or bits of the original key.

For example a key 1234567890 is transferred to address 9542 by selecting the


digits in the positions 2459 and reversing their order.

5) Length Dependent Method: - This method is commonly used in table handling


applications. In this method the length of the key is used along with some portion of
the key to produce either a table address directly or an intermediate key is used.

6) Multiplicative Hashing: - The multiplicative hashing function is quite useful.
For a non-negative integral key x and a constant c such that 0 < c < 1, the function
is defined as
H(x) = ⌊m (c·x mod 1)⌋ + 1
Here (c·x mod 1) is the fractional part of c·x, and m is the table size.
 Determine the frequency counts for all the statements in the following:

(1) for i := 1 to n do
(2)   for j := 1 to i do
(3)     for k := 1 to j do
(4)       x := x + 1;

Statement (1) is executed n+1 times.
Statement (2) is executed Σ(i=1..n) (i+1) = n(n+1)/2 + n = (n² + 3n)/2 times.
Statement (3) is executed Σ(i=1..n) Σ(j=1..i) (j+1) = n(n+1)(n+5)/6 times.
Statement (4) is executed Σ(i=1..n) Σ(j=1..i) j = n(n+1)(n+2)/6 times.

Q) Define a Recursive algorithm? Explain the procedure to convert an


iterative algorithm to a recursive algorithm.
Ans:-
RECURSIVE ALGORITHMS:

An algorithm calling itself continuously until some termination condition is satisfied


is called as recursion.

There are 2 types of recursion

(i) Direct Recursion


(ii) Indirect Recursion

i) Direct Recursion: - A function calling itself is called as direct recursion.

ii) Indirect Recursion: - A function ‘A’ calls another function ‘B’ and the function
‘B’ inturn calls A recursively. This is called as indirect recursion.

Conversion of an iterative algorithm in to a recursive algorithm: -

Any algorithm return by using assignment, if then else, and an iterative loop (for,
while) can be written using assignment, if then else and recursion.

 Write an iterative algorithm to calculate sum of the digits of a given number

Algorithm Isod(n)

// n is the input
{
  sum := 0;
  while (n > 0) do
  {
    rem := n mod 10;
    sum := sum + rem;
    n := n div 10;
  }
  write (sum);
}

Algorithm Rsod(n)

// n is the input; returns the sum of the digits of n
{
  if (n = 0) then return 0;
  return (n mod 10) + Rsod(n div 10);
}
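The two algorithms above, written as hedged C functions for comparison:

#include <stdio.h>

int isod(int n) {                 /* iterative sum of digits */
    int sum = 0;
    while (n > 0) {
        sum += n % 10;
        n /= 10;
    }
    return sum;
}

int rsod(int n) {                 /* recursive sum of digits */
    if (n == 0) return 0;
    return (n % 10) + rsod(n / 10);
}

int main(void) {
    printf("%d %d\n", isod(1234), rsod(1234));   /* prints 10 10 */
    return 0;
}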

 CONVERSION OF RECURSIVE TO ITERATIVE: -

Let P denote a recursive (direct) function in which the parameters called by address
are the same in every call to P. We can translate P into a non-recursive function by
inserting statement labels and goto statements.

STEP – 1: - Declare a stack to hold the local variables, the parameters called by
value, and flags which indicate from where P is called (in case P is called from
more than one place). As the first executable statement of P, initialize the stack
to be empty by setting its counter to zero. The stack and the counter must be
treated as global variables.

STEP -2: - To enable each recursive call to start at the beginning of the original
function P, the first executable statement of the original ‘P’ should have a label
attached to it.

The following steps are to be performed at each place inside ‘P’ where
‘P’ calls itself.

STEP -3: - Make a new statement label Li (if this is the i th place where ‘P’ is called
recursively) and attach the label to the first statement after the call to P.

STEP – 4: - Push the integer i onto the stack. (This will convey, on return, that P
was called from the ith place.)

STEP: - 5: - Push all the local variables and the parameters called by value on to
the stack.

STEP – 6: - Set the dummy parameters called by value to the values given in the
new call to ‘P’

STEP – 7: - Replace the call to P with a goto to the statement label at the start
of P. At the end of P (or wherever P returns to its calling program) the following
steps should be done.
STEP – 8: - If the stack is empty then the recursion has finished and make a normal
return.

STEP – 9: - Otherwise pop the stack to restore the values of all local variables and
parameters called by value.

STEP – 10: - Pop an integer i from the stack and use it to go to the statement
labeled Li.
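As a small hedged illustration of this stack-based removal of recursion, the C sketch below replaces the recursive call of a factorial function with an explicit stack; the general label/goto scheme is specialised here to a single recursive call site, and the function names are assumptions.

#include <stdio.h>

/* Recursive original: fact(n) = (n <= 1) ? 1 : n * fact(n-1) */
long fact_rec(int n) { return (n <= 1) ? 1 : n * fact_rec(n - 1); }

/* Non-recursive equivalent: the parameter that each recursive call would
   receive is pushed on an explicit stack, then the pending multiplications
   are performed while popping (steps 4-10 of the scheme, specialised). */
long fact_iter(int n) {
    int stack[64], top = -1;                    /* explicit environment stack */
    while (n > 1) stack[++top] = n--;           /* descent: save the parameters */
    long result = 1;                            /* base case value */
    while (top >= 0) result *= stack[top--];    /* returns: resume the pending work */
    return result;
}

int main(void) {
    printf("%ld %ld\n", fact_rec(10), fact_iter(10));   /* both print 3628800 */
    return 0;
}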
Q) Explain how you convert a recursive procedure to an equivalent non-recursive
procedure?

A. Before moving to the process of converting a recursive function into a
non-recursive one, it is better to first understand the mechanism of each and
the commonalities and differences between them.

RECURSIVE FUNCTION: -

As the name itself says, a recursive function is a function which calls itself.

A function is said to be direct recursive if it calls itself in one of its
statements.
On the other hand, in indirect recursion a function A calls another function B,
which in turn calls function A. This process of calling continues until a base
condition is reached.
That is, a problem is subdivided into many parts and each part is handled by one
invocation of the function.
The subdivision of the main problem continues until a base condition is reached;
once the base condition is reached, the result produced by it is given back to the
earlier invocation of the function.
The result produced by this invocation is again given back to its calling function.
This is known as a divide and conquer strategy.
That means the main problem is divided into smaller parts, and later the results of
those parts are combined to produce the complete result.

ITERATIVE FUNCTION: -

An iterative function repeats the execution of the statements written in its body
until a specific condition is reached.

A for loop, for example, has 3 parts:

i) The first part initializes a control variable to a specific value.
ii) The second part checks the condition; the body is executed only when the
condition is true.
iii) The third part increments or decrements the control variable.

Q) Distinguish between Recursion and iteration with an example.


RECURSION:
1) It is based on a selection (control) structure.
2) It uses control structures such as if, if-else, switch.
3) It achieves repetition by the function calling itself.
4) It terminates when the base case is reached.
5) It becomes infinite if the recursion step does not reduce the problem.

ITERATION:
1) It is based on a repetition (control) structure.
2) It uses repetitive structures such as for, while or do-while loops.
3) It achieves repetition by explicitly using a repetitive structure.
4) It terminates when the loop condition fails.
5) It becomes infinite if the loop termination condition never fails.

RECURSIVE FUNCTION:
Algorithm RFibonacci(n)
// return the nth Fibonacci number using recursion
{
  if ((n = 0) or (n = 1)) then return n;
  else return RFibonacci(n-1) + RFibonacci(n-2);
}

ITERATIVE FUNCTION:
Algorithm Fibonacci(n)
// compute the nth Fibonacci number iteratively
{
  if (n ≤ 1) then write (n);
  else
  {
    f2 := 0; f1 := 1;
    for i := 2 to n do
    {
      f := f1 + f2;
      f2 := f1; f1 := f;
    }
    write (f);
  }
}

OTHER IMPORTANT QUESTIONS: -

1. What is an adjacency matrix explain with the help of an example.


2. What are the different representations of graphs? Discuss with
suitable examples.
3. Briefly explain about adjacency matrices and lists.
4. Explain with the help of an example the two ways of representing
graphs.

Reference Books:-

1. Fundamentals of Computer Algorithms, Ellis Horowitz & Sartaj Sahni


BEST OF LUCK
