DAA 1
Introduction
Q) Define an Algorithm. What are the properties of algorithms?
Ans
Algorithm (Definition):-
An algorithm is a finite set of instructions that, if followed, accomplishes a particular task.
Characteristics of an algorithm (properties):-
Every algorithm must satisfy the following criteria:
1. Input: zero or more quantities are externally supplied.
2. Output: at least one quantity is produced.
3. Definiteness: each instruction is clear and unambiguous.
4. Finiteness: the algorithm terminates after a finite number of steps for all cases.
5. Effectiveness: every instruction is basic enough that it can, in principle, be carried out by hand.
Designing Algorithms:-
Validation of algorithms:-
Analyzing Algorithms:-
Profiling is the process of executing a correct program on data sets with real
data and measuring the time and space it consumes. This is also called a
performance profile. The timing figures are useful because they may confirm and point
out logical places to perform useful optimization. Profiling is done on programs that
have been devised, coded, proved correct and debugged on a computer.
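As an illustration, a minimal C sketch of such a timing measurement using the standard clock() routine (the workload function and its argument are made up for the example):

#include <stdio.h>
#include <time.h>

/* Hypothetical workload to be profiled. */
static long work(long n) {
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += i % 7;
    return sum;
}

int main(void) {
    clock_t start = clock();              /* CPU time before the run */
    long result = work(10000000L);
    clock_t end = clock();                /* CPU time after the run  */
    double seconds = (double)(end - start) / CLOCKS_PER_SEC;
    printf("result=%ld time=%.3f s\n", result, seconds);
    return 0;
}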
Algorithm Specification:-
PseudoCode Conventions:-
Node = record
{
    datatype_1 data_1;
    ...
    datatype_n data_n;
}
Example:
A: for i := 1 to n step 1 do
       x := x + y;
B: for i := 1 to n do
       for j := 1 to n do
           x := x + y;
In program segment 'A' the step count of the statement x := x + y is one and its
frequency count is n. Similarly, for program segment 'B' the step count is one and
the frequency count is n × n = n².
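A small C sketch of the two segments, instrumented with a counter (the counter variable is added here purely for illustration):

#include <stdio.h>

int main(void) {
    int n = 10;
    long count = 0;                 /* frequency counter */
    int x = 0, y = 1;

    /* Segment A: the statement runs n times */
    for (int i = 1; i <= n; i++) {
        x = x + y;
        count++;
    }
    printf("segment A frequency: %ld\n", count);    /* prints n = 10 */

    /* Segment B: the statement runs n*n times */
    count = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++) {
            x = x + y;
            count++;
        }
    printf("segment B frequency: %ld\n", count);    /* prints n*n = 100 */
    return 0;
}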
Performance Analysis:-
The space complexity of a program is the amount of memory the program
needs to run to completion. The space complexity can be used to make decisions
about memory, such as whether sufficient memory is available to run the program.
If the system is a multiprocessing system, then the space complexity becomes an
important aspect.
The time complexity of a program is the amount of time needed by the
program to run to completion. The time complexity is calculated for reasons such
as comparing programs and predicting how the run time grows with the input.
Space Complexity:-
The memory requirements for most programs are made up of the following:
1. Instruction Memory/Space
2. Data Memory/Space
3. Environmental Stack
Instruction Memory:- It is the space needed to store the compiled version of the
program instructions. This depends on factors such as the compiler, compilation
options, the target computer etc. The nature of this space is static.
Data Memory:- It is the memory needed to store all the constant and variable
values. It also includes the memory allocated by the dynamic memory allocation
process. The amount of space required for a structure variable can be obtained by
adding the space requirements of all its components.
Environmental Stack:- It is the space used to save the information needed to
resume partially completed functions. Each time a function is invoked, its return
address and the values of its local variables and formal parameters are saved on
this stack.
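For instance, in C the space requirement of a structure can be inspected with sizeof; the Student structure below is a made-up example (actual sizes and padding are compiler dependent):

#include <stdio.h>

/* A hypothetical structure: its data space is roughly the sum of its
   components' sizes, plus any alignment padding the compiler adds. */
struct Student {
    int    id;      /* typically 4 bytes */
    double marks;   /* typically 8 bytes */
    char   grade;   /* 1 byte            */
};

int main(void) {
    printf("int: %zu, double: %zu, char: %zu\n",
           sizeof(int), sizeof(double), sizeof(char));
    printf("struct Student: %zu bytes\n", sizeof(struct Student));
    return 0;
}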
Time Complexity:-
The time complexity of a program mainly depends upon the speed of the system
and the number of instructions present in the program. It also depends on the
length of each program instruction.
Let T(P) be the time taken for program 'P'. It is the sum of the compile time and
the run time. But the compilation of a program is needed only once; after the
program is compiled there is no need to recompile it unless changes are made.
So, after the first time, the program is generally executed without compilation.
Hence we usually concern ourselves with just the run time of the program.
The run time depends upon a number of factors, all of which are to be considered
before calculating it. For this purpose two techniques are adopted:
1. Operation counts
2. Step counts
A Program Step is defined as a syntactically or semantically meaningful
segment of a program that has an execution time independent of the instance
characteristics.
Operation Counts:-
The time complexity of a program can be estimated by counting the occurrences
of selected operations, such as the number of additions, subtractions,
multiplications etc.
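A small C illustration of operation counting (the function and names are illustrative, not from the notes): finding the maximum of an array takes exactly n-1 element comparisons.

#include <stdio.h>

/* Returns the largest element of a[0..n-1]; *cmps receives the number
   of element comparisons performed (exactly n-1 for this algorithm). */
int max_of(const int a[], int n, long *cmps) {
    int best = a[0];
    *cmps = 0;
    for (int i = 1; i < n; i++) {
        (*cmps)++;
        if (a[i] > best)
            best = a[i];
    }
    return best;
}

int main(void) {
    int a[] = {40, 80, 35, 90, 45, 50, 70};
    long cmps;
    int m = max_of(a, 7, &cmps);
    printf("max=%d comparisons=%ld\n", m, cmps);  /* max=90 comparisons=6 */
    return 0;
}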
Step Counts:-
The drawback of the operation counts method is that only the time complexity of
the selected operations is considered; the time spent on all other instructions and
operations is omitted. In the step count method we account for the time spent in
all parts of the program. The step count is a function of the instance characteristics
(variables), so during this process we calculate the counts per variable. After the
relevant instance characteristics have been selected we can define a step: a Step
is any computation unit that is independent of those characteristics. When analyzing
a recursive program, step counts are calculated using recursive formulas.
When the chosen parameters are not sufficient to determine the step count exactly,
we make use of three kinds of step counts.
1. The best case step count is the minimum no. of steps that can be executed
for the given parameters.
2. The worst case step count is the maximum no. of steps that can be executed
for the given parameters.
3. The average step count is the average no. of steps executed on instances
with the given parameters.
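A short C illustration using sequential search (an assumed example, not from the notes): searching for the first element gives the best case step count of 1, searching for the last gives the worst case of n, and over all successful searches the average is (n+1)/2.

#include <stdio.h>

/* Sequential search; returns the index of x in a[0..n-1] or -1.
   The loop body is the "step" being counted. */
int seq_search(const int a[], int n, int x, int *steps) {
    *steps = 0;
    for (int i = 0; i < n; i++) {
        (*steps)++;
        if (a[i] == x) return i;
    }
    return -1;
}

int main(void) {
    int a[] = {10, 15, 18, 20, 25, 30};
    int steps;
    seq_search(a, 6, 10, &steps);   /* best case: x is first */
    printf("best case steps: %d\n", steps);     /* 1 */
    seq_search(a, 6, 30, &steps);   /* worst case: x is last */
    printf("worst case steps: %d\n", steps);    /* 6 */
    return 0;
}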
Asymptotic Notations:-
Important reasons for determining the operation and step counts are
1. To compare the time complexities.
2. To predict the growth in runtime by changing inputs.
Neither of these counts yields an accurate measure of time complexity: when we
use operation counts we focus only on certain key operations and ignore all of the
other operations, and similarly, while using step counts we only consider certain
variables and ignore other aspects of the program.
There are notations that enable us to make meaningful statements about
time and space complexities. These are called Asymptotic Notations, and they are
used to describe the behavior of the time and space complexities.
The most commonly used notations are O (Big "oh"), o (Small "oh"),
Ω (Big omega), ω (Little omega) and Θ (Theta).
O (Big "oh"):- This notation provides an upper bound for a function. It is
defined as follows:
The function f(n) = O(g(n)) (read as "f of n is big oh of g of n") iff there exist
positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.
Here f(n) is the function of the time or space complexity, and g(n) is a function of
step counts or operation counts.
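For example, f(n) = 3n + 2 = O(n): 3n + 2 ≤ 4n for all n ≥ 2, so the definition is satisfied with c = 4 and n0 = 2.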
Ω (Omega):- This notation provides a lower bound for a function. It is
defined as follows:
The function f(n) = Ω(g(n)) (read as "f of n is omega of g of n") iff there exist
positive constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0.
Θ (Theta):-
This notation is used when the function f is bounded both from above and below.
It is defined as follows:
The function f(n) = Θ(g(n)) (read as "f of n is theta of g of n") iff there exist
positive constants c1, c2 and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
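For example, 3n + 2 = Θ(n): 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, so the definition is satisfied with c1 = 3, c2 = 4 and n0 = 2. The same inequalities also show that 3n + 2 = Ω(n).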
o (Small "oh") and ω (Little omega):- Left for the purpose of the student's study.
Stack:-
Stacks are used to implement LIFO mechanisms. Insertions and deletions can
be done only from one end called the top. To implement a stack we need an array
and a variable top.
[Figure: a stack implemented as an array with positions 0-4; the variable top marks the topmost element.]
Algorithm Push(item)
// Push item on to the stack; n is the capacity of the stack.
// Returns true if successful, else returns false.
{
if (top ≥ n-1) then
{
write (“ stack is full”);
return false;
}
else
{
top:=top+1;
stack[top]:=item;
return true;
}
}
Analysis:-
The time taken for inserting an element into the stack will be constant
because the same set of instructions (top=top+1 and stack [top] =item) are
executed every time we insert an element. Also the memory requirement for each
and every element would be the same. Therefore the complexity of push
operation can be given as O(1).
Pop Operation:-
Algorithm Pop(item)
// Pop the topmost element of the stack into item.
// Returns true if successful, else returns false.
{
if (top < 0) then
{
write (“ stack is Empty”);
return false;
}
else
{
item:=stack[top];
top:=top-1;
return true;
}
}
Analysis:-
The time taken for deleting an element from the stack will be constant
because the same set of instructions is executed every time we delete an
element. Also the memory requirement for each and every element would be the
same. Therefore the complexity of pop operation can be given as O(1).
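The two algorithms above translate directly into C. The following is a minimal sketch (the capacity N and the demonstration in main are assumed for the example):

#include <stdio.h>
#include <stdbool.h>

#define N 100               /* assumed stack capacity */

static int stack[N];
static int top = -1;        /* -1 means the stack is empty */

/* Push: O(1) - the same instructions run for every element. */
bool push(int item) {
    if (top >= N - 1) { printf("stack is full\n"); return false; }
    stack[++top] = item;
    return true;
}

/* Pop: O(1). *item receives the removed element. */
bool pop(int *item) {
    if (top < 0) { printf("stack is empty\n"); return false; }
    *item = stack[top--];
    return true;
}

int main(void) {
    int v;
    push(10); push(20);
    while (pop(&v))
        printf("%d\n", v);      /* prints 20 then 10 */
    return 0;
}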
Q) What is a Queue?
Queues:-
Queue is a FIFO structure in which insertions are done at one end (Rear)
and deletions at the other end (Front). An array is used to implement a queue,
together with two integer variables representing Front and Rear.
When the queue is empty, Front < 0 and Rear < 0.
When the queue is full, Rear ≥ size-1.
Both insertion and deletion can be done on Queues.
Design:-
Algorithm QInsert (Q, Front, Rear, n, Item)
//Insert is used to insert an element into Queue.
//n is the size of Queue
//item is used as input to hold the element to be inserted into queue.
//Front and Rear have been set to -1 prior to the first invocation.
//algorithm returns true if successful, else returns false.
{
if (Rear ≥ n-1) then
{
    write ("Queue is Full");
    return false;
}
else
{
    Rear := Rear + 1;
    Q[Rear] := item;
    return true;
}
}
Analysis:-
Since the same fixed set of instructions is executed on every call, the time and
space complexity of QInsert is O(1).
Circular Queue:-
In a circular queue the array positions are treated as a circle: position 0 follows
position size-1.
[Figure: a circular queue of size 6 (positions 0-5) holding 30, 40, 50 and 60, with f marking the front and r the rear.]
If we want to insert an element now, we have to reinitialize r := 0 and insert the
element at Q[r]. After inserting 60 and 70 in this way the array looks like:
[Array: positions 0 and 1 hold 60 and 70, positions 4 and 5 hold 40 and 50; r = 1, f = 4.]
For an empty circular queue with r = -1 and f = 0:
(r+1) % size = (-1+1) % size = 0 % 6 = 0 = f
So (r+1) % size = f holds in two cases:
(i) when the circular queue is full
(ii) when the circular queue is empty
* If (((r+1) % size == f) and (r != -1)) then the circular queue is full.
* If (((r+1) % size == f) and (r == -1)) then the circular queue is empty.
Circular queue also follows FIFO and we need an array and two variables. The
operations which can be performed on circular queue are insertions and deletions.
Deletion:- If (r+1) % size == f and r == -1, the queue is empty and deletion is not
possible. Otherwise, if f == size-1, reinitialize f := 0 after removing the element in
the last position; if f == r, the queue contains only one element, so reinitialize
f := 0 and r := -1 to delete it; otherwise, increment f by 1.
Design:-
Algorithm Insert(item)
// Insert an item into the circular queue.
// Returns true if successful, else returns false.
// item is used as input.
{
    if (((r+1) mod size = f) and (r ≠ -1)) then
    {
        write ("circular queue is full");
        return false;
    }
    else if (r = size-1) then
    {
        r := 0;
        queue[r] := item;
        return true;
    }
    else
    {
        r := r + 1;
        queue[r] := item;
        return true;
    }
}
Analysis:- Since the time and memory requirements for the insertion of an element
into the queue are constant, the time and space complexities of the insert algorithm
are O(1).
Design:-
Algorithm Delete(item)
// Delete an element from the circular queue.
// Returns true if successful, else returns false.
// item is used as output.
{
    if (((r+1) mod size = f) and (r = -1)) then
    {
        write ("circular queue is empty");
        return false;
    }
    else if (f = r) then  // only one element
    {
        item := queue[f];
        f := 0; r := -1;
        return true;
    }
    else if (f = size-1) then
    {
        item := queue[f];
        f := 0;
        return true;
    }
    else
    {
        item := queue[f];
        f := f + 1;
        return true;
    }
}
Analysis:- Since the time requirement for the deletion of an element is constant,
the time complexity of the delete algorithm is O(1).
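A hedged C sketch of the circular queue (SIZE = 6 as in the example; the single modulus expression in cq_insert covers the r = size-1 wrap-around case that the pseudocode handles separately):

#include <stdio.h>
#include <stdbool.h>

#define SIZE 6

static int queue[SIZE];
static int f = 0, r = -1;       /* front and rear as in the notes */

/* Insert: O(1). Full when (r+1) % SIZE == f and r != -1. */
bool cq_insert(int item) {
    if ((r + 1) % SIZE == f && r != -1) {
        printf("circular queue is full\n");
        return false;
    }
    r = (r + 1) % SIZE;          /* one modulus replaces the r == SIZE-1 case */
    queue[r] = item;
    return true;
}

/* Delete: O(1). Empty when r == -1 (with f == 0). */
bool cq_delete(int *item) {
    if (r == -1) {
        printf("circular queue is empty\n");
        return false;
    }
    *item = queue[f];
    if (f == r) { f = 0; r = -1; }      /* removed the only element */
    else        { f = (f + 1) % SIZE; }
    return true;
}

int main(void) {
    int v;
    for (int i = 1; i <= 7; i++)
        cq_insert(i * 10);       /* the 7th insert reports "full" */
    while (cq_delete(&v))
        printf("%d ", v);        /* 10 20 30 40 50 60 */
    printf("\n");
    return 0;
}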
[Figure: a tree with root A; A's children are B, C and D; the next level contains E, F, G and H; the leaves are I, J, K, L, M, N, O and P.]
5. Depth of a node:- The depth of a node is the length of its path up to the root.
In the tree shown above, node E has depth 2 and node M has depth 3. The root
itself has depth 0.
6. Level of a tree:- The level of a tree means all the nodes at a given depth. In
the tree with root A shown above, level 2 consists of the set of node {E, F, G, H}.
7. Height of a tree:- The height of a tree is the greatest depth among all its
nodes. The tree shown above has height 3.
8. Singleton Tree:- The tree whose root is its only node is called a singleton tree.
The height of a singleton tree is Zero.
9. Empty tree:- The tree with zero nodes is called the empty tree. The height of
an empty tree is defined to be -1.
10. Degree of a node:- The degree of a node is the number of its children. In the
above example, B has degree 1, D has degree 0, and H has degree 5.
11. Degree of a tree:- The degree of a tree is the maximum of the degrees of its
nodes.
12. Ancestor:- For each node x, let P(x) denote the path from x to the root of the
tree. For example in the tree shown here, P(M)=(M,H,C,A). Except for x itself, the
nodes in P(x) are called the ancestors of x. For example, H, C, and A are the
ancestors of M. The root of a tree is the ancestor of all other nodes, and it is the
only node that has no ancestors.
13. Descendant:- We say that x is a descendant of y if y is an ancestor of x. In
this example, M is a descendant of C, so are F, G, H, K, L, N, O and P. All nodes
except the root itself are descendants of the root node.
Q) Briefly explain about Binary Trees.
Q) What is a Binary Tree?
Binary Tree:-
A binary tree T is defined as a finite set of elements, called nodes, such that
T is empty (called the null tree or empty tree), or
T contains a distinguished node R, called the root of T, and the
remaining nodes of T form an ordered pair of disjoint binary trees T1 and T2.
(or)
A binary tree is a finite (possibly empty) collection of elements. When the
binary tree is not empty, it has a root element and the remaining elements (if any)
are partitioned into two binary trees, which are called the left and right sub trees
of T.
Binary tree vs. tree:
1. Each element in a binary tree has exactly two sub trees (one or both may be
empty). Each element in a tree can have any number of sub trees.
2. The sub trees of each element in a binary tree are ordered as left and right
sub trees. The sub trees in a tree are not ordered.
[Figure: example binary trees.]
In a binary search tree it is defined as the height of the left sub-tree of the node + 1.
[Figure: example binary search trees.]
Q) Give the algorithm for binary search and determine its time complexity
by the step count method.
Ans:-
SEARCHING OF A BINARY SEARCH TREE: -
Iterative Algorithm:
Algorithm Search(x)
{
    found := false;
    t := tree;
    while ((t ≠ 0) and (not found)) do
    {
        if (x = (t → data)) then found := true;
        else if (x < (t → data)) then t := (t → lchild);
        else t := (t → rchild);
    }
    if (not found) then return 0;
    else return t;
}
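A C version of this iterative search might look as follows (the node type and the tree built in main mirror the example tree that follows, and are otherwise assumptions):

#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    int data;
    struct Node *lchild, *rchild;
} Node;

/* Iterative BST search, as in the algorithm above:
   returns the node containing x, or NULL (the "0" of the pseudocode). */
Node *search(Node *tree, int x) {
    Node *t = tree;
    while (t != NULL) {
        if (x == t->data)       return t;
        else if (x < t->data)   t = t->lchild;
        else                    t = t->rchild;
    }
    return NULL;
}

Node *mk(int v) {
    Node *n = malloc(sizeof(Node));
    n->data = v; n->lchild = n->rchild = NULL;
    return n;
}

int main(void) {
    /* the example tree: 20 with children 15 and 25; 15 has 10 and 18; 25 has 30 */
    Node *root = mk(20);
    root->lchild = mk(15); root->rchild = mk(25);
    root->lchild->lchild = mk(10); root->lchild->rchild = mk(18);
    root->rchild->rchild = mk(30);
    Node *hit = search(root, 18);
    if (hit) printf("found %d\n", hit->data);
    else     printf("not found\n");
    return 0;
}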
Ex:-
[Figure: a binary search tree; the root 20 is at address 1000, its left child 15 at 2000 and its right child 25 at 5000; 15 has children 10 (3000) and 18 (4000); 25 has right child 30 (6000).]
Trace (search by rank, k = 5):
Initially t = 1000.
while (t ≠ 0 and not found) – true; k := 5 - 3 = 2; t := 5000
while (t ≠ 0 and not found) – true; k := 2 - 1 = 1; t := 6000
while (t ≠ 0 and not found) – true; found := true
Out of while.
Return 6000.
Determine the frequency counts for all the statements in the following
two algorithm segments: -
(1) for i := 1 to n do
        for j := 1 to n do
            for k := 1 to n do
                x := x + 1;
The outer for statement executes n+1 times (the condition is true n times and
becomes false on the (n+1)-th test). For each value of i the middle for statement
executes n+1 times, i.e. n(n+1) times in total, and likewise the innermost for
statement executes n²(n+1) times in total. The assignment statement executes
n × n × n = n³ times, when all three for conditions are true.
[Figure: a binary search tree containing 30 and 40.]
Suppose we want to insert the element 80. A search is carried out for the element,
and the search terminates unsuccessfully at the node 40. As 80 is greater than 40,
it is inserted as the right child of 40.
[Figure: the tree after the insertion, with 80 as the right child of 40.]
Algorithm Insert(x)
// Inserts the element x into the binary search tree.
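The body of the insertion algorithm is not reproduced in these notes; a minimal C sketch consistent with the description above (search unsuccessfully, then attach the new node to the node where the search stopped) could be:

#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    int data;
    struct Node *lchild, *rchild;
} Node;

/* Insert x: walk down as in search; when the search falls off the
   tree, hang a new node on the parent where it terminated. */
Node *insert(Node *root, int x) {
    Node *n = malloc(sizeof(Node));
    n->data = x; n->lchild = n->rchild = NULL;
    if (root == NULL) return n;         /* empty tree: n becomes the root */
    Node *p = root, *parent = NULL;
    while (p != NULL) {                 /* unsuccessful search for x */
        parent = p;
        p = (x < p->data) ? p->lchild : p->rchild;
    }
    if (x < parent->data) parent->lchild = n;
    else                  parent->rchild = n;
    return root;
}

/* Example from the notes: the search for 80 ends at 40,
   and 80 > 40, so 80 becomes the right child of 40. */
int main(void) {
    Node *root = NULL;
    root = insert(root, 30);
    root = insert(root, 40);
    root = insert(root, 80);
    printf("right child of 40: %d\n", root->rchild->rchild->data);  /* 80 */
    return 0;
}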
Q)Write the algorithm for deletion into a binary search tree with an
example.
Ans:-
DELETION FROM A BINARY SEARCH TREE: -
When deleting a node from a binary search tree 3 cases arise.
CASE-1:- If the node to be deleted is a leaf node, it can be deleted very easily by
setting the corresponding child link field in its parent node to null.
If we want to delete 35:
[Figures: the tree before and after the deletion. The root 30 has children 5 and 40, and 40 has children 35 and 80; setting the left-child link of 40 to null removes the leaf 35. Further figures illustrate deleting a node that has one child, where the child's address is copied into the parent's link field.]
CASE-3 (node with two children):-
Step (i):- Copy the value of the largest child in the left sub-tree (or of the smallest
child in the right sub-tree) into the node which is to be deleted.
Step (ii):- Delete that child node (the largest in the left sub-tree or smallest in the
right sub-tree): copy the address of its own child, if any, into its parent's link field,
and then remove it.
[Figures: the tree before and after deleting a node with two children.]
Analysis:- If the height of the binary search tree is n, then search by key, search by
rank, insertion and deletion all take O(n) time.
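A hedged C sketch covering the three deletion cases (a recursive formulation is used here for brevity; the notes describe the same steps iteratively):

#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    int data;
    struct Node *lchild, *rchild;
} Node;

/* Delete x from the tree rooted at root and return the new root.
   Case 1: leaf - unlink it.  Case 2: one child - splice the child in.
   Case 3: two children - copy the largest value of the left sub-tree
   into the node, then delete that child (steps (i) and (ii) above). */
Node *delete_node(Node *root, int x) {
    if (root == NULL) return NULL;
    if (x < root->data)       root->lchild = delete_node(root->lchild, x);
    else if (x > root->data)  root->rchild = delete_node(root->rchild, x);
    else {
        if (root->lchild == NULL || root->rchild == NULL) {  /* cases 1 and 2 */
            Node *child = root->lchild ? root->lchild : root->rchild;
            free(root);
            return child;
        }
        Node *p = root->lchild;                  /* case 3 */
        while (p->rchild != NULL)
            p = p->rchild;                       /* largest in left sub-tree */
        root->data = p->data;                    /* step (i): copy the value */
        root->lchild = delete_node(root->lchild, p->data);   /* step (ii) */
    }
    return root;
}

Node *mk(int v) { Node *n = malloc(sizeof *n); n->data = v; n->lchild = n->rchild = NULL; return n; }

int main(void) {
    Node *root = mk(30);
    root->lchild = mk(5); root->rchild = mk(40);
    root->rchild->lchild = mk(35); root->rchild->rchild = mk(80);
    root = delete_node(root, 35);    /* case 1: 35 is a leaf */
    printf("40's left child is now %s\n",
           root->rchild->lchild ? "present" : "null");
    return 0;
}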
[Figure: a graph with vertices V1, V2, V3, V4 and edges e1-e6.]
V(G) = {V1, V2, V3, V4} → Vertex set
E(G) = {e1, e2, e3, e4, e5, e6} → Edge set
|V(G)| = 4 → Order of the graph
Directed Graph: - A graph in which there is a specific direction for each edge is
known as directed graph.
Self Loop:- An edge connecting a vertex to itself is known as a self loop.
Parallel Edges:- If there is more than one edge between any pair of vertices, then
such edges are called parallel edges.
[Figure: a graph on vertices V1-V5 containing a self loop and parallel edges.]
Multi Graph:- A graph which contains either self loops or parallel edges or both is
called a multi graph.
The number of edges incident from a particular vertex is called the out degree of
the vertex, denoted deg⁺G(V); the number of edges incident to a vertex is called
its in degree, denoted deg⁻G(V).
Ex:-
[Figure: a directed graph with the out degree deg⁺G(V) and in degree deg⁻G(V) marked for each vertex.]
In a directed graph the sum of the in degrees of all the vertices is equal to the sum
of the out degrees of all the vertices, and both are equal to the number of edges.
Simple Path:- A path which does not contain a repetition of either edges or
vertices, except possibly the end vertices; the length of the path must be at least one.
Closed Path: - A path in which initial and final vertices are same is called as a
closed path.
[Figure: a graph on vertices A-J.]
Connected Vertices:- Two vertices are said to be connected if there is a path
between them.
[Figure: a graph with vertices V1-V9.]
V1 is connected to V2
V1 is connected to V3
V1 is connected to V6
V3 is connected to V7 and so on
V3 not connected to V8
V4 not connected to V8 and so on
ADJACENCY:- Two vertices are said to be adjacent to each other if there is an edge
between them.
Ex:-
[Figure: a graph with vertices V1-V7.]
V1 is adjacent to V4 and V2.
V1 is not adjacent to V3, V5, V6, V7 and so on.
Note:- If two vertices are adjacent then they are connected; the converse need
not be true.
SUB GRAPH:- A graph H is called a sub graph of another graph G if V(H) ⊆ V(G)
and E(H) ⊆ E(G).
Ex:-
[Figure: a graph G on vertices V1-V7 and two of its sub graphs H1 and H2.]
SPANNING TREE: -
A sub graph H of a graph G is called a spanning tree if (i) H includes all the
vertices of G and (ii) H is a tree.
Ex:-
[Figure: a graph G on vertices V1-V8 and one of its spanning trees.]
REPRESENTATION OF GRAPH: -
[Figure: three example graphs. G1 is the complete undirected graph on vertices 1-4; G2 is a complete binary tree on vertices 1-7 (edges 1-2, 1-3, 2-4, 2-5, 3-6, 3-7); G3 is a directed graph with edges 1→2, 2→1 and 2→3.]
(1) Adjacency Matrix
G1:
      1 2 3 4
  1   0 1 1 1
  2   1 0 1 1
  3   1 1 0 1
  4   1 1 1 0

G2:
      1 2 3 4 5 6 7
  1   0 1 1 0 0 0 0
  2   1 0 0 1 1 0 0
  3   1 0 0 0 0 1 1
  4   0 1 0 0 0 0 0
  5   0 1 0 0 0 0 0
  6   0 0 1 0 0 0 0
  7   0 0 1 0 0 0 0

G3:
      1 2 3
  1   0 1 0
  2   1 0 1
  3   0 0 0
(2) Adjacency Lists: -
Adjacency lists are represented using nodes of type (data, link). There is one head
node per vertex; the head nodes contain only links, and the number of head nodes
is equal to the number of vertices.

G1:
1 → 2 → 3 → 4 → Null
2 → 1 → 3 → 4 → Null
3 → 1 → 2 → 4 → Null
4 → 1 → 2 → 3 → Null

G2:
1 → 2 → 3 → Null
2 → 1 → 4 → 5 → Null
3 → 1 → 6 → 7 → Null
4 → 2 → Null
5 → 2 → Null
6 → 3 → Null
7 → 3 → Null

G3:
1 → 2 → Null
2 → 1 → 3 → Null
3 → Null
(3) Sequential (or array) representation:- (only for undirected graphs) To represent
a graph using an array we need an array of size n + 2e + 1, where n = number of
vertices and e = number of edges. The first n+1 positions hold the starting position
of each vertex's list of adjacent vertices (the (n+1)-th entry marks the end); the
remaining 2e positions hold the lists themselves.

G1: 4 + 2(6) + 1 = 17
index:    1  2  3  4  5 |  6  7  8 |  9 10 11 | 12 13 14 | 15 16 17
content:  6  9 12 15 18 |  2  3  4 |  1  3  4 |  1  2  4 |  1  2  3

G2: 7 + 2(6) + 1 = 20
index:    1  2  3  4  5  6  7  8 |  9 10 | 11 12 13 | 14 15 16 | 17 | 18 | 19 | 20
content:  9 11 14 17 18 19 20 21 |  2  3 |  1  4  5 |  1  6  7 |  2 |  2 |  3 |  3
(4) Adjacency Multilists:- Each edge is represented by exactly one node of the form
(vertex1, vertex2, link1, link2), where link1 continues the edge list of vertex1 and
link2 continues the edge list of vertex2. The number of head nodes is equal to the
number of vertices, and the number of edge nodes is equal to the number of edges.

G1:
N1: (1, 2, N2, N4)
N2: (1, 3, N3, N4)
N3: (1, 4, 0, N5)
N4: (2, 3, N5, N6)
N5: (2, 4, 0, N6)
N6: (3, 4, 0, 0)
The vertex lists are:
1. N1 → N2 → N3
2. N1 → N4 → N5
3. N2 → N4 → N6
4. N3 → N5 → N6

G2:
N1: (1, 2, N2, N3)
N2: (1, 3, 0, N5)
N3: (2, 4, N4, 0)
N4: (2, 5, 0, 0)
N5: (3, 6, N6, 0)
N6: (3, 7, 0, 0)
The vertex lists are:
1. N1 → N2
2. N1 → N3 → N4
3. N2 → N5 → N6
4. N3
5. N4
6. N5
7. N6
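As a small illustration of the adjacency list representation, the following C sketch builds the lists for G3 (new edges are linked at the front of each list, so each list prints in reverse order of insertion):

#include <stdio.h>
#include <stdlib.h>

#define V 3    /* number of vertices in G3 */

typedef struct Edge {
    int to;
    struct Edge *next;
} Edge;

static Edge *head[V + 1];    /* head nodes, one per vertex (1-based) */

/* Add a directed edge u -> v at the front of u's list. */
void add_edge(int u, int v) {
    Edge *e = malloc(sizeof(Edge));
    e->to = v;
    e->next = head[u];
    head[u] = e;
}

int main(void) {
    /* G3 from the notes: 1 -> 2, 2 -> 1, 2 -> 3 */
    add_edge(1, 2);
    add_edge(2, 1);
    add_edge(2, 3);
    for (int u = 1; u <= V; u++) {
        printf("%d", u);
        for (Edge *e = head[u]; e != NULL; e = e->next)
            printf(" -> %d", e->to);
        printf(" -> Null\n");
    }
    return 0;
}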
Contrarily, suppose we have a machine which provides a constant service time for
each user, whereas every user is willing to pay a different amount; then the queue
of users is maintained with the maximum amount paid as the priority.
If the parent is ≥ its children at each node, the tree is called a max heap; if the
parent is ≤ its children at each node, it is called a min heap.
Ex:- {40, 80, 35, 90, 45, 50, 70}
[Figures (i)-(viii): the elements are inserted one after another, and after each insertion the new element is interchanged with its parent until the max-heap property is restored. The final heap has 90 at the root, 80 and 70 as its children, and 40, 45, 35, 50 at the next level.]

Trace of inserting the item 90 into the heap a = (80, 45, 70, 40, 35, 50); 90 is
placed at position 7 and moved up:

a → 1: 80  2: 45  3: 70  4: 40  5: 35  6: 50  7: 90

i = 7, item = 90
i > 1 and a[3] < 90 → true; a[7] := a[3]; i := 3
a → 80 45 70 40 35 50 70
i > 1 and a[1] < 90 → true; a[3] := a[1]; i := 1
a → 80 45 80 40 35 50 70
i > 1 → false; a[1] := item
a → 90 45 80 40 35 50 70
Analysis:-
The best case occurs when the new value to be inserted is less than its parent.
Then the number of comparisons is only one.
Sample:- Insert 60 into a = (80, 45, 70, 40, 35, 50):
[Figure: the heap with root 80, children 45 and 70, and leaves 40, 35, 50; 60 is placed at position 7 as a child of 70.]
Since a[3] = 70 > 60, only one comparison is made and 60 stays at position 7:
a → 1  2  3  4  5  6  7
    80 45 70 40 35 50 60
In the worst case the number of comparisons is proportional to the height of the
tree, i.e. the insertion of a new element takes O(log₂ n) comparisons.

Insert 100 into a, where a is:
1  2  3  4  5  6  7
80 45 70 40 35 50 60
[Figure: 100 is placed at position 8 and moved up past 40, 45 and 80 to the root.]
Number of comparisons = 3 = log₂ 8.
Ex:- a → 1   2   3   4   5   6   7
         100 119 118 171 112 151 132

Heapify(a, 7)

i = 3: Adjust(a, 3, 7)
    j = 6, item = 118
    (6 < 7) and (a[6] < a[7]) → false, so j stays 6
    item < a[6] → a[3] := a[6]; j := 12
    12 ≤ 7 → false; out of while; a[6] := item
    a → 100 119 151 171 112 118 132

Heapify decrements i by 1.
i = 2: Adjust(a, 2, 7)
    j = 4, item = 119
    (4 < 7) and (a[4] < a[5]) → false, so j stays 4
    item < a[4] → a[2] := a[4]; j := 8
    8 ≤ 7 → false; a[4] := item
    a → 100 171 151 119 112 118 132

i = 1: Adjust(a, 1, 7)
    j = 2, item = 100
    (2 < 7) and (a[2] < a[3]) → false, so j stays 2
    item < a[2] → a[1] := a[2]; j := 4
    (4 ≤ 7) → true; (4 < 7) and (a[4] < a[5]) → false, so j stays 4
    item < a[4] → a[2] := a[4]; j := 8
    8 ≤ 7 → false; a[4] := item
    a → 171 119 151 100 112 118 132

[Figures: the binary trees corresponding to each stage of the trace.]
The Adjust algorithm converts the complete binary trees with roots 2i and 2i+1 into
a heap rooted at i by combining them with node i. The algorithm first points j at the
left child of the node being adjusted (j = 2i), compares it with the right child, and
leaves j pointing at the larger of the two children; that child is compared with the
parent, and if the parent is smaller they are swapped (logically). This process is
continued until the entire sub tree is converted into a heap.
The worst case time for the Heapify algorithm is O(n). Heapify is more efficient for
constructing a heap than repeated Insert: to insert one element, Insert takes
O(log n) comparisons in the worst case, so inserting all n elements takes O(n log n)
comparisons, whereas Heapify requires at most O(n) element comparisons. The
worst case time for Adjust is proportional to the height of the tree, i.e. in the worst
case Adjust takes O(log₂ n) element comparisons.
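A C rendering of Adjust and Heapify (1-based indexing is kept by leaving a[0] unused; the driver array is the example traced above):

#include <stdio.h>

/* Adjust: combine the heaps rooted at 2i and 2i+1 with node i
   so that the sub tree rooted at i becomes a max heap (1-based array). */
void adjust(int a[], int i, int n) {
    int item = a[i];
    int j = 2 * i;                      /* left child of i */
    while (j <= n) {
        if (j < n && a[j] < a[j + 1])
            j = j + 1;                  /* j points to the larger child */
        if (item >= a[j])
            break;                      /* parent already the largest */
        a[j / 2] = a[j];                /* move the child up */
        j = 2 * j;
    }
    a[j / 2] = item;
}

/* Heapify: turn a[1..n] into a max heap, bottom-up, in O(n). */
void heapify(int a[], int n) {
    for (int i = n / 2; i >= 1; i--)
        adjust(a, i, n);
}

int main(void) {
    /* a[0] is unused so that indices match the notes (1..7). */
    int a[] = {0, 100, 119, 118, 171, 112, 151, 132};
    heapify(a, 7);
    for (int i = 1; i <= 7; i++)
        printf("%d ", a[i]);            /* 171 119 151 100 112 118 132 */
    printf("\n");
    return 0;
}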
Q) Write and explain the algorithm for creation of a heap and find its time
complexity in the worst case.
Q) Write and explain the algorithm for heap sort.
Q) Write an algorithm for heap sort.
Q) Describe the heapsort algorithm.
Q) Give the heap sort algorithm and trace it for an example sorting of ten items.
Q) Given n elements stored in an array, it is required to sort them in
non-decreasing order. Write the heapsort algorithm and illustrate it with the
data {20,30,5,10,25,40,8}.
Q) Develop an algorithm for creating a heap and hence explain heapsort
with an example.
Ans:-
Algorithm HeapSort(a, n)
// a[1:n] contains n elements to be sorted; HeapSort rearranges them in place into non-decreasing order.
{
    Heapify(a, n); // convert a into a max heap
    for i := n to 2 step -1 do
    { t := a[i]; a[i] := a[1]; a[1] := t; Adjust(a, 1, i-1); }
}
Ex:- a → 1   2   3   4   5   6   7
         100 119 118 171 112 151 132

Heapify(a, 7) converts a into the max heap:
a → 171 119 151 100 112 118 132

i = 7: a[7] and a[1] are exchanged; Adjust(a, 1, 6)
a → 151 119 132 100 112 118 | 171

i = 6: a[6] = 151 and a[1] = 118 after the exchange; Adjust(a, 1, 5)
a → 132 119 118 100 112 | 151 171

i = 5: t = 112; a[5] := 132; a[1] := 112; Adjust(a, 1, 4)
a → 119 112 118 100 | 132 151 171

i = 4: t = 100; a[4] := 119; a[1] := 100; Adjust(a, 1, 3)
a → 118 112 100 | 119 132 151 171

i = 3: t = 100; a[3] := 118; a[1] := 100; Adjust(a, 1, 2)
a → 112 100 | 118 119 132 151 171

i = 2: t = 100; a[2] := 112; a[1] := 100; Adjust(a, 1, 1)
a → 100 112 118 119 132 151 171

The array is now sorted in non-decreasing order (the portion after the bar is the
sorted tail at each stage).
[Figures: the heap after each exchange-and-adjust step.]
The heap sort algorithm uses Heapify to convert the given array of elements into a
max heap, and it then repeatedly readjusts the array so that its elements end up in
sorted order.
Analysis:- The worst case time for Heapify is O(n), and each invocation of Adjust
requires O(log n) comparisons in the worst case. The worst case time for heap sort
is therefore O(n log n).
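A self-contained C sketch of heap sort, applied to the data set {20,30,5,10,25,40,8} from the question above (a[0] is unused to keep 1-based indices):

#include <stdio.h>

/* Sift-down as in the Adjust algorithm (1-based indices). */
void adjust(int a[], int i, int n) {
    int item = a[i], j = 2 * i;
    while (j <= n) {
        if (j < n && a[j] < a[j + 1]) j++;
        if (item >= a[j]) break;
        a[j / 2] = a[j];
        j *= 2;
    }
    a[j / 2] = item;
}

/* Heap sort: build a max heap, then repeatedly move the maximum
   a[1] to the end and re-adjust the shrunken heap. O(n log n). */
void heap_sort(int a[], int n) {
    for (int i = n / 2; i >= 1; i--)    /* Heapify(a, n) */
        adjust(a, i, n);
    for (int i = n; i >= 2; i--) {
        int t = a[i]; a[i] = a[1]; a[1] = t;
        adjust(a, 1, i - 1);
    }
}

int main(void) {
    int a[] = {0, 20, 30, 5, 10, 25, 40, 8};    /* a[0] unused */
    heap_sort(a, 7);
    for (int i = 1; i <= 7; i++)
        printf("%d ", a[i]);            /* 5 8 10 20 25 30 40 */
    printf("\n");
    return 0;
}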
The above algorithm has two execution parts.
Case-1:- When n = 0 or 1, lines 2 and 3 are executed; each requires one step, so
the number of steps is 2.
Case-2:- When n > 1: line 2 takes one step. Line 6 has 2 statements per execution
and is executed once, so its number of steps is 2. The for loop at line 7 is executed
n-1 times (i = 2 to n) plus once more when the condition becomes false, so the
number of steps at line 7 is n. Statement 9 is executed n-1 times, requiring n-1
steps. Line 10 is executed n-1 times with 2 statements per execution, requiring
2(n-1) steps. Line 12 requires 1 step.
→ Algorithm Fibonacci
Analysis:-
Line 2 – 2
Line 3 – 2
Line 4 – n-1
Line 6 – n-2
Line 7 – n-2
Line 8 – 2(n-2)
________
Total: 5n - 5
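The Fibonacci algorithm whose line-by-line counts appear above is not reproduced in these notes; a minimal iterative C version consistent with a linear step count might be:

#include <stdio.h>

/* Iterative Fibonacci. Apart from the loop, every line contributes a
   constant number of steps; the loop body runs n-1 times, giving a
   step count that is linear in n. */
long fib(int n) {
    if (n <= 1) return n;           /* the n = 0 or 1 case */
    long fnm2 = 0, fnm1 = 1, fn = 0;
    for (int i = 2; i <= n; i++) {  /* executed n-1 times */
        fn = fnm1 + fnm2;
        fnm2 = fnm1;
        fnm1 = fn;
    }
    return fn;
}

int main(void) {
    printf("fib(10) = %ld\n", fib(10));   /* 55 */
    return 0;
}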
Q) Define Hashing and explain several hashing techniques?
Ans:-
HASHING: -
Hashing uses a hash function to map keys into positions in a table called a hash
table. The ideal hash table data structure is an array of some fixed size containing
the keys. When element e has the key k and f is the hash function, then e is stored
in position f(k) of the table. To search for an element with key k we compute f(k)
and see if there is an element at position f(k) of the table. If so, the element is
found; otherwise the table does not contain an element with the given key.
Each key is mapped into some number in the range 0 to TableSize-1 and is placed
in the corresponding cell. The mapping is called a hash function, which ideally
should be simple to compute and should ensure that any two distinct keys get
different cells. Since there are a finite number of cells and a virtually inexhaustible
supply of keys, this is clearly impossible; thus a hash function is needed which
distributes the keys evenly among the cells.
When the key range is too large, we use a hash table whose size is smaller than the
range, together with a hash function that may map several different keys into the
same position of the hash table.
If hashing is done using the division method, then the hash function is of the form
f(k) = k % D, where k is the key and D is the size of the hash table. The positions in
the hash table are indexed from 0 to D-1. Each position is called a bucket, and f(k)
is called the home bucket for the element with key value k. Under favorable
circumstances the home bucket is the location of the element with key value k.
Ex:-
ht → bucket:  0  1  2  3  4  5  6  7  8  9  10
     content: -  -  -  80 -  -  -  40 -  -  65
The above example shows a hash table ht with eleven buckets numbered 0-10. The
divisor D here is 11. 80 is in position 3 because 80 % 11 = 3; similarly, 40 % 11 = 7
and 65 % 11 = 10. Each element is in its home bucket. The remaining buckets in
the hash table are empty.
If we wish to enter 58 into the table, then its home bucket is f(58) = 58 % 11 = 3.
As bucket 3 is already occupied by 80, we say a collision has occurred. In general a
bucket may contain space for more than one element; if so, a collision may not
create any difficulty. An overflow occurs when there is no room in the home bucket
for the new element. Since in the above example each bucket has space for only
one element, collisions and overflows occur at the same time. Hence, to insert 58
we search the table for the next available bucket and place 58 there. This method
of handling overflows is called linear open addressing.
ht → bucket:  0  1  2  3  4  5  6  7  8  9  10
     content: -  -  -  80 58 -  -  40 -  -  65
The search for an element begins at the home bucket f(k). If the element is not
found there, we search successive buckets circularly until any of the following
situations is encountered:
(i) A bucket containing an element with key k is reached, in which case the
element we are looking for has been found.
(ii) An empty bucket is reached.
(iii) We return back to the home bucket.
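A hedged C sketch of linear open addressing with D = 11, reproducing the example above (the EMPTY marker and the function names are assumptions):

#include <stdio.h>

#define D 11                /* divisor / table size, as in the example */
#define EMPTY -1

static int ht[D];

/* Insert key k using linear open addressing: start at the home
   bucket k % D and probe successive buckets circularly. */
int ht_insert(int k) {
    int home = k % D, b = home;
    do {
        if (ht[b] == EMPTY) { ht[b] = k; return b; }
        b = (b + 1) % D;    /* next bucket, wrapping around */
    } while (b != home);
    return -1;              /* table full */
}

int main(void) {
    for (int i = 0; i < D; i++) ht[i] = EMPTY;
    ht_insert(80);          /* home bucket 3  */
    ht_insert(40);          /* home bucket 7  */
    ht_insert(65);          /* home bucket 10 */
    printf("58 placed in bucket %d\n", ht_insert(58));  /* 3 is taken -> 4 */
    return 0;
}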
SEPARATE CHAINING: -
0 → 0
1 → 1 → 81
2 →
3 →
4 → 4 → 64
5 → 25
6 → 16 → 36
7 →
8 →
9 → 9 → 49
In the above hash table the keys are the first ten perfect squares and the hash
function is hash(x) = x mod 10.
Whenever a collision occurs, the key is placed in a new node and inserted into the
list of nodes which collide at that home bucket, i.e. a linked list is maintained for
each bucket, containing the nodes whose keys hash to the corresponding home
bucket.
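A minimal C sketch of separate chaining that builds the same table (each new key is linked at the head of its bucket, so chains print most-recent first):

#include <stdio.h>
#include <stdlib.h>

#define TS 10   /* table size; hash(x) = x mod 10 as in the figure */

typedef struct Cell {
    int key;
    struct Cell *next;
} Cell;

static Cell *table[TS];

/* On collision the key goes into a new node linked into the
   list hanging off its home bucket. */
void chain_insert(int key) {
    Cell *c = malloc(sizeof(Cell));
    c->key = key;
    c->next = table[key % TS];
    table[key % TS] = c;
}

int main(void) {
    int squares[] = {0, 1, 4, 9, 16, 25, 36, 49, 64, 81};
    for (int i = 0; i < 10; i++)
        chain_insert(squares[i]);
    for (int b = 0; b < TS; b++) {
        printf("%d:", b);
        for (Cell *c = table[b]; c; c = c->next)
            printf(" %d", c->key);
        printf("\n");
    }
    return 0;
}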
1) The Division Method:- In mapping keys to addresses, the division method
preserves, to a certain extent, the uniformity that exists in the key set: keys which
are close to each other or clustered are mapped to distinct addresses. In general it
is uncommon for a large number of keys to yield the same remainder when M is a
large prime number.
2) The Mid-square Method:- In this method a key is multiplied by itself, and the
address is obtained by selecting an appropriate number of bits or digits from the
middle of the square. Usually the number of bits or digits chosen depends on the
table size and, consequently, must fit into one computer word of memory. The
same positions in the square must be used for all keys.
Ex:- For the key 356, 356 × 356 = 126736; selecting the middle two digits of the
square gives the address 67.
4) Digit Analysis Method: - A hashing function referred as digit analysis forms
addresses by selecting and shifting digits or bits of the original key.
(i) Direct Recursion:- A function calls itself directly. This is called direct recursion.
(ii) Indirect Recursion:- A function A calls another function B, and the function B in
turn calls A recursively. This is called indirect recursion.
Any algorithm written using assignment, if-then-else, and an iterative loop (for,
while) can be rewritten using assignment, if-then-else and recursion.
Let P denote a (directly) recursive function in which the actual parameters called
by address are the same in every call to P. We can translate P into a non-recursive
function by inserting statement labels and goto statements, as follows.
STEP-1:- Declare a stack to hold local variables, parameters called by value, and
flags which indicate from where P was called (in case P is called from more than
one place). As the first executable statement of P, initialize the stack to be empty
by setting its counter to zero. The stack and the counter must be treated as global
variables.
STEP-2:- To enable each recursive call to start at the beginning of the original
function P, the first executable statement of the original P should have a label
attached to it.
The following steps are to be performed at each place inside ‘P’ where
‘P’ calls itself.
STEP-3:- Make a new statement label Li (if this is the i-th place where P calls itself
recursively) and attach the label to the first statement after the call to P.
STEP-4:- Push the integer i onto the stack. (This will convey, on return, that P was
called from the i-th place.)
STEP-5:- Push all the local variables and the parameters called by value onto the
stack.
STEP-6:- Set the dummy parameters called by value to the values given in the new
call to P.
STEP-7:- Replace the call to P with a goto to the statement label at the start of P.
At the end of P (or wherever P returns to its calling program), the following steps
should be done.
STEP-8:- If the stack is empty, then the recursion has finished; make a normal
return.
STEP-9:- Otherwise, pop the stack to restore the values of all local variables and
parameters called by value.
STEP-10:- Pop an integer i from the stack and use it to go to the statement
labeled Li.
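As an illustration of these steps, the following hedged C sketch removes the recursion from a simple factorial function using an explicit stack and goto labels (since the function calls itself from only one place, no place-flag needs to be pushed):

#include <stdio.h>

#define MAXDEPTH 100

/* The recursive fact(n) = n * fact(n-1) is simulated with an explicit
   stack of the value parameter n, with goto labels standing in for the
   recursive call and the return points. */
long fact(int n) {
    int stack[MAXDEPTH], top = -1;   /* Step 1: stack for parameters  */
    long result;
L1:                                   /* Step 2: label at the start    */
    if (n <= 1) { result = 1; goto ret; }
    stack[++top] = n;                /* Step 5: save the state        */
    n = n - 1;                       /* Step 6: set the new parameter */
    goto L1;                         /* Step 7: goto instead of call  */
ret:                                  /* return point                  */
    if (top >= 0) {                  /* Steps 8-10: unwind the stack  */
        n = stack[top--];
        result = result * n;
        goto ret;
    }
    return result;
}

int main(void) {
    printf("5! = %ld\n", fact(5));   /* 120 */
    return 0;
}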
Q) Explain how you convert a recursive procedure to an equivalent non-recursive
procedure.
RECURSIVE FUNCTION: -
A recursive function is a function that calls itself, either directly or indirectly, until
a base condition is reached.
ITERATIVE FUNCTION: -
An iterative function repeats the execution of the statements written in its body
until a specific condition is reached.