Data Structure-1
1. In a linear data structure, data elements are arranged in a linear order, where each element is attached to its previous and next adjacent elements. In a non-linear data structure, data elements are attached in a hierarchical manner.
2. In a linear data structure, a single level is involved. In a non-linear data structure, multiple levels are involved.
3. Examples of linear data structures are arrays, stacks, queues, linked lists, etc. Examples of non-linear data structures are trees and graphs.
4. Applications of linear data structures are mainly in application software development. Applications of non-linear data structures are in Artificial Intelligence and image processing.
A priority queue is a specialised type of abstract data structure that serves elements based on their
associated priorities. In contrast to a standard queue, where elements are served in the order they are
added, a priority queue prioritises elements with higher priority values, ensuring that they are served
before elements with lower priority values.
Consider a scenario where a hospital emergency room uses a priority queue to manage incoming
patients. Each patient is assigned a priority level based on the urgency of their condition. Higher
priority patients, such as those with critical injuries or life-threatening conditions, are served before
patients with lower priority levels, such as those with minor injuries or non-urgent medical needs.
A threaded binary tree is a type of binary tree data structure where the empty left and right child
pointers in a binary tree are replaced with threads that link nodes directly to their in-order predecessor
or successor, thereby providing a way to traverse the tree without using recursion or a stack.
Threaded binary trees can be useful when space is a concern, as they can eliminate the need for a
stack during traversal. However, they can be more complex to implement than standard binary trees.
There are two types of threaded binary trees.
Single Threaded: a NULL right pointer is made to point to the in-order successor (if a successor exists).
Double Threaded: both NULL left and right pointers are made to point to the in-order predecessor and in-order successor respectively. The predecessor threads are useful for reverse in-order traversal and for post-order traversal.
The threads are also useful for fast accessing ancestors of a node.
In a diagram of a single threaded binary tree, the threads are usually drawn as dotted lines linking each node to its in-order successor (the original example diagram is not reproduced here).
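As an illustration, here is a minimal C sketch (not from the original notes; the field name rightThread and the sample tree are assumptions) of how a single threaded binary tree can be declared and traversed in order without recursion or a stack:

#include <stdio.h>

/* Node of a single threaded binary tree: if rightThread is 1, right points
   to the in-order successor instead of a right child. */
struct TNode {
    int data;
    struct TNode *left, *right;
    int rightThread;
};

/* Leftmost node in the subtree rooted at n. */
struct TNode *leftmost(struct TNode *n) {
    while (n != NULL && n->left != NULL)
        n = n->left;
    return n;
}

/* In-order traversal using threads: no recursion, no stack. */
void inorderThreaded(struct TNode *root) {
    struct TNode *cur = leftmost(root);
    while (cur != NULL) {
        printf("%d ", cur->data);
        if (cur->rightThread)          /* follow the thread to the successor */
            cur = cur->right;
        else                           /* otherwise go to the leftmost node of the right subtree */
            cur = leftmost(cur->right);
    }
}

int main(void) {
    /* Tree:   2      with 1's right pointer threaded to its successor 2.
              / \
             1   3                                                        */
    struct TNode n1 = {1, NULL, NULL, 0}, n2 = {2, NULL, NULL, 0}, n3 = {3, NULL, NULL, 0};
    n2.left = &n1; n2.right = &n3;
    n1.rightThread = 1; n1.right = &n2;   /* thread: successor of 1 is 2 */
    inorderThreaded(&n2);                 /* prints: 1 2 3 */
    printf("\n");
    return 0;
}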
Huffman coding is a lossless data compression algorithm that assigns variable-length codes to characters based on their frequency of appearance. The more frequent a character is, the shorter its code will be. This results in a compressed representation that takes fewer bits than the original data.
An example Huffman tree over the characters {a, h, m, r, s, t} (tree diagram not reproduced here) yields the following codes:
● m: 0101
● a: 00
● h: 101
● r: 110
● s: 111
● t: 001
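As a small illustration of how such a code table is used, the following sketch encodes a sample word with the codes listed above (the word "smart" and the table layout are assumptions for this example):

#include <stdio.h>

/* Code table taken from the list above. */
struct Code { char ch; const char *bits; };

static const struct Code table[] = {
    {'m', "0101"}, {'a', "00"}, {'h', "101"},
    {'r', "110"},  {'s', "111"}, {'t', "001"}
};

/* Look up the Huffman code for one character (NULL if absent). */
static const char *codeFor(char c) {
    int n = (int)(sizeof(table) / sizeof(table[0]));
    for (int i = 0; i < n; i++)
        if (table[i].ch == c)
            return table[i].bits;
    return NULL;
}

int main(void) {
    const char *word = "smart";     /* assumed sample input */
    for (const char *p = word; *p; p++)
        printf("%s", codeFor(*p));  /* prints 111 0101 00 110 001 concatenated */
    printf("\n");
    return 0;
}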
(5) Define binary search tree. Explain different cases for deletion of a node in a binary search tree. Write a function for each case.
Binary Search Tree is a node-based binary tree data structure which has the following properties:
● The left subtree of a node contains only nodes with keys lesser than the node’s key.
● The right subtree of a node contains only nodes with keys greater than the node’s key.
● The left and right subtree each must also be a binary search tree.
In a binary search tree (BST), deleting a node involves three main cases, based on the number of children the node has:
A leaf node is a node that has no children. To delete a leaf node, we simply remove the node from the tree. This is done by setting the parent's pointer to the leaf node to NULL and freeing the node.
#include <stdlib.h>

struct Node {
    int data;
    struct Node *left;
    struct Node *right;
};

/* Case 1: delete a leaf node, given its parent. */
void deleteLeafNode(struct Node *parent, struct Node *leafNode) {
    if (parent->left == leafNode) {
        parent->left = NULL;
    } else {
        parent->right = NULL;
    }
    free(leafNode);
}
A node with one child is a node that has either a left child or a right child, but not both. To delete a
node with one child, we simply replace the node with its child. This can be done by setting the parent
of the node to point to the node's child.
/* Case 2: delete a node that has exactly one child, given its parent. */
void deleteNodeWithOneChild(struct Node *parent, struct Node *node) {
    struct Node *child = (node->left != NULL) ? node->left : node->right;
    if (parent->left == node) {
        parent->left = child;
    } else {
        parent->right = child;
    }
    free(node);
}
A node with two children has both a left child and a right child. To delete such a node, we find its inorder successor, copy the successor's value into the node, and then delete the successor. The inorder successor of a node is the smallest node in its right subtree, i.e. the smallest node greater than the node. The following fragment finds it:
while (currentNode->left != NULL)
    currentNode = currentNode->left;   /* leftmost node of the right subtree */
return currentNode;
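For completeness, here is a minimal sketch of a delete routine that combines all three cases, reusing the struct Node and the header from the snippets above (names such as deleteNode and minValueNode are illustrative):

/* Smallest node in a subtree: keep going left. */
struct Node *minValueNode(struct Node *node) {
    while (node->left != NULL)
        node = node->left;
    return node;
}

/* Delete 'key' from the BST rooted at 'root' and return the new root. */
struct Node *deleteNode(struct Node *root, int key) {
    if (root == NULL)
        return NULL;
    if (key < root->data) {
        root->left = deleteNode(root->left, key);
    } else if (key > root->data) {
        root->right = deleteNode(root->right, key);
    } else {
        /* Cases 1 and 2: zero children or one child */
        if (root->left == NULL) {
            struct Node *child = root->right;
            free(root);
            return child;
        } else if (root->right == NULL) {
            struct Node *child = root->left;
            free(root);
            return child;
        }
        /* Case 3: two children - copy the inorder successor, then delete it */
        struct Node *succ = minValueNode(root->right);
        root->data = succ->data;
        root->right = deleteNode(root->right, succ->data);
    }
    return root;
}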
DFS (Depth-First Search) is an algorithm for traversing or searching tree or graph data structures. It starts at the tree root (or an arbitrary node of a graph) and explores as far as possible along each branch before backtracking to explore the remaining branches.
Example tree (root labelled A):
        A
       / \
      B   C
     / \   \
    D   E   F
DFS maintains a stack to keep track of the nodes that have been visited but not yet explored. The
algorithm starts by pushing the root node onto the stack. Then, it repeatedly pops a node off the
stack, explores its unvisited neighbours, and pushes them onto the stack. This process continues until
the stack is empty.
Advantages of DFS:
● DFS needs less memory than BFS, since it only stores the nodes on the current path (the stack) rather than an entire level of the graph.
● DFS can reach a target node without exploring the whole graph, and it naturally supports applications such as topological sorting and cycle detection.
Disadvantages of DFS:
● DFS can be incomplete on infinite or extremely deep graphs, meaning it may keep following one branch and never reach some nodes.
● DFS can be space-inefficient for very deep graphs, since the stack grows with the depth of the traversal, and it does not guarantee the shortest path.
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *left;
    struct Node *right;
};

/* Recursive depth-first (preorder) traversal of a binary tree. */
void dfs(struct Node *node) {
    if (node == NULL) {
        return;
    }
    printf("%d\n", node->data);
    dfs(node->left);
    dfs(node->right);
}

int main() {
    struct Node *root = malloc(sizeof(struct Node));
    struct Node *leftChild = malloc(sizeof(struct Node));
    struct Node *rightChild = malloc(sizeof(struct Node));

    root->data = 10;
    leftChild->data = 20;
    rightChild->data = 30;
    leftChild->left = leftChild->right = NULL;
    rightChild->left = rightChild->right = NULL;
    root->left = leftChild;
    root->right = rightChild;

    dfs(root);

    free(leftChild);
    free(rightChild);
    free(root);
    return 0;
}
Linked lists can be effectively used to represent and manipulate polynomials. Each node in the linked list represents one term of the polynomial, storing its coefficient and exponent in the node's data fields, together with a pointer to the node for the next term. This representation facilitates efficient polynomial addition, since the coefficients of terms with matching exponents can be summed directly while walking the two lists; a sketch is given after the disadvantages below.
Disadvantages:
1. Not as efficient for dense polynomials, where most terms are non-zero.
2. Requires more memory than other representations, such as arrays.
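A minimal sketch of this representation is shown below, assuming each node stores a coefficient, an exponent, and a next pointer, and that both polynomials are kept in decreasing order of exponent:

#include <stdio.h>
#include <stdlib.h>

/* One term of a polynomial: coeff * x^exp */
struct Term {
    int coeff, exp;
    struct Term *next;
};

/* Append a new term to the result list and return the new tail. */
struct Term *addTerm(struct Term **head, struct Term *tail, int coeff, int exp) {
    struct Term *t = malloc(sizeof(struct Term));
    t->coeff = coeff; t->exp = exp; t->next = NULL;
    if (*head == NULL) *head = t; else tail->next = t;
    return t;
}

/* Add two polynomials given in decreasing order of exponent. */
struct Term *polyAdd(struct Term *a, struct Term *b) {
    struct Term *head = NULL, *tail = NULL;
    while (a != NULL && b != NULL) {
        if (a->exp > b->exp)      { tail = addTerm(&head, tail, a->coeff, a->exp); a = a->next; }
        else if (b->exp > a->exp) { tail = addTerm(&head, tail, b->coeff, b->exp); b = b->next; }
        else {                      /* same exponent: sum the coefficients */
            int c = a->coeff + b->coeff;
            if (c != 0) tail = addTerm(&head, tail, c, a->exp);
            a = a->next; b = b->next;
        }
    }
    for (; a != NULL; a = a->next) tail = addTerm(&head, tail, a->coeff, a->exp);
    for (; b != NULL; b = b->next) tail = addTerm(&head, tail, b->coeff, b->exp);
    return head;
}

int main(void) {
    /* 5x^2 + 4x^1 + 2x^0  and  -5x^1 - 5x^0 (the example used later in these notes) */
    struct Term a0 = {2, 0, NULL}, a1 = {4, 1, &a0}, a2 = {5, 2, &a1};
    struct Term b0 = {-5, 0, NULL}, b1 = {-5, 1, &b0};
    for (struct Term *t = polyAdd(&a2, &b1); t != NULL; t = t->next)
        printf("%+dx^%d ", t->coeff, t->exp);   /* prints: +5x^2 -1x^1 -3x^0 */
    printf("\n");
    return 0;
}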
Collision handling techniques are employed to resolve conflicts that arise when multiple keys are
mapped to the same hash table location. These techniques can be broadly categorised into two types:
Open addressing (all keys stored within the table itself):
1. Linear probing: probe the next available slot in the hash table.
2. Quadratic probing: probe the hash table using a quadratic function of the probe number.
3. Double hashing: use a secondary hash function to determine the probe step.
Closed addressing (separate chaining):
1. Chaining: store colliding keys in a separate data structure attached to the slot, such as a linked list.
The selection of a collision handling technique depends on the specific application and the anticipated
load factor of the hash table. Generally, open addressing techniques are more space-efficient than
closed addressing techniques, but they may lead to longer search times.
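A minimal sketch of the chaining approach (the table size 7 and the modulo hash function are illustrative assumptions): each slot of the table holds a linked list of the keys that hash to that index.

#include <stdio.h>
#include <stdlib.h>

#define TABLE_SIZE 7   /* small illustrative table size */

struct Entry { int key; struct Entry *next; };
struct Entry *table[TABLE_SIZE];          /* each slot is a chain (linked list) */

int hash(int key) { return key % TABLE_SIZE; }

/* Insert a key at the head of its chain. */
void insert(int key) {
    int idx = hash(key);
    struct Entry *e = malloc(sizeof(struct Entry));
    e->key = key;
    e->next = table[idx];
    table[idx] = e;
}

/* Search the chain for the key; return 1 if found. */
int search(int key) {
    for (struct Entry *e = table[hash(key)]; e != NULL; e = e->next)
        if (e->key == key)
            return 1;
    return 0;
}

int main(void) {
    insert(10); insert(17); insert(3);          /* 10, 17 and 3 all collide at index 3 */
    printf("%d %d\n", search(17), search(4));   /* prints: 1 0 */
    return 0;
}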
An expression tree is a tree data structure that represents a mathematical expression. Each node in the tree represents either an operator or an operand: internal nodes store operators, and their left and right children are the operator's operands, while leaf nodes store operands (constants or variables). Expression trees are a convenient way to represent and evaluate mathematical expressions.
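For example, here is a minimal sketch (with an assumed node layout) that builds the expression tree for (3 + 4) * 2 and evaluates it recursively; operator nodes apply their operator to the values of their subtrees, while leaf nodes simply return their operand:

#include <stdio.h>

/* A node is either an operator ('+', '-', '*', '/') with two children,
   or an operand leaf (op == 0) holding a value. */
struct Expr {
    char op;
    int value;
    struct Expr *left, *right;
};

int eval(struct Expr *node) {
    if (node->op == 0)                 /* leaf: just return the operand */
        return node->value;
    int l = eval(node->left), r = eval(node->right);
    switch (node->op) {
        case '+': return l + r;
        case '-': return l - r;
        case '*': return l * r;
        default:  return l / r;        /* '/' */
    }
}

int main(void) {
    /* Expression tree for (3 + 4) * 2 */
    struct Expr three = {0, 3, NULL, NULL}, four = {0, 4, NULL, NULL}, two = {0, 2, NULL, NULL};
    struct Expr plus  = {'+', 0, &three, &four};
    struct Expr times = {'*', 0, &plus, &two};
    printf("%d\n", eval(&times));      /* prints: 14 */
    return 0;
}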
Topological sorting is an algorithm for arranging a set of elements in a linear order based on their
dependencies. It is particularly useful for dealing with directed acyclic graphs (DAGs), where there are
no cycles among the nodes. Topological sorting finds applications in various areas, including
scheduling tasks and resolving dependencies in software packages.
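A minimal DFS-based topological sort sketch follows (the 4-vertex DAG and its adjacency matrix are assumptions for illustration): a vertex is recorded only after everything reachable from it has been finished, and the recorded order is then reversed.

#include <stdio.h>

#define N 4
/* adj[u][v] = 1 means an edge u -> v (u must come before v). Example DAG:
   0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3 */
int adj[N][N] = {
    {0, 1, 1, 0},
    {0, 0, 0, 1},
    {0, 0, 0, 1},
    {0, 0, 0, 0}
};
int visited[N], order[N], top = 0;

/* Visit everything reachable from u, then record u. */
void dfs(int u) {
    visited[u] = 1;
    for (int v = 0; v < N; v++)
        if (adj[u][v] && !visited[v])
            dfs(v);
    order[top++] = u;          /* u is finished after all its successors */
}

int main(void) {
    for (int u = 0; u < N; u++)
        if (!visited[u])
            dfs(u);
    /* Print in reverse finishing order: a valid topological order. */
    for (int i = top - 1; i >= 0; i--)
        printf("%d ", order[i]);       /* prints: 0 2 1 3 */
    printf("\n");
    return 0;
}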
1. Arrays:
An array is a linear collection of elements of the same data type, stored in contiguous memory
locations. Each element is accessed using an index, starting from zero.
2. Linked Lists:
A linked list is a linear collection of nodes, where each node contains data and a pointer to the next
node. This dynamic structure allows for efficient insertion and deletion of elements.
Example: A music playlist in which each song stores a link to the next song to be played.
3. Stacks:
A stack is a linear data structure that follows a particular order in which operations are performed. The order is LIFO (Last In, First Out). Entering and retrieving data is possible from only one end; these operations are called push and pop.
Example: Browsers use a stack data structure to keep track of previously visited sites.
4. Queues:
A queue is a FIFO (First In, First Out) data structure, similar to a waiting line. Elements are added to
the rear of the queue and removed from the front.
Example: A real-world example of a queue is a single-lane one-way road, where the vehicle that enters first exits first.
5. Trees:
A tree is a hierarchical data structure composed of nodes connected by edges. It consists of a root
node, which is the topmost node, and zero or more child nodes.
Root
├── Documents
│ ├── File1.txt
│ ├── File2.txt
│ └── File3.txt
└── Programs
├── Program1.exe
└── Program2.exe
6. Graphs:
A graph is a collection of nodes (vertices) connected by edges. Unlike trees, graphs can have cycles,
where a node is connected to itself or indirectly through other nodes.
Example: One of the most common real-world examples of a graph is Google Maps where cities are
located as vertices and paths connecting those vertices are located as edges of the graph.
A --- B
|    / \
|   /   \
C --- D --- E
7. Hash Tables:
A hash table maps data of any type, called keys, to specific locations in memory called buckets, using a hash function. It provides efficient insertion, deletion, and lookup operations, which makes it a good choice for large and complex data sets.
Examples: 1. Hashing is used for cache mapping, for fast access to data. 2. Hashing can be used for password verification.
1. Insertion: Adding a new element to the data structure. This operation is crucial for building
and maintaining the data structure.
2. Deletion: Removing an existing element from the data structure. This operation is necessary
to modify or update the data structure.
3. Searching: Locating a specific element within the data structure. Searching is essential for
retrieving or manipulating individual data items.
4. Traversal: Visiting each element in the data structure sequentially. Traversal is important for
processing or iterating over all data items.
5. Sorting: Arranging elements in the data structure in a specific order, such as ascending or
descending. Sorting is useful for organizing data for efficient retrieval or analysis.
6. Merging: Combining two or more data structures into a single one. Merging is helpful for
consolidating data from different sources.
7. Updating: Modifying the value of an existing element in the data structure. Updating is
essential for maintaining data consistency and accuracy.
8. Accessing: Retrieving the value of an existing element in the data structure. Accessing is
fundamental for retrieving and using data items.
9. Counting: Determining the number of elements in the data structure. Counting is important for
analyzing the size and content of the data structure.
10. Checking for emptiness: Determining whether the data structure contains any elements.
Checking for emptiness is useful for controlling program flow and preventing errors.
These operations form the foundation for various algorithms and applications that rely on data
structures. The specific operations available depend on the type of data structure, such as arrays,
linked lists, stacks, queues, trees, and graphs. Each data structure has its unique characteristics and
capabilities, making them suitable for different use cases.
A Graph is a non-linear data structure consisting of vertices and edges. The vertices are sometimes
also referred to as nodes and the edges are lines or arcs that connect any two nodes in the graph.
More formally a Graph is composed of a set of vertices( V ) and a set of edges( E ). The graph is
denoted by G(V, E).
Representations of Graph
Here are the two most common ways to represent a graph :
1. Adjacency Matrix
2. Adjacency List
Adjacency Matrix
An adjacency matrix is a way of representing a graph as a matrix of boolean (0’s and 1’s).
Let’s assume there are n vertices in the graph. So, create a 2D matrix adjMat[n][n] of dimension n x n.
● If there is an edge from vertex i to j, mark adjMat[i][j] as 1.
● If there is no edge from vertex i to j, mark adjMat[i][j] as 0.
For an undirected graph, the entire matrix is initially 0. If there is an edge from source to destination, we set both adjMat[source][destination] and adjMat[destination][source] to 1, because the edge can be traversed either way.
For a directed graph, the entire matrix is initially 0. If there is an edge from source to destination, we set only adjMat[source][destination] to 1.
Adjacency List
An array of Lists is used to store edges between two vertices. The size of array is equal to the number
of vertices (i.e, n). Each index in this array represents a specific vertex in the graph. The entry at the
index i of the array contains a linked list containing the vertices that are adjacent to vertex i.
Let’s assume there are n vertices in the graph. So, create an array of lists of size n, called adjList[n].
● adjList[0] will have all the nodes which are connected (neighbour) to vertex 0.
● adjList[1] will have all the nodes which are connected (neighbour) to vertex 1 and so on.
Consider an undirected graph with 3 vertices in which every pair of vertices is connected. An array of lists of size 3 is created, where each index represents a vertex. Vertex 0 has two neighbours (1 and 2), so vertices 1 and 2 are inserted into the list at index 0. Similarly, vertex 1 has two neighbours (0 and 2), so vertices 0 and 2 are inserted at index 1, and the neighbours of vertex 2 are inserted at index 2.
Now consider a directed graph with 3 vertices in which vertex 0 has no outgoing edges and vertex 1 has edges to 0 and 2. The array of lists is again of size 3: the list at index 0 is empty, vertices 0 and 2 are inserted at index 1, and the neighbours of vertex 2 are inserted at index 2.
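A minimal sketch that builds both representations for the small undirected graph described above (a 3-vertex triangle; array sizes and helper names are illustrative):

#include <stdio.h>
#include <stdlib.h>

#define N 3

struct AdjNode { int vertex; struct AdjNode *next; };

int adjMat[N][N];                 /* adjacency matrix, all 0 initially */
struct AdjNode *adjList[N];       /* adjacency list, all NULL initially */

/* Add an undirected edge u - v to both representations. */
void addEdge(int u, int v) {
    adjMat[u][v] = adjMat[v][u] = 1;

    struct AdjNode *a = malloc(sizeof(struct AdjNode));
    a->vertex = v; a->next = adjList[u]; adjList[u] = a;

    struct AdjNode *b = malloc(sizeof(struct AdjNode));
    b->vertex = u; b->next = adjList[v]; adjList[v] = b;
}

int main(void) {
    addEdge(0, 1); addEdge(0, 2); addEdge(1, 2);   /* triangle */

    printf("Adjacency matrix:\n");
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%d ", adjMat[i][j]);
        printf("\n");
    }

    printf("Adjacency list:\n");
    for (int i = 0; i < N; i++) {
        printf("%d:", i);
        for (struct AdjNode *p = adjList[i]; p != NULL; p = p->next)
            printf(" %d", p->vertex);
        printf("\n");
    }
    return 0;
}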
Depth First Traversal (or DFS) for a graph is similar to Depth First Traversal of a tree. The only catch
here is, that, unlike trees, graphs may contain cycles (a node may be visited twice). To avoid
processing a node more than once, use a boolean visited array. A graph can have more than one
DFS traversal.
Example:
Input: n = 4, e = 6
0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0, 2 -> 3, 3 -> 3
Output: DFS from vertex 1 : 1 2 0 3
Explanation: Starting at vertex 1, we visit 1 and then its only neighbour 2. From 2 we visit its first unvisited neighbour 0; all of 0's neighbours (1 and 2) are already visited, so we backtrack to 2 and visit 3. Hence the DFS order is 1 2 0 3.
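A minimal sketch of DFS over this example graph, using a boolean visited array so that the cycles do not cause repeated processing (the adjacency matrix encodes the edges listed above):

#include <stdio.h>

#define N 4
/* Edges from the example: 0->1, 0->2, 1->2, 2->0, 2->3, 3->3 */
int adj[N][N] = {
    {0, 1, 1, 0},
    {0, 0, 1, 0},
    {1, 0, 0, 1},
    {0, 0, 0, 1}
};
int visited[N];

void dfs(int u) {
    visited[u] = 1;
    printf("%d ", u);
    for (int v = 0; v < N; v++)
        if (adj[u][v] && !visited[v])
            dfs(v);
}

int main(void) {
    dfs(1);            /* prints: 1 2 0 3 */
    printf("\n");
    return 0;
}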
1. Dynamic Size: Linked lists can grow or shrink in size as needed, accommodating changes in
the data set without requiring reallocation or reorganization of the entire structure. This
flexibility is crucial for scenarios where the data size is unknown beforehand or fluctuates
during program execution.
2. Efficient Insertion and Deletion: Inserting or deleting elements from a linked list is relatively
straightforward and efficient. Unlike arrays, where shifting elements is necessary for
insertions and deletions, linked lists only require updating the links between nodes. This
efficiency is particularly beneficial for operations involving frequent insertions or deletions.
3. Memory Efficiency: Linked lists allocate memory for individual nodes only when needed,
rather than pre-allocating a fixed block of memory like arrays. This approach can lead to
better memory utilization, especially when dealing with sparse or unpredictable data sets.
4. Easy Implementation of Abstract Data Types: Linked lists are well-suited for implementing
abstract data types (ADTs) like stacks, queues, and graphs, due to their ability to dynamically
add and remove elements and efficiently traverse through the data structure.
5. Efficient Sorting in Certain Cases: For specific sorting algorithms, such as insertion sort or
merge sort, linked lists can outperform arrays in certain scenarios. This is because linked lists
avoid the need to shift elements during sorting, which can be time-consuming for large arrays.
1. Pivot Selection: Choose a pivot element from the list. The pivot can be any element in the list,
but it is often chosen as the median or a random element.
2. Partitioning: Rearrange the elements in the list such that all elements smaller than the pivot
are placed to its left, and all elements larger than the pivot are placed to its right. This process
is called partitioning.
3. Recursive Sorting: Recursively apply Quicksort to the sublists on either side of the pivot. The
sublist to the left of the pivot will contain all elements smaller than the pivot, and the sublist to
the right of the pivot will contain all elements larger than the pivot.
Or
QuickSort is a sorting algorithm based on the Divide and Conquer algorithm that picks an element as
a pivot and partitions the given array around the picked pivot by placing the pivot in its correct position
in the sorted array.
The key process in quickSort is partition(). The goal of partition() is to place the pivot (any element can be chosen as the pivot) at its correct position in the sorted array, with all smaller elements to the left of the pivot and all greater elements to the right of it.
Partitioning is then applied recursively to the sub-arrays on each side of the pivot, which finally sorts the whole array.
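A minimal QuickSort sketch using the Lomuto partition scheme with the last element as the pivot (one common choice among several):

#include <stdio.h>

/* Lomuto partition: place arr[high] (the pivot) into its correct position
   and return that position. */
int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {           /* smaller elements go to the left part */
            i++;
            int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
        }
    }
    int t = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = t;
    return i + 1;
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quickSort(arr, low, p - 1);     /* sort elements left of the pivot */
        quickSort(arr, p + 1, high);    /* sort elements right of the pivot */
    }
}

int main(void) {
    int arr[] = {10, 7, 8, 9, 1, 5};
    int n = sizeof(arr) / sizeof(arr[0]);
    quickSort(arr, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);          /* prints: 1 5 7 8 9 10 */
    printf("\n");
    return 0;
}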
(15) Differentiate between B tree and B+ tree.
Pointers: In a B tree, all internal and leaf nodes have data pointers. In a B+ tree, only leaf nodes have data pointers.
Search: In a B tree, not all keys are available at the leaves, so search often takes more time. In a B+ tree, all keys are at the leaf level, so search is faster and more consistent.
Insertion: In a B tree, insertion takes more time and the result is not always predictable. In a B+ tree, insertion is easier and the result is always the same.
Leaf nodes: In a B tree, leaf nodes are not linked together. In a B+ tree, leaf nodes are stored as a linked list.
Height: For a given number of nodes, a B tree has a greater height; a B+ tree has a smaller height for the same number of nodes.
In computer science, a priority queue is an abstract data type (ADT) that prioritizes elements based
on their associated priorities. Unlike regular queues, where elements are served on a first-in, first-out
(FIFO) basis, priority queues serve elements with higher priorities first. This makes them well-suited
for applications where timely processing of critical tasks is essential.
Key characteristics of a priority queue:
1. Prioritized Retrieval: Elements with higher priorities are retrieved before elements with lower priorities.
2. Dynamic Insertion and Deletion: Elements can be dynamically inserted and deleted, with their
priorities updated accordingly.
3. Multiple Priority Levels: Elements can have varying priority levels, allowing for fine-grained
prioritization.
Common implementations:
1. Heaps: Heaps are a tree-based data structure that efficiently maintains the priority order of elements.
2. Binary Search Trees (BSTs): BSTs can be modified to support priority queues by associating
priorities with nodes.
3. Unordered Arrays: While less efficient, unordered arrays can be used to implement priority
queues by periodically reordering elements based on priorities.
Applications of priority queues:
1. Task Scheduling: Priority queues are used in operating systems to schedule tasks based on their importance and urgency.
2. Dijkstra's Shortest Path Algorithm: Priority queues are used in Dijkstra's algorithm to
efficiently find the shortest path in a graph.
3. Network Packet Routing: Priority queues are used in network routers to prioritize packets
based on their urgency, ensuring timely delivery of critical data.
4. Event Simulation: Priority queues are used in event simulation to schedule events based on
their occurrence times.
5. Huffman Coding: Priority queues are used in Huffman coding to optimize data compression by
prioritizing frequent symbols.
Priority queues are versatile data structures with a wide range of applications in various domains of
computer science. Their ability to prioritize elements based on their associated priorities makes them
essential for handling time-critical tasks and resource allocation effectively.
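A minimal sketch of a priority queue built on an array-based binary max-heap (the fixed capacity and function names are illustrative): insert bubbles a new element up, and extractMax removes the highest-priority element and sifts the replacement down.

#include <stdio.h>

#define MAX 100
int heap[MAX];
int size = 0;

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Insert a value and bubble it up until the max-heap property holds. */
void insert(int value) {
    int i = size++;
    heap[i] = value;
    while (i > 0 && heap[(i - 1) / 2] < heap[i]) {
        swap(&heap[(i - 1) / 2], &heap[i]);
        i = (i - 1) / 2;
    }
}

/* Remove and return the highest-priority (largest) element. */
int extractMax(void) {
    int max = heap[0];
    heap[0] = heap[--size];
    int i = 0;
    for (;;) {                       /* sift the root back down */
        int l = 2 * i + 1, r = 2 * i + 2, largest = i;
        if (l < size && heap[l] > heap[largest]) largest = l;
        if (r < size && heap[r] > heap[largest]) largest = r;
        if (largest == i) break;
        swap(&heap[i], &heap[largest]);
        i = largest;
    }
    return max;
}

int main(void) {
    insert(3); insert(10); insert(7);
    printf("%d ", extractMax());   /* 10 */
    printf("%d ", extractMax());   /* 7  */
    printf("%d\n", extractMax());  /* 3  */
    return 0;
}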
(17) What is hashing? What properties should a good hash function
demonstrate?
Hashing is a technique that maps data of arbitrary size to a fixed-size output. This output, called a
hash value or hash code, is used to identify and locate data items efficiently. Hashing is widely used in
various applications, including data structures, databases, and cryptography.
1. Uniqueness: A good hash function should produce unique hash values for different input data
items with high probability. This property is crucial for preventing collisions, where two
different data items map to the same hash value.
2. Efficiency: A good hash function should be computationally efficient, meaning it can calculate
hash values quickly. This property is essential for maintaining performance in applications that
handle large amounts of data.
3. Deterministic: A good hash function should always produce the same hash value for the same
input data item. This property ensures consistency and predictability when retrieving data
using hash values.
4. Uniformity: A good hash function should distribute hash values evenly across the available
hash table size. This property helps minimize collisions and ensures efficient storage and
retrieval of data items.
5. Collision Resolution: A good hash function should provide a mechanism to handle collisions
effectively. This can be achieved using techniques like separate chaining or open addressing.
Hashing plays a fundamental role in various data structures, including hash tables, Bloom filters, and
cryptographic hash functions. It enables efficient data storage, retrieval, and comparison, making it a
valuable tool for managing large datasets and ensuring data integrity.
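Two small example hash functions that aim at the properties above (the prime table size 101 is an arbitrary illustrative choice): the division method for integer keys and a simple polynomial rolling hash for strings.

#include <stdio.h>

#define TABLE_SIZE 101   /* a prime table size helps spread keys uniformly */

/* Division method for integer keys. */
unsigned intHash(unsigned key) {
    return key % TABLE_SIZE;
}

/* Simple polynomial rolling hash for strings (deterministic and cheap). */
unsigned strHash(const char *s) {
    unsigned h = 0;
    while (*s)
        h = h * 31 + (unsigned char)*s++;
    return h % TABLE_SIZE;
}

int main(void) {
    printf("%u %u\n", intHash(12345), strHash("data structure"));
    return 0;
}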
1. Efficient Memory Utilization: Circular queues make better use of memory by allowing
elements to wrap around the queue instead of requiring a separate block of memory for each
element. This is particularly beneficial when dealing with fixed-size queues, as it prevents
memory wastage.
2. Efficient Insertion and Deletion: Circular queues allow for efficient insertion and deletion
operations, as the front and rear pointers move circularly within the queue. This eliminates the
need to shift elements when inserting or deleting, reducing the time complexity of these
operations.
3. Reduced Memory Allocation: Circular queues avoid dynamic memory allocation, as they use
a fixed-size array or memory block. This can be beneficial for real-time applications where
memory allocation overhead is a concern.
4. Fairness in Resource Sharing: Circular queues can be used to implement fair resource
sharing mechanisms, such as round-robin scheduling. This ensures that all elements in the
queue have an equal opportunity to be processed or accessed.
5. Simplified Implementation: Circular queues are straightforward to implement over a fixed-size array, since wrapping the front and rear indices with the modulo operator avoids having to shift elements or grow the buffer as a linear array-based queue would.
Overall, circular queues offer several advantages over linear queues in terms of memory efficiency,
insertion and deletion performance, memory allocation, fairness in resource sharing, and
implementation simplicity. These advantages make circular queues a suitable choice for applications
where memory constraints are a concern, real-time performance is crucial, fair resource sharing is
required, and implementation simplicity is desired.
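A minimal circular queue sketch over a fixed-size array (the capacity and names are illustrative): the front and rear indices wrap around with the modulo operator, so elements are never shifted.

#include <stdio.h>

#define SIZE 5
int queue[SIZE];
int front = 0, rear = 0, count = 0;   /* count distinguishes full from empty */

int enqueue(int value) {
    if (count == SIZE) return 0;      /* queue full */
    queue[rear] = value;
    rear = (rear + 1) % SIZE;         /* wrap around */
    count++;
    return 1;
}

int dequeue(int *value) {
    if (count == 0) return 0;         /* queue empty */
    *value = queue[front];
    front = (front + 1) % SIZE;       /* wrap around */
    count--;
    return 1;
}

int main(void) {
    for (int i = 1; i <= 6; i++)
        if (!enqueue(i * 10))
            printf("queue full, could not add %d\n", i * 10);
    int v;
    while (dequeue(&v))
        printf("%d ", v);             /* prints: 10 20 30 40 50 */
    printf("\n");
    return 0;
}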
(a) The key feature of a splay tree is that each time an element is accessed, it is moved to the
root of the tree, creating a more balanced structure for subsequent accesses.
(b) Splay trees are characterized by their use of rotations, which are local transformations of the
tree that change its shape but preserve the order of the elements.
(c) Rotations are used to bring the accessed element to the root of the tree, and also to
rebalance the tree if it becomes unbalanced after multiple accesses.
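A hedged sketch of the two basic rotations used when splaying (node layout assumed): rotating a child up one level preserves the in-order sequence of keys, and a full splay repeats such rotations (zig, zig-zig, zig-zag) until the accessed node reaches the root.

#include <stdio.h>

struct SNode { int key; struct SNode *left, *right; };

/* Right rotation: the left child y moves up, x moves down to the right.
       x            y
      / \          / \
     y   C  -->   A   x
    / \              / \
   A   B            B   C        (in-order sequence A y B x C is preserved) */
struct SNode *rotateRight(struct SNode *x) {
    struct SNode *y = x->left;
    x->left = y->right;
    y->right = x;
    return y;            /* y is the new subtree root */
}

/* Left rotation: the mirror image of rotateRight. */
struct SNode *rotateLeft(struct SNode *x) {
    struct SNode *y = x->right;
    x->right = y->left;
    y->left = x;
    return y;
}

int main(void) {
    struct SNode a = {1, NULL, NULL}, y = {2, &a, NULL}, x = {3, &y, NULL};
    struct SNode *root = rotateRight(&x);   /* key 2 becomes the root */
    printf("%d\n", root->key);              /* prints: 2 */
    return 0;
}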
(b) Trie.
A trie, also known as a prefix tree or digital tree, is a tree data structure used to store and retrieve
strings efficiently. It is a highly specialized data structure that is optimized for searching and pattern
matching. Tries are particularly useful for storing large amounts of text data, such as dictionaries,
lexicons, and spell checkers.
Properties of Tries:
● Tries are self-organizing, meaning that they automatically adjust their structure as new data is
inserted or deleted.
● Tries are memory-efficient, as they only store unique prefixes of strings.
● Tries have efficient search and pattern matching operations, with a time complexity of O(m),
where m is the length of the pattern.
Applications of Tries:
● Spell checkers: Tries are used to implement spell checkers by quickly suggesting correct
spellings for misspelled words.
● Autocomplete: Tries are used to implement autocomplete features in search engines and text
editors, suggesting possible completions as the user types.
● Text compression: Tries can be used to compress text data by identifying and eliminating
redundancies in strings.
● Network routing: Tries are used in network routing tables to efficiently match IP addresses to
their corresponding destinations.
● Structure: A trie consists of nodes and edges. Each node represents a character in a string,
and edges connect nodes that share a common prefix. The root node represents the empty
string.
● Insertion: Inserting a string into a trie involves traversing the tree from the root node, creating
new nodes as needed, and marking the end of the string with a special marker.
● Searching: Searching for a string in a trie involves traversing the tree from the root node,
following the path determined by the characters in the string. If the search reaches the end of
the string, the string is found.
● Pattern matching: Pattern matching involves searching for strings that match a given pattern.
The pattern can be represented as a regular expression or a more general pattern
specification.
Advantages of Tries:
● Lookup, insertion, and deletion take O(m) time, where m is the length of the key, independent of how many strings are stored.
● There are no hash collisions, and stored keys can be enumerated in sorted order or by common prefix.
Disadvantages of Tries:
● Tries can consume a large amount of memory, since every node may hold many child pointers.
● For simple exact-match lookups, a well-implemented hash table is often faster in practice.
Overall, tries are a powerful and versatile data structure that is well-suited for storing and retrieving
strings efficiently. They are particularly useful for applications that require fast search and pattern
matching capabilities.
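A minimal trie sketch for lowercase words (the alphabet size of 26 and the function names are assumptions): insert walks down from the root creating child nodes as needed, and search follows the same path and checks the end-of-word marker.

#include <stdio.h>
#include <stdlib.h>

#define ALPHABET 26

struct TrieNode {
    struct TrieNode *children[ALPHABET];
    int isEndOfWord;
};

struct TrieNode *newNode(void) {
    /* calloc zeroes the children pointers and the end-of-word flag */
    return calloc(1, sizeof(struct TrieNode));
}

void insert(struct TrieNode *root, const char *word) {
    for (; *word; word++) {
        int i = *word - 'a';
        if (root->children[i] == NULL)
            root->children[i] = newNode();
        root = root->children[i];
    }
    root->isEndOfWord = 1;
}

int search(struct TrieNode *root, const char *word) {
    for (; *word; word++) {
        int i = *word - 'a';
        if (root->children[i] == NULL)
            return 0;
        root = root->children[i];
    }
    return root->isEndOfWord;
}

int main(void) {
    struct TrieNode *root = newNode();
    insert(root, "tree");
    insert(root, "trie");
    printf("%d %d %d\n", search(root, "trie"), search(root, "tree"), search(root, "tr"));
    /* prints: 1 1 0  ("tr" is only a prefix, not a stored word) */
    return 0;
}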
Graph traversal is a systematic way of visiting all the nodes (vertices) and edges in a graph. It is a
fundamental technique in graph algorithms and is used for a variety of applications, such as finding
the shortest path between two nodes, checking if a graph is connected, and detecting cycles in a
graph.
1. Breadth-First Search (BFS): BFS explores the graph level by level, starting from a given node
and visiting all its neighbors before moving on to the next level.
Algorithm:
1. Create a queue and add the starting node to it.
2. Mark the starting node as visited.
3. While the queue is not empty: a. Remove the first node from the queue. b. Process
the node (e.g., print its value). c. For each unvisited neighbor of the node: i. Mark the
neighbor as visited. ii. Add the neighbor to the queue.
2. Depth-First Search (DFS): DFS explores the graph as deep as possible along each branch
before backtracking. It starts from a given node and follows a path until it reaches a dead end,
then backtracks to the nearest unvisited node and explores another path.
Algorithm:
1. Create a stack and push the starting node onto it.
2. Mark the starting node as visited.
3. While the stack is not empty: a. Pop the top node from the stack. b. Process the node
(e.g., print its value). c. For each unvisited neighbor of the node: i. Mark the neighbor
as visited. ii. Push the neighbor onto the stack.
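A minimal sketch of BFS following the algorithm above (the 4-vertex undirected graph and the array-based queue are assumptions for illustration):

#include <stdio.h>

#define N 4
/* Example undirected graph: 0-1, 0-2, 1-3 */
int adj[N][N] = {
    {0, 1, 1, 0},
    {1, 0, 0, 1},
    {1, 0, 0, 0},
    {0, 1, 0, 0}
};

void bfs(int start) {
    int queue[N], front = 0, rear = 0;
    int visited[N] = {0};

    queue[rear++] = start;           /* 1. add the starting node to the queue */
    visited[start] = 1;              /* 2. mark it as visited */

    while (front < rear) {           /* 3. until the queue is empty */
        int u = queue[front++];      /*    a. remove the first node */
        printf("%d ", u);            /*    b. process it */
        for (int v = 0; v < N; v++)  /*    c. enqueue unvisited neighbours */
            if (adj[u][v] && !visited[v]) {
                visited[v] = 1;
                queue[rear++] = v;
            }
    }
}

int main(void) {
    bfs(0);            /* prints: 0 1 2 3 */
    printf("\n");
    return 0;
}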
● Finding the shortest path between two nodes: BFS is typically used to find the shortest path
between two nodes in a graph because it explores the graph level by level, ensuring that the
shortest path is found first.
● Checking if a graph is connected: DFS can be used to determine whether a graph is
connected or not. If DFS can reach all nodes from any starting node, then the graph is
connected.
● Detecting cycles in a graph: DFS can be used to detect cycles in a graph. A cycle is a path
that starts and ends at the same node. During DFS, if a node is visited multiple times, it
indicates the presence of a cycle.
● Topological sorting: Topological sorting is a way of ordering the nodes in a directed graph
such that, for every directed edge (u, v), node u comes before node v in the ordering. DFS
can be used to perform topological sorting.
Graph traversal techniques are essential tools for understanding and manipulating graphs. They have
a wide range of applications in computer science, including network routing, social network analysis,
and artificial intelligence.
A deque, also known as a double-ended queue, is a linear data structure that supports insertion and
deletion from both ends. This makes it a versatile data structure that can be used to implement a
variety of applications, such as caches, buffers, and queues.
Properties of deques:
● They can behave both as first-in, first-out (FIFO) and as last-in, first-out (LIFO) data structures.
● They support insertion and deletion from both ends.
● They can be implemented using a doubly linked list or an array.
(e) B-tree
A B-tree is a self-balancing tree data structure that maintains sorted data and allows searches,
sequential access, insertions, and deletions in logarithmic time. B-trees are a type of multiway search
tree, which means that each node in the tree can have more than two children. This makes B-trees
more efficient than binary search trees, which are limited to two children per node.
Properties of B-trees
● Each node in the tree has a maximum number of keys, denoted by the order of the B-tree.
● Each node in the tree has a minimum number of keys, which is typically half of the maximum
number of keys.
● All keys in a node are sorted in ascending order.
● The root node contains at least one key (and, if it is not a leaf, at least two children), with a maximum of m keys.
● All other nodes have a minimum of ⌈m/2⌉ keys and a maximum of m keys.
● Each leaf node (a node that has no children) contains a list of data values.
Operations on B-trees
The main operations are searching, insertion, and deletion, each running in O(log n) time. Insertion may split a full node, and deletion may merge or redistribute under-full nodes, so the tree stays balanced after every operation.
Applications of B-trees
● Databases: B-trees are used to index data in databases, which allows for efficient searching
and retrieval of data.
● File systems: B-trees are used to index files in file systems, which allows for efficient
navigation and access to files.
● Networking: B-trees are used to implement routing tables in computer networks, which allows
for efficient routing of data packets.
Advantages of B-trees
● They are self-balancing, which means that they automatically maintain their balance after
insertions and deletions.
● They have a logarithmic search time, which means that the time it takes to search for a key in
a B-tree is proportional to the logarithm of the number of keys in the tree.
● They are efficient for storing large amounts of data on disk.
Disadvantages of B-trees
● They are more complex to implement than binary search trees.
● Nodes may be only partially full, which wastes some space, and keeping the tree balanced requires extra work (splits and merges) on insertion and deletion.
Example:
Input:
1st number = 5x^2 + 4x^1 + 2x^0
2nd number = -5x^1 - 5x^0
Output:
5x^2 - 1x^1 - 3x^0
Input:
1st number = 5x^3 + 4x^2 + 2x^0
2nd number = 5x^1 - 5x^0
Output:
5x^3 + 4x^2 + 5x^1 - 3x^0
(21) Explain different types of tree traversal techniques with examples. Also write a recursive function for each traversal technique.
Unlike linear data structures (Array, Linked List, Queues, Stacks, etc) which have only one logical way
to traverse them, trees can be traversed in different ways.
A Tree Data Structure can be traversed in the following ways: depth-first traversals (Inorder, Preorder, Postorder) and Level Order Traversal (breadth-first).
Inorder Traversal:
Algorithm Inorder(tree)
1. Traverse the left subtree, i.e., call Inorder(left subtree)
2. Visit the root.
3. Traverse the right subtree, i.e., call Inorder(right subtree)
In the case of binary search trees (BST), Inorder traversal gives nodes in non-decreasing order. To
get nodes of BST in non-increasing order, a variation of Inorder traversal where Inorder traversal is
reversed can be used.
Time Complexity: O(N)
Auxiliary Space: If we don’t consider the size of the stack for function calls then O(1) otherwise O(h)
where h is the height of the tree.
Preorder Traversal:
Algorithm Preorder(tree)
1. Visit the root.
2. Traverse the left subtree, i.e., call Preorder(left subtree)
3. Traverse the right subtree, i.e., call Preorder(right subtree)
Uses of Preorder:
Preorder traversal is used to create a copy of the tree. Preorder traversal is also used to get prefix
expressions on an expression tree.
Time Complexity: O(N)
Auxiliary Space: If we don’t consider the size of the stack for function calls then O(1) otherwise O(h)
where h is the height of the tree.
Postorder Traversal:
Algorithm Postorder(tree)
1. Traverse the left subtree, i.e., call Postorder(left subtree)
2. Traverse the right subtree, i.e., call Postorder(right subtree)
3. Visit the root.
Uses of Postorder:
Postorder traversal is used to delete a tree, since a node should be deleted only after both of its subtrees have been deleted. Postorder traversal is also useful to get the postfix expression of an expression tree.
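Recursive functions for the three depth-first traversals are sketched below (a minimal example with the usual binary tree node; the sample tree matches the level order example further below):

#include <stdio.h>

struct Node { int data; struct Node *left, *right; };

/* Inorder: left subtree, root, right subtree. */
void inorder(struct Node *node) {
    if (node == NULL) return;
    inorder(node->left);
    printf("%d ", node->data);
    inorder(node->right);
}

/* Preorder: root, left subtree, right subtree. */
void preorder(struct Node *node) {
    if (node == NULL) return;
    printf("%d ", node->data);
    preorder(node->left);
    preorder(node->right);
}

/* Postorder: left subtree, right subtree, root. */
void postorder(struct Node *node) {
    if (node == NULL) return;
    postorder(node->left);
    postorder(node->right);
    printf("%d ", node->data);
}

int main(void) {
    /* Tree:      1
                 / \
                2   3
               / \
              4   5            */
    struct Node n4 = {4, NULL, NULL}, n5 = {5, NULL, NULL}, n3 = {3, NULL, NULL};
    struct Node n2 = {2, &n4, &n5}, n1 = {1, &n2, &n3};
    inorder(&n1);   printf("\n");   /* 4 2 5 1 3 */
    preorder(&n1);  printf("\n");   /* 1 2 4 5 3 */
    postorder(&n1); printf("\n");   /* 4 5 2 3 1 */
    return 0;
}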
Some other Tree Traversals Techniques:
Some of the other tree traversals are:
Level Order Traversal
For each node, first the node is visited and then its child nodes are put in a FIFO queue. Then the next node is popped from the queue, its child nodes are enqueued in turn, and this repeats until the queue becomes empty.
Example (input tree not reproduced):
Level Order Traversal:
1
2 3
4 5
Boundary Traversal
The Boundary Traversal of a Tree includes:
1. left boundary (nodes on left excluding leaf nodes)
2. leaves (consist of only the leaf nodes)
3. right boundary (nodes on right excluding leaf nodes)
Diagonal Traversal
In the Diagonal Traversal of a Tree, all the nodes in a single diagonal will be printed one by one.
Example output, diagonal by diagonal (input tree not reproduced):
8 10 14
3 6 7 13
1 4