Advanced Data Structures - Notes

The document covers advanced data structures and algorithms, detailing various topics such as linked lists, stacks, queues, searching and sorting algorithms, trees, and graphs. It includes a structured curriculum divided into five units, each focusing on specific data structures and their operations, along with algorithms for sorting and searching. Additionally, it provides sample questions and answers related to these topics, emphasizing practical applications and theoretical concepts.

Uploaded by caleb dharmaraju

Advanced Data Structures & Algorithms

UNIT I: Introduction to Data Structures, Singly Linked Lists, Doubly Linked Lists, Circular Lists
Algorithms. Stacks and Queues: Algorithm Implementation using Linked Lists.

UNIT II: Searching - Linear and Binary Search Methods. Sorting - Bubble Sort, Selection Sort,
Insertion Sort, Quick Sort, Merge Sort. Trees - Binary Trees, Properties, Representation and
Traversals (DFT, BFT), Expression Trees (Infix, Prefix, Postfix). Graphs - Basic Concepts,
Storage Structures and Traversals.

UNIT III: Dictionaries, ADT, The List ADT, Stack ADT, Queue ADT, Hash Table Representation,
Hash Functions, Collision Resolution - Separate Chaining, Open Addressing - Linear Probing,
Double Hashing.

UNIT IV: Priority Queues - Definition, ADT, Realizing a Priority Queue Using Heaps, Insertion,
Deletion. Search Trees - Binary Search Trees, Definition, ADT, Implementation, Operations -
Searching, Insertion, Deletion.

UNIT V: Search Trees - AVL Trees, Definition, Height of AVL Tree, Operations - Insertion,
Deletion and Searching, Introduction to Red-Black and Splay Trees, B-Trees, Height of B-Tree,
Insertion, Deletion and Searching, Comparison of Search Trees.

Previous paper:
UNIT-I
1. a With neat diagrams, explain the following operations in Singly Linked List data
structure.
i) Insert element at the end
ii) Delete the specified element
b. Write an algorithm for Push and Pop operations of Stack and list out its applications.
OR
2. a. Which operation is more efficient in a Doubly Linked List than in a Singly Linked List?
b. Explain in detail the operations that can be performed on a Circular Linked List with
appropriate diagrams.
UNIT-II
3. a Write the Insertion sort algorithm and explain the step by step procedure of Insertion
Sort method for sorting the following unordered list of elements 25,67,56,32,12,96,82,44.
Trace the steps to search for the element 82 using Binary search.
b Write the BFS algorithm and derive its complexity.
OR
4.a Write and explain the Quick sort algorithm and derive its best, worst and average case
time complexities.
b Briefly discuss various Tree traversal techniques.

UNIT-III
5. a Explain the operations of List ADT. And also specify their applications in real time.
b. What is meant by Hashing in data structure? Why do we need Hashing? Explain about
various types of Hash functions.
OR
6.a Explain the operations of Queue ADT. And also specify their applications in real time.
b Explain Double Hashing with an example.

UNIT - 4
7.a Explain the implementation of Priority Queue with an example.
b. Explain the delete and search operations in a Binary Search Tree with examples.
OR
8.a Write the properties of Binary Search Trees.
b. Explain the construction of a Binary Search Tree by inserting the elements 13, 3, 4, 12, 14,
10, 5, 1, 8, 2, 7, 9, 11, 6, 18 in the same order, starting from an empty tree.

UNIT-V
9. a Explain how the AVL tree insertion process makes it balanced through various rotations.
b. Explain the properties of Red-Black trees with a neat diagram and discuss its advantages.
OR
10. Construct a B-tree of order 4 for the following list of elements.
1, 4, 7, 10, 17, 21, 31, 25, 19, 20, 28, 42
i) Assume the initial B-tree is empty. ii) Insertion should take place in the given order.
iii) Show the tree after deleting the five elements 28, 31, 21, 25, and 19 in
sequence.

Answers :
1. Insert Element at the End: Known as "Insertion at Tail." This requires traversing the entire list to find
the last node and then adding the new node after this. The new node's next pointer is set to null,
indicating the end of the list. Since you need to traverse the entire list, this operation has a time
complexity of O(n), where n is the number of nodes in the list.

Steps:

1. Create a new node:

o Allocate memory for the new node and assign the data to it.

o Set the next pointer of the new node to NULL.

2. Check if the list is empty:

o If head == NULL, make the new node the head of the list.

3. Traverse to the last node:

o Start from the head and follow the next pointers until you reach the last node (node->next ==
NULL).

4. Update the last node's next pointer:

o Set the next pointer of the last node to point to the new node.

Diagram:

Initial List: [10] -> [20] -> [30] -> NULL

Insert 40:

[10] -> [20] -> [30] -> [40] -> NULL

Pseudocode (Java):

Node insertAtEnd(Node head, int data) {
    Node newNode = new Node(data);
    newNode.next = null;
    if (head == null) {
        return newNode; // New node becomes the head
    }
    Node current = head;
    while (current.next != null) {
        current = current.next; // Traverse to the last node
    }
    current.next = newNode; // Link the last node to the new node
    return head;
}

2. Delete the Specified Element: To delete a node, you update the next pointer of the preceding
node so that it skips the node to be deleted and points to the following node. Depending on the
position of the node to delete, this can involve traversing the list, making it O(n).

Steps:

1. Check if the list is empty:

o If head == NULL, there is nothing to delete.

2. Check if the head node is the one to delete:

o If head->data == target, update the head to head->next.

3. Traverse the list to find the target node:

o Maintain a pointer to the previous node.

o If the current->data == target, update prev->next to current->next.

4. Free the memory of the target node:

o Deallocate memory for the deleted node.

Diagram:
Initial List: [10] -> [20] -> [30] -> NULL

Delete 20:
[10] -> [30] -> NULL
Pseudocode:

Node deleteElement(Node head, int target) {
    if (head == null) {
        return null; // List is empty
    }
    if (head.data == target) {
        return head.next; // Head node is deleted
    }
    Node current = head;
    Node prev = null;
    while (current != null && current.data != target) {
        prev = current;
        current = current.next; // Traverse the list
    }
    if (current != null) {
        prev.next = current.next; // Bypass the target node
    }
    return head;
}

1b. What is a Stack?

A Stack is a linear data structure that holds an ordered sequence of elements; it is an abstract
data type. A Stack works on the LIFO principle (Last In, First Out): the element that was inserted
last is removed first. To implement a Stack, we maintain a pointer to the top of the Stack, which
is the last element inserted, because elements can only be accessed from the top of the Stack.
Operation on Stack :

1. PUSH: The PUSH operation inserts a new element into a Stack. A new element is always inserted
at the topmost position of the Stack, so we must first check whether the Stack is full, i.e.,
whether TOP = Max-1. If it is, no more elements can be inserted, and any attempted insertion
produces a Stack overflow message.

Algorithm:
Step-1: If TOP = Max-1

Print “Overflow”

Goto Step 4

Step-2: Set TOP= TOP + 1

Step-3: Set Stack[TOP]= ELEMENT

Step-4: END

2. POP: POP deletes an element from the Stack. Before deleting an element, check whether the
Stack is empty, i.e., whether TOP = NULL. If it is, no deletion can be performed, and any
attempted deletion produces a Stack underflow message.

Algorithm:

Step-1: If TOP= NULL

Print “Underflow”

Goto Step 4

Step-2: Set VAL= Stack[TOP]

Step-3: Set TOP= TOP-1

Step-4: END
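The PUSH and POP algorithms above can be sketched as an array-based stack in Java. This is a minimal illustration; the class name IntStack and the sentinel return value for underflow are our own choices, not from the notes.

```java
// Minimal array-based stack illustrating the PUSH/POP algorithms above.
public class IntStack {
    private final int[] stack;
    private int top = -1; // -1 means the stack is empty

    public IntStack(int max) { stack = new int[max]; }

    // PUSH: fails (returns false) on overflow, i.e. TOP == Max-1
    public boolean push(int element) {
        if (top == stack.length - 1) return false; // Overflow
        stack[++top] = element;
        return true;
    }

    // POP: returns Integer.MIN_VALUE as an underflow sentinel (our choice)
    public int pop() {
        if (top == -1) return Integer.MIN_VALUE; // Underflow
        return stack[top--];
    }

    public static void main(String[] args) {
        IntStack s = new IntStack(2);
        s.push(10);
        s.push(20);
        System.out.println(s.pop()); // 20 (LIFO: last in, first out)
        System.out.println(s.pop()); // 10
    }
}
```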

Application of the Stack

1. A Stack can be used for evaluating expressions consisting of operands and operators.

2. Stacks can be used for Backtracking, i.e., to check parenthesis matching in an expression.

3. It can also be used to convert one form of expression to another form.

4. It can be used for systematic Memory Management.

OR
Applications of Stack

1. Expression Evaluation and Conversion:
   1. Infix to Postfix/Prefix conversion.
   2. Evaluation of Postfix expressions.

2. Function Call Management: Tracks active functions and their states in recursion.

3. Undo/Redo Operations: Used in applications like text editors to store actions.

4. Backtracking: Helps in solving problems like maze navigation or the N-Queens problem.

5. Parsing and Syntax Checking: Verifies the validity of parentheses in expressions.

6. Browser Navigation: Used to implement forward and backward navigation.


2 a.
When is a Doubly linked list more efficient than a singly linked list?

Doubly linked lists are more efficient than singly linked lists in the following cases:

In Bi-directional Traversals:

 Doubly linked lists allow for traversal in both the forward and backward directions, while
singly linked lists only allow for forward traversal.

 This can be useful for applications where it is necessary to traverse the list in both directions,
such as a web browser's history list or a music player's playlist.

Deletion of a given node:

 Since Doubly Linked Lists have two pointers for each node, it is possible to delete a node of
the list without having to traverse the entire list.

 This can be significantly faster than deleting an element in a singly linked list, which requires
traversing the list from the beginning until the element is found because we need previous
pointer to reconnect the singly linked list.

 Doubly linked lists are more efficient than singly linked lists in this case because each
node has two pointers, one to the next node and one to the previous node, so no traversal of
the list is needed.

 This allows direct access to a node and its adjacent nodes, which means this operation can
be performed in O(1) time, regardless of the length of the list.

When implementing other data structures, such as stacks and queues:

 Doubly linked lists can be used to implement stacks and queues very efficiently. In fact,
doubly linked lists are the preferred data structure for implementing these data structures in
many programming languages.

 Few examples, where doubly linked list might be preferred are:

o Deque Operations: Doubly linked lists are used in implementing deque (double-
ended queue) data structures efficiently. Deque supports insertion and deletion from
both ends in constant time with a doubly linked list. This makes operations like
enqueue and dequeue in a queue or push and pop in a stack more efficient when
implemented using a doubly linked list.

o Memory Allocation and Deallocation: Doubly linked lists can be helpful in memory
management scenarios where you need to allocate and deallocate memory blocks
dynamically. When deallocating memory blocks, if you have a reference to a node in
a doubly linked list, you can remove it in constant time, making memory deallocation
more efficient compared to singly linked lists.

o Undo Operations: Doubly linked lists are useful for implementing undo functionality
in applications. Each operation can be stored as a node, and moving backward
(undo) or forward (redo) in the history is efficient with the bidirectional pointers in a
doubly linked list.
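The O(1) deletion described above can be sketched as follows; the class and helper names are our own illustration. Given a direct reference to a node, its own prev/next pointers are enough to unlink it, with no traversal.

```java
// Sketch: deleting a node from a doubly linked list in O(1),
// given a direct reference to the node (no traversal needed).
public class Dll {
    static class Node {
        int data; Node prev, next;
        Node(int d) { data = d; }
    }
    Node head;

    // Helper to build the list; returns the new node so callers keep a reference.
    Node append(int d) {
        Node n = new Node(d);
        if (head == null) { head = n; return n; }
        Node cur = head;
        while (cur.next != null) cur = cur.next;
        cur.next = n;
        n.prev = cur;
        return n;
    }

    // O(1) unlink using the node's own prev/next pointers.
    void delete(Node n) {
        if (n.prev != null) n.prev.next = n.next; else head = n.next;
        if (n.next != null) n.next.prev = n.prev;
    }

    String toStringList() {
        StringBuilder sb = new StringBuilder();
        for (Node c = head; c != null; c = c.next) sb.append(c.data).append(" ");
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        Dll list = new Dll();
        list.append(10);
        Dll.Node mid = list.append(20);
        list.append(30);
        list.delete(mid); // no traversal required
        System.out.println(list.toStringList()); // 10 30
    }
}
```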

Conclusion:

Doubly linked lists are more efficient than singly linked lists in terms of performance, but they
also require more memory. Doubly linked lists are often used in applications where performance
is a critical concern, such as web browsers, operating systems, and databases. However, they are
not as widely used as singly linked lists, which are simpler to implement and require less
memory. The best choice for a particular application will depend on the specific requirements of
that application.

2b.
A Circular Linked List (CLL) is a variation of a linked list where the last node points back to the
first node, forming a circle. This structure can be used with either singly linked or doubly linked
lists.

Here’s an explanation of the operations that can be performed on a Circular Linked List:

1. Traversal

Purpose: Visit and process each node in the circular list.

 Start from the head.

 Keep visiting the next node until you return to the head.

 Algorithm:

1. Initialize current = head.

2. Do:

 Print or process current.data.

 Move to current.next.

3. While current != head.
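The traversal algorithm above maps naturally to a do-while loop, since the loop body must run once before the "back at head" check. A minimal sketch (class and method names are our own):

```java
// Circular singly linked list traversal using the do-while pattern above.
public class CircularList {
    static class Node { int data; Node next; Node(int d) { data = d; } }
    Node head;

    void append(int d) {
        Node n = new Node(d);
        if (head == null) { head = n; n.next = n; return; }
        Node cur = head;
        while (cur.next != head) cur = cur.next; // find the last node
        cur.next = n;
        n.next = head; // last node points back to head
    }

    // Visit every node exactly once, stopping when we return to head.
    String traverse() {
        if (head == null) return "";
        StringBuilder sb = new StringBuilder();
        Node current = head;
        do {
            sb.append(current.data).append(" ");
            current = current.next;
        } while (current != head);
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        CircularList c = new CircularList();
        c.append(1); c.append(2); c.append(3);
        System.out.println(c.traverse()); // 1 2 3
    }
}
```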

2. Insertion

Insertion in a circular linked list can occur at different positions:

1. At the Beginning:

o Adjust the last node to point to the new node.

o Update the new node to point to the original head.

o Set the new node as head.

2. At the End:
o Insert the new node after the last node.

o Update the next of the new node to point to head.

o Update the next pointer of the last node to the new node.

3. At a Specific Position:

o Traverse the list to find the desired position.

o Insert the new node by adjusting the next pointers of the surrounding nodes.

3. Deletion

Deletion in a circular linked list also depends on the position:

1. Delete the Head Node:

o Find the last node.

o Update its next pointer to point to the second node.

o Update head to the second node.

2. Delete a Specific Node:

o Traverse the list to find the node just before the one to be deleted.

o Update its next pointer to skip the node to be deleted.

3. Delete the Last Node:

o Traverse the list to find the second-to-last node.

o Update its next pointer to point to head.

4. Searching

Purpose: Find if a specific value exists in the list.

 Traverse the list starting from head.

 Compare each node's value with the target value.

 If you reach head again without finding the value, it is not present.

Advantages of Circular Linked Lists

1. Efficient for tasks that require cycling through data.

2. No need for a NULL pointer; the list never ends.

3. Simplifies implementation of circular queues and buffers.


Example Diagram of a Circular Linked List

Let’s take a circular singly linked list:


Head -> Node1 -> Node2 -> Node3 -> Head

Each node has a data field and a next pointer. The last node’s next points to the head.

3a.
Insertion Sort Algorithm

Insertion Sort is a comparison-based sorting algorithm that builds the sorted list one element at a
time by inserting each new element into its correct position in the already sorted portion.

Algorithm for Insertion Sort

Input: An array A of size n.


Output: The array sorted in ascending order.

1. For i = 1 to n-1:
   1.1. Let key = A[i].
   1.2. Set j = i - 1.
   1.3. While j >= 0 and A[j] > key:
        - Shift A[j] to the right (A[j+1] = A[j]).
        - Decrement j by 1.
   1.4. Insert key at position j+1 (A[j+1] = key).
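The algorithm above translates directly to Java; this sketch (our own naming) sorts the list from the question:

```java
import java.util.Arrays;

// Insertion sort, following the algorithm steps above.
public class InsertionSortDemo {
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];      // element to insert into the sorted prefix
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j]; // shift larger elements one position right
                j--;
            }
            a[j + 1] = key;      // insert key at its correct position
        }
    }

    public static void main(String[] args) {
        int[] a = {25, 67, 56, 32, 12, 96, 82, 44}; // the list from the question
        insertionSort(a);
        System.out.println(Arrays.toString(a)); // [12, 25, 32, 44, 56, 67, 82, 96]
    }
}
```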

Sorting the List Using Insertion Sort

Given list: 25, 67, 56, 32, 12, 96, 82, 44

Step-by-Step Procedure

1. Initial list: [25, 67, 56, 32, 12, 96, 82, 44]

2. Pass 1: Compare 67 with 25. Since 67 > 25, no shifting is needed.


Result: [25, 67, 56, 32, 12, 96, 82, 44]

3. Pass 2: Compare 56 with 67 and 25. Shift 67 to the right and insert 56.
Result: [25, 56, 67, 32, 12, 96, 82, 44]

4. Pass 3: Compare 32 with 67, 56, and 25. Shift 67 and 56 to the right and insert 32.
Result: [25, 32, 56, 67, 12, 96, 82, 44]

5. Pass 4: Compare 12 with 67, 56, 32, and 25. Shift all to the right and insert 12.
Result: [12, 25, 32, 56, 67, 96, 82, 44]

6. Pass 5: Compare 96 with 67. Since 96 > 67, no shifting is needed.


Result: [12, 25, 32, 56, 67, 96, 82, 44]

7. Pass 6: Compare 82 with 96. Shift 96 to the right and insert 82.
Result: [12, 25, 32, 56, 67, 82, 96, 44]
8. Pass 7: Compare 44 with 96, 82, 67, and 56, shifting each to the right, then stop at 32
(since 32 < 44) and insert 44.
Result: [12, 25, 32, 44, 56, 67, 82, 96]

Binary Search for Element 82

Binary Search Algorithm:

1. Start with low = 0 and high = n-1.

2. Repeat until low > high:
   2.1. Calculate mid = (low + high) // 2.
   2.2. Compare A[mid] with the target:
        - If A[mid] == target, return mid (element found).
        - If A[mid] < target, set low = mid + 1 (search right).
        - If A[mid] > target, set high = mid - 1 (search left).
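The same algorithm as a small iterative Java method (a sketch with our own naming):

```java
// Iterative binary search on a sorted array, as in the algorithm above.
public class BinarySearchDemo {
    static int binarySearch(int[] a, int target) {
        int low = 0, high = a.length - 1;
        while (low <= high) {
            int mid = (low + high) / 2;
            if (a[mid] == target) return mid;        // found at index mid
            else if (a[mid] < target) low = mid + 1; // search right half
            else high = mid - 1;                     // search left half
        }
        return -1; // target not present
    }

    public static void main(String[] args) {
        int[] sorted = {12, 25, 32, 44, 56, 67, 82, 96};
        System.out.println(binarySearch(sorted, 82)); // 6
    }
}
```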

Trace the Steps for Searching 82

Sorted list: [12, 25, 32, 44, 56, 67, 82, 96]

1. Initial state: low = 0, high = 7
   mid = (0 + 7) // 2 = 3
   Compare A[3] = 44 with 82. Since 44 < 82, set low = 4.

2. Second state: low = 4, high = 7
   mid = (4 + 7) // 2 = 5
   Compare A[5] = 67 with 82. Since 67 < 82, set low = 6.

3. Third state: low = 6, high = 7
   mid = (6 + 7) // 2 = 6
   Compare A[6] = 82 with 82. Since A[6] == 82, the element is found at index 6.

3b.
Breadth-first Search (BFS) is a graph traversal algorithm that explores all the vertices of a
graph in breadthwise order: it starts at a given vertex or node and visits all the vertices at
the same level before moving to the next level. It is typically implemented iteratively with a
queue (not recursively) and can be applied to both graph and tree data structures.

BFS Algorithm

For the BFS implementation, we will categorize each vertex of the graph into two categories:

1. Visited

2. Not Visited

Algorithm :
1. Start by selecting a starting vertex or node.
2. Add the starting vertex to the end of the queue.
3. Mark the starting vertex as visited and add it to the visited array/list.
4. While the queue is not empty, do the following steps:
o Remove the front vertex from the queue.
o Visit the removed vertex and process it.
o Enqueue all the adjacent vertices of the removed vertex that have not been
visited yet.
o Mark each visited adjacent vertex as visited and add it to the visited array/list.
5. Repeat step 4 until the queue is empty.
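The queue-based steps above can be sketched in Java over an adjacency-list graph; the graph representation and method names are our own illustration.

```java
import java.util.*;

// BFS over an adjacency-list graph, following the queue-based steps above.
public class BfsDemo {
    static List<Integer> bfs(Map<Integer, List<Integer>> adj, int start) {
        List<Integer> order = new ArrayList<>();
        Set<Integer> visited = new HashSet<>();
        Queue<Integer> queue = new ArrayDeque<>();
        queue.add(start);          // step 2: enqueue the starting vertex
        visited.add(start);        // step 3: mark it visited
        while (!queue.isEmpty()) { // step 4
            int v = queue.remove();   // remove the front vertex
            order.add(v);             // process it
            for (int w : adj.getOrDefault(v, List.of())) {
                if (visited.add(w)) { // enqueue unvisited neighbours
                    queue.add(w);
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> adj = new HashMap<>();
        adj.put(1, List.of(2, 3));
        adj.put(2, List.of(4));
        adj.put(3, List.of(4));
        System.out.println(bfs(adj, 1)); // [1, 2, 3, 4]
    }
}
```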

Complexity Of Breadth-First Search Algorithm :


The time complexity of the breadth-first search algorithm can be stated as O(|V| + |E|)
because, in the worst case, it explores every vertex and every edge. Here |V| is the number of
vertices in the graph and |E| is the number of edges.

The space complexity of the breadth-first search algorithm : You can define the space complexity as
O(|V|), where |V| is the number of vertices in the graph, and different data structures are needed to
determine which vertices have already been added to the queue. This is also the space necessary for
the graph, which varies depending on the graph representation used by the algorithm's
implementation.

4a.
Quick Sort Algorithm

Quick Sort is a divide-and-conquer sorting algorithm. It works by selecting a pivot element,
partitioning the array around the pivot so that all smaller elements are on one side and all
larger elements are on the other, and then recursively sorting the sub-arrays.

Algorithm for Quick Sort

Input: An array A[low…high].

Output: The array sorted in ascending order.

1. If low < high:
   1.1. Partition the array A[low…high] into two parts: elements smaller than or equal to the
        pivot go to the left; elements larger than the pivot go to the right.
   1.2. Get the partition index p.
   1.3. Recursively apply Quick Sort to A[low…p-1] and A[p+1…high].

Partition Function

The partition function rearranges the array and determines the position of the pivot.

1. Choose a pivot (usually the last element in the sub-array).


2. Initialize two pointers, i and j:

   o i tracks the boundary of smaller elements.

   o j iterates through the array.

3. For each element: if A[j] <= pivot, increment i and swap A[i] with A[j].

4. Finally, place the pivot in its correct position by swapping it with A[i+1].
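The partition steps above correspond to the last-element (Lomuto) scheme; a Java sketch with our own naming:

```java
import java.util.Arrays;

// Quick Sort with the last-element (Lomuto) partition scheme described above.
public class QuickSortDemo {
    static void quickSort(int[] a, int low, int high) {
        if (low < high) {
            int p = partition(a, low, high);
            quickSort(a, low, p - 1);  // sort elements left of the pivot
            quickSort(a, p + 1, high); // sort elements right of the pivot
        }
    }

    static int partition(int[] a, int low, int high) {
        int pivot = a[high]; // pivot = last element of the sub-array
        int i = low - 1;     // boundary of elements <= pivot
        for (int j = low; j < high; j++) {
            if (a[j] <= pivot) {
                i++;
                int t = a[i]; a[i] = a[j]; a[j] = t; // grow the small side
            }
        }
        int t = a[i + 1]; a[i + 1] = a[high]; a[high] = t; // place the pivot
        return i + 1;
    }

    public static void main(String[] args) {
        int[] a = {25, 67, 56, 32, 12, 96, 82, 44};
        quickSort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // [12, 25, 32, 44, 56, 67, 82, 96]
    }
}
```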

Time Complexity Analysis

1. Best Case (O(n log n)):

   o Occurs when the pivot divides the array into two nearly equal halves at each step.

   o The recursion depth is log n, and at each level O(n) work is done for partitioning.

   o Total time: O(n log n).

2. Worst Case (O(n^2)):

   o Happens when the pivot is always the smallest or largest element.

   o The recursion depth becomes n, and partitioning at each level takes O(n).

   o Total time: O(n^2).

3. Average Case (O(n log n)):

   o For a random pivot, the partitioning generally produces sub-arrays of roughly equal size.

   o The expected recursion depth is log n, and the total time is O(n log n).

Space Complexity

 Space usage depends on the recursion depth.

 In the best and average cases: O(log n), due to balanced recursion.

 In the worst case: O(n), if recursion depth is maximized.

Summary of Complexities

Case           Time Complexity   Space Complexity
Best Case      O(n log n)        O(log n)
Worst Case     O(n^2)            O(n)
Average Case   O(n log n)        O(log n)

4b.

Tree traversal is a process of visiting (reading or updating) each node in a tree data structure in a
specific order. The traversal techniques can be classified into two main types: Depth-First Traversal
(DFT) and Breadth-First Traversal (BFT).

1. Depth-First Traversal (DFT)

This involves exploring as far as possible along a branch before backtracking. It includes the following
methods:

a. Inorder Traversal (Left-Root-Right)

 Traverse the left subtree.

 Visit the root node.

 Traverse the right subtree.

 Use Case: Used in binary search trees (BSTs) to retrieve elements in sorted order.

b. Preorder Traversal (Root-Left-Right)

 Visit the root node.

 Traverse the left subtree.

 Traverse the right subtree.

 Use Case: Used to create a copy of the tree or to prefix mathematical expressions.

c. Postorder Traversal (Left-Right-Root)

 Traverse the left subtree.

 Traverse the right subtree.

 Visit the root node.


 Use Case: Used to delete a tree or to evaluate postfix expressions.

2. Breadth-First Traversal (BFT)

This involves visiting all the nodes at one level before moving to the next level. It is typically
implemented using a queue.

a. Level-Order Traversal

 Visit nodes level by level from top to bottom and left to right.

 Use Case: Used in shortest path algorithms, such as BFS in graph theory.

Key Differences

Traversal Type Method Order of Nodes Visited

Depth-First Inorder Left → Root → Right

Preorder Root → Left → Right

Postorder Left → Right → Root

Breadth-First Level Order Top level → Bottom level (level by level)
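The four traversals above can be sketched on a small binary tree; the class and method names are our own illustration.

```java
import java.util.*;

// The four traversals above, demonstrated on a tiny binary tree.
public class TraversalDemo {
    static class Node {
        int data; Node left, right;
        Node(int d) { data = d; }
    }

    static void inorder(Node n, List<Integer> out) {   // Left-Root-Right
        if (n == null) return;
        inorder(n.left, out); out.add(n.data); inorder(n.right, out);
    }

    static void preorder(Node n, List<Integer> out) {  // Root-Left-Right
        if (n == null) return;
        out.add(n.data); preorder(n.left, out); preorder(n.right, out);
    }

    static void postorder(Node n, List<Integer> out) { // Left-Right-Root
        if (n == null) return;
        postorder(n.left, out); postorder(n.right, out); out.add(n.data);
    }

    static List<Integer> levelOrder(Node root) {       // BFT using a queue
        List<Integer> out = new ArrayList<>();
        Queue<Node> q = new ArrayDeque<>();
        if (root != null) q.add(root);
        while (!q.isEmpty()) {
            Node n = q.remove();
            out.add(n.data);
            if (n.left != null) q.add(n.left);
            if (n.right != null) q.add(n.right);
        }
        return out;
    }

    public static void main(String[] args) {
        Node root = new Node(2);
        root.left = new Node(1);
        root.right = new Node(3);
        List<Integer> in = new ArrayList<>();
        inorder(root, in);
        System.out.println(in);               // [1, 2, 3] (sorted order for a BST)
        System.out.println(levelOrder(root)); // [2, 1, 3]
    }
}
```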

5a.
Operations of List ADT

The List Abstract Data Type (ADT) represents a collection of ordered elements, where the same
element may occur multiple times. It supports various operations for manipulating the collection.

Key Operations of List ADT

1. Insertion

o Adds an element at a specified position in the list.

o Can be done at the beginning, end, or any index.

2. Deletion

o Removes an element from a specified position in the list.

o Can involve shifting elements to maintain order.

3. Traversal

o Visits and processes each element in the list in a sequential manner.


4. Search

o Finds and returns the position of a specific element if it exists in the list.

5. Access/Update

o Retrieves or modifies the value of an element at a specified index.

6. Concatenation

o Combines two lists into one.

7. Sorting

o Arranges the elements in ascending or descending order.

8. Length/Size

o Returns the number of elements in the list.
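Most of these operations map directly onto Java's built-in List interface; a short sketch (the values are arbitrary examples):

```java
import java.util.*;

// The List ADT operations above, shown with Java's built-in ArrayList.
public class ListAdtDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        list.add("b");               // insertion at the end
        list.add(0, "a");            // insertion at a given position
        list.add("d");
        list.remove("d");            // deletion
        list.set(1, "c");            // access/update by index
        int pos = list.indexOf("c"); // search (returns 1 here)
        list.addAll(List.of("x"));   // concatenation with another list
        Collections.sort(list);      // sorting
        // traversal + length/size:
        System.out.println(list + " size=" + list.size()); // [a, c, x] size=3
    }
}
```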

Applications of List ADT in Real-Time

1. Task Scheduling

o Used to maintain a list of tasks in project management tools like Trello or Asana.

2. Playlist Management

o Stores and manages a collection of songs in a media player.

3. Text Editing

o Handles a sequence of characters or words in a text editor, where users can insert,
delete, or modify content.

4. Shopping Cart in E-commerce

o Maintains a list of items added by the user for purchase.

5. Navigation Systems

o Stores a sequence of locations or routes to guide a user.

6. Event Handling

o Keeps a list of events (e.g., user actions) to process in applications.

7. Databases

o Stores and manages records in tables or collections in database systems.

8. Social Media Feeds

o Displays a sequential list of posts, comments, or notifications.

9. Inventory Management

o Tracks a list of items in warehouses or stores.


10. Gaming Leaderboards

o Maintains a sorted list of players based on their scores.

5b.
What is Hashing in Data Structures?

Hashing is a technique used in data structures to map data (keys) to a fixed-size table (hash table)
using a hash function. The goal of hashing is to enable fast data retrieval, insertion, and deletion
operations, often in constant time O(1).

Why Do We Need Hashing?

1. Fast Access: Hashing provides efficient access to data compared to linear or binary search.

2. Memory Optimization: A hash table uses a fixed-size array, avoiding excessive memory
usage.

3. Collision Handling: Hashing allows multiple values to share the same index in the table and
resolves these collisions efficiently.

4. Applications: Used in databases, caching, cryptography, indexing, and more.

Types of Hash Functions

A hash function maps a given key to an index in the hash table. A good hash function minimizes
collisions and distributes keys uniformly across the table.

1. Division (Modulo) Method

 Computes the index as the remainder of dividing the key by the table size:
h(key) = key mod tableSize

 Example: If the key is 50 and the table size is 7, h(50) = 50 mod 7 = 1.

2. Multiplication Method

 Uses a constant A (where 0 < A < 1) and computes the index as:
h(key) = floor(tableSize * ((key * A) mod 1))

 Advantages: Provides better distribution of keys than the division method.

3. Mid-Square Method

 Squares the key and extracts a portion of the digits from the middle to compute the index.

 Example: If the key is 1234, square it to get 1522756. Take the middle two digits, 27, and
use them as the index.
4. Folding Method

 Divides the key into equal parts, adds them together, and computes the index using the
modulo operation.

 Example: For the key 987654 and table size 100:

o Divide into parts 98, 76, 54.

o Add: 98 + 76 + 54 = 228.

o Compute index: 228 mod 100 = 28.

5. Universal Hashing

 Uses a randomization approach to reduce collisions. A random hash function is selected from
a family of functions at runtime.

6. String Hashing

 Converts strings into numerical values and hashes them using one of the above methods.

 Common Techniques:

o Polynomial Hashing:
h(s) = (s[0]·p^0 + s[1]·p^1 + ... + s[n-1]·p^(n-1)) mod tableSize, where p is a prime number.

7. Cryptographic Hashing

 Uses secure hash functions like MD5, SHA-1, or SHA-256 for sensitive applications like
password storage and digital signatures.
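A few of the hash functions above can be sketched in Java; the constant chosen for the multiplication method and the 2-digit split in the folding method are our own illustrative choices.

```java
// Sketches of the division, multiplication, and folding hash functions above.
public class HashFunctions {
    // Division method: h(key) = key mod tableSize
    static int division(int key, int tableSize) {
        return key % tableSize;
    }

    // Multiplication method: h(key) = floor(tableSize * ((key * A) mod 1))
    static int multiplication(int key, int tableSize) {
        double A = 0.6180339887; // a commonly used constant (0 < A < 1); our choice
        double frac = (key * A) % 1.0;
        return (int) (tableSize * frac);
    }

    // Folding method: split the key into 2-digit parts, sum them, take modulo.
    static int folding(int key, int tableSize) {
        int sum = 0;
        while (key > 0) {
            sum += key % 100; // take the last two digits as one part
            key /= 100;
        }
        return sum % tableSize;
    }

    public static void main(String[] args) {
        System.out.println(division(50, 7));      // 1  (50 mod 7)
        System.out.println(folding(987654, 100)); // 28 (98+76+54 = 228, 228 mod 100)
    }
}
```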

Good Hash Function Properties

1. Uniform Distribution: Spreads keys uniformly across the table.

2. Minimizes Collisions: Avoids clustering of keys.

3. Deterministic: Produces the same index for the same key.

4. Efficient Computation: Easy and fast to compute.

6a. A queue in a data structure works like the line at a movie ticket counter: both ends of this
abstract data type are in use, with insertion happening at one end (the rear) and deletion at the
other (the front), just as people join the back of the line and are served from the front.
Basic Operations for Queue in Data Structure :
 Enqueue() - Insertion of elements to the queue.

 Dequeue() - Removal of elements from the queue.

 Peek() - Acquires the data element available at the front node of the queue without deleting it.

 isFull() - Validates if the queue is full.

 isNull() - Checks if the queue is empty.

Enqueue() Operation

 Step 1: Check if the queue is full.

 Step 2: If the queue is full, Overflow error.

 Step 3: If the queue is not full, increment the rear pointer to point to the next available
empty space.

 Step 4: Add the data element to the queue location where the rear is pointing.

 Step 5: For example, enqueuing 7, 2, and -9 adds them to the rear of the queue in that order.

Dequeue() Operation

 Step 1: Check if the queue is empty.

 Step 2: If the queue is empty, Underflow error.

 Step 3: If the queue is not empty, access the data where the front pointer is pointing.

 Step 4: Increment front pointer to point to the next available data element.

 Step 5: For example, dequeuing three times removes 7, 2, and -9 from the queue data structure in that order.
Peek() Operation

 Step 1: Check if the queue is empty.

 Step 2: If the queue is empty, return “Queue is Empty.”

 Step 3: If the queue is not empty, access the data where the front pointer is pointing.

 Step 4: Return data.

isFull() Operation

 Step 1: Check if rear == MAXSIZE - 1.

 Step 2: If they are equal, return “Queue is Full.”

 Step 3: If they are not equal, return “Queue is not Full.”

isNull() Operation

 Step 1: Check if the rear and front are pointing to null memory space, i.e., -1.

 Step 2: If they are pointing to -1, return “Queue is empty.”

 Step 3: If they are not equal, return “Queue is not empty.”
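The five operations above can be sketched as a simple linear (non-circular) array queue, matching the front/rear pointer conventions in the steps; the class name and null-return convention are our own choices.

```java
// Simple linear array-based queue implementing the operations above.
public class ArrayQueue {
    private final int[] data;
    private int front = -1, rear = -1; // -1 means the queue is empty

    public ArrayQueue(int maxSize) { data = new int[maxSize]; }

    public boolean isFull() { return rear == data.length - 1; } // rear == MAXSIZE-1
    public boolean isNull() { return front == -1; }

    // Enqueue: fails (returns false) on overflow.
    public boolean enqueue(int x) {
        if (isFull()) return false;
        if (isNull()) front = 0;
        data[++rear] = x; // increment rear, then store
        return true;
    }

    // Dequeue: returns null on underflow (our convention for this sketch).
    public Integer dequeue() {
        if (isNull()) return null;
        int x = data[front];
        if (front == rear) { front = rear = -1; } // queue became empty
        else front++;
        return x;
    }

    public Integer peek() { return isNull() ? null : data[front]; }

    public static void main(String[] args) {
        ArrayQueue q = new ArrayQueue(3);
        q.enqueue(7); q.enqueue(2); q.enqueue(-9);
        System.out.println(q.dequeue()); // 7 (FIFO: first in, first out)
        System.out.println(q.peek());    // 2
    }
}
```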

Applications of Queue

Queue, as the name suggests, is utilized when you need to regulate a group of
objects in order. This data structure caters to the need for First Come First Serve
problems in different software applications. The scenarios mentioned below are a
few systems that use the queue data structure to serve their needs –
 Printers: Queue data structure is used in printers to maintain the order of pages
while printing.

 Interrupt handling in computers: Interrupts are handled in the same order as they arrive,
i.e., the interrupt that comes first will be dealt with first.

 Process scheduling in Operating systems: Queues are used to implement round-robin
scheduling algorithms in computer systems.

 Switches and Routers: Both switch and router interfaces maintain ingress
(inbound) and egress (outbound) queues to store packets.

 Customer service systems: It develops call center phone systems using the
concepts of queues.

6b.
Double hashing is an open-addressing collision-resolution technique that uses a second hash
function to determine the probe step. When a key collides at h1(key), the slots
(h1(key) + i * h2(key)) mod tableSize are probed for i = 1, 2, 3, ... until an empty slot is
found. The second function h2(key) must never evaluate to 0; a common choice is
h2(key) = R - (key mod R) for a prime R smaller than the table size, so every probe advances.
Example (tableSize = 7, h1(key) = key mod 7, h2(key) = 5 - (key mod 5)): inserting 15 places it
at index 1; inserting 22 also hashes to index 1, a collision, so with h2(22) = 3 we probe
(1 + 1*3) mod 7 = 4 and place 22 at index 4.
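A minimal, hedged Java sketch of double hashing: collisions at h1(key) are resolved by probing (h1(key) + i * h2(key)) mod tableSize for i = 0, 1, 2, ... The table size and the two hash functions below are our own illustrative choices.

```java
// Double hashing sketch: h1(k) = k mod SIZE, h2(k) = 5 - (k mod 5).
// Probe sequence: (h1(k) + i * h2(k)) mod SIZE for i = 0, 1, 2, ...
public class DoubleHashing {
    static final int SIZE = 7;
    Integer[] table = new Integer[SIZE]; // null = empty slot

    int h1(int k) { return k % SIZE; }
    int h2(int k) { return 5 - (k % 5); } // never 0, so probes always advance

    boolean insert(int key) {
        for (int i = 0; i < SIZE; i++) {
            int idx = (h1(key) + i * h2(key)) % SIZE;
            if (table[idx] == null) { table[idx] = key; return true; }
        }
        return false; // probe sequence exhausted (table effectively full)
    }

    int find(int key) {
        for (int i = 0; i < SIZE; i++) {
            int idx = (h1(key) + i * h2(key)) % SIZE;
            if (table[idx] == null) return -1; // would have been placed here
            if (table[idx] == key) return idx;
        }
        return -1;
    }

    public static void main(String[] args) {
        DoubleHashing h = new DoubleHashing();
        h.insert(15); // h1(15) = 1, slot 1 is free
        h.insert(22); // h1(22) = 1 collides; h2(22) = 3, probe (1+3) mod 7 = 4
        System.out.println(h.find(22)); // 4
    }
}
```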
7a. What is Priority Queue?
A priority queue is a type of data structure where each element has a priority assigned to it. Elements are processed based
on their priority, with higher priority elements being processed before lower priority ones. This is different from a regular
queue where elements are processed in the order they arrive (first-in, first-out).

Applications of Priority Queue

A common application of priority queue data structure is task scheduling, where tasks with higher priority need to be
executed before others. They are also used in graph algorithms like Dijkstra's shortest path algorithm, where nodes with the
lowest cost are processed first.

Real-life examples of priority queues:

 Emergency room patient management

 Task scheduling in operating systems

 Handling events in simulations

 Job scheduling in printing tasks

 Managing processes in computer systems

import java.util.ArrayList;

// Min-heap-based priority queue: the smallest element has the highest
// priority and is removed first.
public class PriorityQueue {
    private ArrayList<Integer> heap;

    public PriorityQueue() {
        heap = new ArrayList<>();
    }

    // Restore the heap property upward after an insertion.
    private void heapifyUp(int index) {
        int parentIndex = (index - 1) / 2;
        if (index > 0 && heap.get(index) < heap.get(parentIndex)) {
            int temp = heap.get(index);
            heap.set(index, heap.get(parentIndex));
            heap.set(parentIndex, temp);
            heapifyUp(parentIndex);
        }
    }

    // Restore the heap property downward after a removal.
    private void heapifyDown(int index) {
        int leftChildIndex = 2 * index + 1;
        int rightChildIndex = 2 * index + 2;
        int smallest = index;

        if (leftChildIndex < heap.size() && heap.get(leftChildIndex) < heap.get(smallest)) {
            smallest = leftChildIndex;
        }
        if (rightChildIndex < heap.size() && heap.get(rightChildIndex) < heap.get(smallest)) {
            smallest = rightChildIndex;
        }
        if (smallest != index) {
            int temp = heap.get(index);
            heap.set(index, heap.get(smallest));
            heap.set(smallest, temp);
            heapifyDown(smallest);
        }
    }

    public void insert(int element) {
        heap.add(element);
        heapifyUp(heap.size() - 1);
    }

    // Remove and return the highest-priority (smallest) element;
    // returns -1 as a sentinel when the queue is empty.
    public int remove() {
        if (heap.size() == 0) {
            return -1;
        }
        if (heap.size() == 1) {
            return heap.remove(0);
        }
        int root = heap.get(0);
        heap.set(0, heap.remove(heap.size() - 1));
        heapifyDown(0);
        return root;
    }

    public int peek() {
        if (heap.size() == 0) {
            return -1;
        }
        return heap.get(0);
    }

    public boolean isEmpty() {
        return heap.size() == 0;
    }

    public static void main(String[] args) {
        PriorityQueue pq = new PriorityQueue();
        pq.insert(10);
        pq.insert(5);
        pq.insert(20);
        System.out.println(pq.remove());  // Output: 5
        System.out.println(pq.peek());    // Output: 10
        System.out.println(pq.isEmpty()); // Output: false
    }
}

7b. A binary search tree (BST) is a sorted binary tree in which any key can be located efficiently using the binary search strategy. Building on binary search, the tree data structure, and binary trees, this section covers binary search trees, their operations, and their implementation.

Special Characteristics of Binary Search Tree :

These properties differentiate a Binary Search tree from a Binary tree.

1. The keys of nodes in the left subtree of a node are less than the node’s key.

2. The keys of nodes in the right subtree of a node are greater than the node’s key.

3. Both subtrees of each node are also BSTs i.e. they have the above two features.
1. Search

To search a binary search tree, we use the fact that each left subtree holds values smaller than its root and each right subtree holds values larger. If the target value is less than the root, it cannot be in the right subtree, so we search only the left subtree; if it is greater than the root, it cannot be in the left subtree, so we search only the right subtree.

According to the algorithm,

1. Start from the root node.

2. If the root node is null, the tree is empty, and the search is unsuccessful.

3. If the search value is equal to the value of the current node, return the current node.

4. If the search value is less than the value of the current node, go to the left child node.

5. If the search value is greater than the value of the current node, go to the right child node.

6. Repeat steps 3 to 5 until the search value is found or the current node is null.

7. If the search value is not found, return null.

In pseudocode:

If root == NULL
    return NULL;
If number == root->data
    return root->data;
If number < root->data
    return search(number, root->left)
If number > root->data
    return search(number, root->right)

If the value is found, it is returned and propagated back through each recursion step. Notice that search() calls itself recursively; whether it returns the matching node or NULL, that result is passed back level by level until the outermost call, search(root), returns the final result.

3. Deletion

To delete an element from a BST, first locate it with the search operation. If the element is found, there are three cases to consider:

1. The node to be deleted has no children: simply remove the node from the tree.

2. The node to be deleted has only one child: replace the node with its child, removing the child from its original position.

3. The node to be deleted has two children: find the node's in-order successor, replace the node's value with the successor's value, and then remove the in-order successor from its original position.

Complexity Analysis of Binary Search Tree Operations

Operation     Best Case    Average Case    Worst Case    Space Complexity
Insertion     O(log n)     O(log n)        O(n)          O(n)
Deletion      O(log n)     O(log n)        O(n)          O(n)
Search        O(log n)     O(log n)        O(n)          O(n)

8a.
Binary Search Tree Properties

A Binary Search Tree (BST) has specific properties that make it efficient for search, insertion, and
deletion operations:
1. Node Structure

Each node in a BST contains three parts:

 Data: The value stored in the node.

 Left Child: A pointer/reference to the left child node.

 Right Child: A pointer/reference to the right child node.

2. Binary Tree

A BST is a type of binary tree, which means each node can have at most two children: a left child and a right child.

3. Ordered Nodes

 Left Subtree: For any given node, all the values in its left subtree are smaller than the value
of the node.

 Right Subtree: For any given node, all the values in its right subtree are greater than the
value of the node.

Example:

Consider the following binary search tree:

        10
       /  \
      5    15
     / \     \
    3   7    20

 The root node is 10.

 The left child of 10 is 5, and all values in the left subtree (3, 5, 7) are less than 10.

 The right child of 10 is 15, and all values in the right subtree (15, 20) are greater than 10.

4. No Duplicate Values

A BST does not contain duplicate values. Each value must be unique to maintain the ordering property.

5. Recursive Definition

Each subtree of a BST is also a BST. This means the left and right children of any node are roots of
their own Binary Search Trees.
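The ordering and recursive properties can be verified together by passing value bounds down the tree. This is a hypothetical validator sketch, not part of the notes; the key point is that each node must lie strictly inside a range, not merely compare correctly with its parent.

```java
// Bounds-passing check of the BST properties described above.
public class BstCheck {
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    // Every node must lie strictly inside (lo, hi); the bounds tighten as
    // we descend, enforcing the subtree-wide ordering property. Strict
    // inequality also rejects duplicates, matching property 4.
    static boolean isBst(Node node, long lo, long hi) {
        if (node == null) return true;
        if (node.data <= lo || node.data >= hi) return false;
        return isBst(node.left, lo, node.data)
            && isBst(node.right, node.data, hi);
    }

    static boolean isBst(Node root) {
        return isBst(root, Long.MIN_VALUE, Long.MAX_VALUE);
    }

    public static void main(String[] args) {
        Node root = new Node(10);
        root.left = new Node(5);
        root.right = new Node(15);
        root.left.right = new Node(7);
        System.out.println(isBst(root));  // true
        root.left.right.data = 12;        // 12 in the left subtree of 10
        System.out.println(isBst(root));  // false: violates the ordering
    }
}
```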

Advantages of BST Properties

1. Efficient Searching: Quickly locate elements based on ordered structure.


2. Sorted Data: Easy retrieval of sorted data using inorder traversal.

3. Flexibility: Supports dynamic insertions and deletions.

4. Applications: Useful in databases, file systems, and searching algorithms like binary search.

8b.
Construct a BST by inserting the keys 13, 3, 4, 12, 14, 10, 5, 1, 8, 2, 7, 9, 11, 6, 18 in the given order.

Steps:

1. Insert 13 as the root.

2. Insert 3 as the left child of 13.

3. Insert 4 as the right child of 3.

4. Insert 12 as the right child of 4.

5. Insert 14 as the right child of 13.

6. Insert 10 as the left child of 12.

7. Insert 5 as the left child of 10.

8. Insert 1 as the left child of 3.

9. Insert 8 as the right child of 5.

10. Insert 2 as the right child of 1.

11. Insert 7 as the left child of 8.

12. Insert 9 as the right child of 8.

13. Insert 11 as the right child of 10.

14. Insert 6 as the left child of 7 (6 is greater than 5, less than 8, and less than 7).

15. Insert 18 as the right child of 14.
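The steps above can be reproduced with standard BST insertion. A sketch that inserts the sequence and reports a few of the resulting links (class and method names are illustrative):

```java
// Build the 8b tree by repeated BST insertion.
public class BstBuild {
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    static Node insert(Node root, int key) {
        if (root == null) return new Node(key);
        if (key < root.data) root.left = insert(root.left, key);
        else if (key > root.data) root.right = insert(root.right, key);
        return root;  // duplicates are ignored
    }

    public static void main(String[] args) {
        int[] keys = {13, 3, 4, 12, 14, 10, 5, 1, 8, 2, 7, 9, 11, 6, 18};
        Node root = null;
        for (int k : keys) root = insert(root, k);
        System.out.println(root.data);              // 13 (step 1)
        System.out.println(root.left.data);         // 3  (step 2)
        System.out.println(root.right.right.data);  // 18 (step 15)
    }
}
```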

9a.
AVL Tree and Rotations in Insertion

An AVL tree is a type of self-balancing binary search tree where the balance factor (difference in
heights of the left and right subtrees) of every node is either -1, 0, or 1.
When a new node is inserted into an AVL tree, it might violate the balance condition, causing the
tree to become unbalanced. To restore balance, rotations are used.

Steps for Insertion and Balancing in an AVL Tree

1. Insert the Node: Insert the node following the rules of a Binary Search Tree (BST).

2. Calculate the Balance Factor:

o For each node in the path from the inserted node to the root, calculate the balance
factor:

 Balance Factor = Height of Left Subtree − Height of Right Subtree

3. Identify the Unbalanced Node: If the balance factor of any node becomes less than -1 or
greater than 1, the tree is unbalanced.

4. Perform Rotations: Based on the structure of the tree, apply one of the four types of
rotations to restore balance.
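Steps 1–3 above hinge on computing heights and balance factors; a minimal Java sketch (the height convention of -1 for a null subtree and all names are assumptions):

```java
// Height and balance factor as used in the AVL steps above.
public class AvlBalance {
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    // Convention: an empty subtree has height -1, a leaf has height 0.
    static int height(Node n) {
        if (n == null) return -1;
        return 1 + Math.max(height(n.left), height(n.right));
    }

    static int balanceFactor(Node n) {
        return height(n.left) - height(n.right);
    }

    public static void main(String[] args) {
        Node root = new Node(10);
        root.right = new Node(20);
        root.right.right = new Node(30);          // Right-Right chain
        System.out.println(balanceFactor(root));  // -2: node 10 is unbalanced
    }
}
```

A balance factor outside {-1, 0, 1}, as in the -2 printed here, is exactly the condition in step 3 that triggers a rotation.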

Types of Rotations in AVL Trees

1. Right Rotation (Single Rotation)

 Trigger: Left subtree is too tall (Left-Left imbalance).

 Process: Rotate the unbalanced node down to the right, and its left child becomes the new
root of the subtree.

Example: Before Right Rotation (Left-Left chain):

      10
     /
    5
   /
  2

After Right Rotation:

    5
   / \
  2   10
2. Left Rotation (Single Rotation)

 Trigger: Right subtree is too tall (Right-Right imbalance).

 Process: Rotate the unbalanced node down to the left, and its right child becomes the new
root of the subtree.

Example: Before Left Rotation (Right-Right chain):

  10
    \
     15
       \
        20

After Left Rotation:

     15
    /  \
  10    20

3. Left-Right Rotation (Double Rotation)

 Trigger: Left subtree is too tall, and the imbalance is in the right child of the left subtree
(Left-Right imbalance).

 Process:

o First, perform a left rotation on the left child of the unbalanced node.

o Then, perform a right rotation on the unbalanced node.

Example: Before Left-Right Rotation:

    10
    /
   5
    \
     7

Step 1: Left Rotation on 5:

      10
      /
     7
    /
   5

Step 2: Right Rotation on 10:

     7
    / \
   5   10

4. Right-Left Rotation (Double Rotation)

 Trigger: Right subtree is too tall, and the imbalance is in the left child of the right subtree
(Right-Left imbalance).

 Process:

o First, perform a right rotation on the right child of the unbalanced node.

o Then, perform a left rotation on the unbalanced node.

Example: Before Right-Left Rotation:

  10
    \
     20
    /
   15

Step 1: Right Rotation on 20:

  10
    \
     15
       \
        20

Step 2: Left Rotation on 10:

     15
    /  \
  10    20
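The two single rotations above can be written as pointer surgery; the double rotations are just these applied in sequence (a left rotation on the child, then a right rotation on the node, or vice versa). A sketch with assumed names:

```java
// Single AVL rotations as local pointer updates.
public class AvlRotations {
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    // Right rotation: the left child becomes the new subtree root.
    static Node rightRotate(Node x) {
        Node y = x.left;
        x.left = y.right;   // y's right subtree moves under x
        y.right = x;
        return y;
    }

    // Left rotation: the right child becomes the new subtree root.
    static Node leftRotate(Node x) {
        Node y = x.right;
        x.right = y.left;   // y's left subtree moves under x
        y.left = x;
        return y;
    }

    public static void main(String[] args) {
        // Right-Right imbalance 10 -> 20 -> 30, as in rotation type 2 above.
        Node root = new Node(10);
        root.right = new Node(20);
        root.right.right = new Node(30);
        root = leftRotate(root);
        System.out.println(root.data);        // 20
        System.out.println(root.left.data);   // 10
        System.out.println(root.right.data);  // 30
    }
}
```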

Example: Insertion Process with Rotations

Let’s insert nodes 10, 20, 30 into an empty AVL tree.

1. Insert 10: 10

o No imbalance. Balance Factor is 0.

2. Insert 20:

   10
     \
      20

o No imbalance. Balance Factor of 10 is -1.

3. Insert 30:

   10
     \
      20
        \
         30

o Balance Factor of 10 becomes -2 (Right-Right imbalance). Perform a Left Rotation.

After Left Rotation:

20

/ \

10 30

9b.
Properties of Red-Black Trees
A Red-Black Tree is a type of self-balancing binary search tree that ensures the height of the tree is
logarithmic in terms of the number of nodes. It achieves balance by maintaining certain properties,
each of which must hold true for the tree to be a valid Red-Black Tree.

Key Properties of a Red-Black Tree:

1. Node Coloring:

o Each node in the tree is either red or black.

2. Root Property:

o The root node is always black.

3. Red Property (No Two Consecutive Reds):

o A red node cannot have a red parent or a red child (no two consecutive red nodes in
any path).

4. Black Height Property:

o For every node, every path from that node to its descendant null pointers (NIL
leaves) must have the same number of black nodes.

5. Leaf Property:

o All null leaf nodes (NIL pointers) are considered black.


Diagram of a Red-Black Tree

Here’s an example of a valid Red-Black Tree:

[10B]

/ \

[7R] [15B]

/ \ \

[5B] [8B] [20R]

Explanation of the diagram:

 The root node 10 is black, satisfying the root property.

 There are no two consecutive red nodes (e.g., 15B → 20R is valid).
 Every path from the root to any leaf (e.g., 10 → 7 → 5 or 10 → 15 → 20) contains the same
number of black nodes (2 black nodes in this case).
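The properties can be checked mechanically. Below is a hypothetical validator sketch (not from the notes): it verifies the root property, the red property, and the black-height property in one traversal, counting each null child as a black NIL leaf per the leaf property.

```java
// Checker for the Red-Black properties listed above.
public class RbCheck {
    static class Node {
        int data;
        boolean red;
        Node left, right;
        Node(int data, boolean red) { this.data = data; this.red = red; }
    }

    // Returns the black height of the subtree, or -1 if a property fails.
    static int blackHeight(Node n) {
        if (n == null) return 1;                 // NIL leaves count as black
        if (n.red && ((n.left != null && n.left.red)
                   || (n.right != null && n.right.red)))
            return -1;                           // two consecutive red nodes
        int lh = blackHeight(n.left);
        int rh = blackHeight(n.right);
        if (lh == -1 || rh == -1 || lh != rh)
            return -1;                           // unequal black heights
        return lh + (n.red ? 0 : 1);
    }

    static boolean isValid(Node root) {
        return (root == null || !root.red)       // root property
            && blackHeight(root) != -1;
    }

    public static void main(String[] args) {
        // The example tree from the diagram above.
        Node root = new Node(10, false);
        root.left = new Node(7, true);
        root.right = new Node(15, false);
        root.left.left = new Node(5, false);
        root.left.right = new Node(8, false);
        root.right.right = new Node(20, true);
        System.out.println(isValid(root));  // true
    }
}
```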

Advantages of Red-Black Trees

1. Self-Balancing:

o Red-Black trees maintain balance, ensuring the height is logarithmic. This guarantees O(log n) time complexity for operations like search, insert, and delete.

2. Efficient Updates:

o Insertions and deletions are efficient, requiring at most a few rotations to restore the
Red-Black properties.

3. Memory Efficiency:

o Unlike AVL trees, Red-Black trees do not require storing the height of each node,
making them slightly more memory-efficient.

4. Good Performance for Dynamic Sets:

o Red-Black trees are particularly effective for applications where the dataset changes
frequently (e.g., insertions and deletions).

5. Use in Real-World Systems:

o Red-Black trees are widely used in computer science, such as in the implementation
of the map and set data structures in the C++ STL, Java's TreeMap, and Linux kernel.

10.
B-Tree Construction (Order 4)

A B-tree of order 4 means:

1. Each node can have at most 3 keys (order - 1).

2. Each node can have at most 4 children (equal to the order).

3. The tree is balanced: all leaves are at the same level.

4. Keys in each node are sorted, and each child subtree contains values in the corresponding range.

Step 1: Insert the elements

We insert the elements in the given order:


1, 4, 7, 10, 17, 21, 31, 25, 19, 20, 28, 42

Insertion Steps:
Insert 1, 4, 7, 10:

1. Start with an empty tree.

2. Add 1, 4, 7, and 10 into the root node; the root fills up, and the first split occurs on the next insertion.

Tree: [1, 4, 7, 10]

Insert 17:

1. Add 17 to the root node. Now the root contains [1, 4, 7, 10, 17].

2. The root exceeds the maximum of 3 keys, so we split:

o Median key 7 moves up to create a new root.

o Two new child nodes are formed:

 Left child: [1, 4]

 Right child: [10, 17]

Tree:

[7]

/ \

[1, 4] [10, 17]

Insert 21:

1. Add 21 to the right child [10, 17].

2. No splits are required.

Tree:

[7]

/ \

[1, 4] [10, 17, 21]

Insert 31:

1. Add 31 to the right child [10, 17, 21].

2. The right child exceeds the maximum of 3 keys, so we split:

o Median key 17 moves up to the root.

o Two new child nodes are formed:

 Left child of the split: [10]

 Right child of the split: [21, 31]

Tree:

[7, 17]
/ | \

[1, 4] [10] [21, 31]

Insert 25:

1. Add 25 to the rightmost child [21, 31].

Tree:

[7, 17]

/ | \

[1, 4] [10] [21, 25, 31]

Insert 19:

1. Add 19 to the middle child [10].

2. The middle child becomes [10, 19].

Tree:

[7, 17]

/ | \

[1, 4] [10, 19] [21, 25, 31]

Insert 20:

1. Add 20 to the middle child [10, 19], making it [10, 19, 20].

2. The middle child is now full, so it is split:

o Median key 19 moves up to the root.

o Two new child nodes are formed:

 Left child of the split: [10]

 Right child of the split: [20]

Tree:

[7, 17, 19]

/ | | \

[1, 4] [10] [20] [21, 25, 31]

Insert 28:

1. Add 28 to the rightmost child [21, 25, 31].

2. The rightmost child exceeds the maximum of 3 keys, so we split:

o Median key 25 moves up to the root.


o Two new child nodes are formed:

 Left child of the split: [21]

 Right child of the split: [28, 31]

Tree:

[7, 17, 19, 25]

/ | | | \

[1, 4] [10] [20] [21] [28, 31]

Insert 42:

1. Add 42 to the rightmost child [28, 31].

Final B-Tree After Insertion:

[7, 17, 19, 25]

/ | | | \

[1, 4] [10] [20] [21] [28, 31, 42]

Step 2: Deletions (28, 31, 21, 25, 19)

Delete 28:

1. Remove 28 from the rightmost child [28, 31, 42].

[7, 17, 19, 25]

/ | | | \

[1, 4] [10] [20] [21] [31, 42]

Delete 31:

1. Remove 31 from the rightmost child [31, 42].

[7, 17, 19, 25]

/ | | | \

[1, 4] [10] [20] [21] [42]

Delete 21:

1. Remove 21 from the fourth child [21].

[7, 17, 19, 25]

/ | | | \
[1, 4] [10] [20] [] [42]

2. Merge the empty child [] with its sibling [20]:

[7, 17, 19, 25]

/ | | \

[1, 4] [10] [20, 42]

Delete 25:

1. Remove 25 from the root node [7, 17, 19, 25].

[7, 17, 19]

/ | | \

[1, 4] [10] [20, 42]

Delete 19:

1. Remove 19 from the root node [7, 17, 19].

[7, 17]

/ | \

[1, 4] [10] [20, 42]

Final B-Tree After Deletions:

[7, 17]

/ | \

[1, 4] [10] [20, 42]
