ADS MidSolution Feb25

The document discusses various data structures and their properties, focusing on dynamic arrays, priority queues, hash tables, and trees. It highlights the advantages of dynamic arrays over static arrays, the time complexity of operations in binary heaps, and compares array-based and linked list-based queues. Additionally, it explains hashing, skip lists, binary search trees, B-trees, and their respective operations, including insertion and deletion processes.

1.

a) Illustrate how dynamic arrays improve memory efficiency compared to traditional arrays. (8)

• Static Arrays: Fixed size, declared at creation, memory allocated contiguously.

• Dynamic Arrays: Resize automatically, allocate new memory when needed.

How Dynamic Arrays Improve Memory Efficiency

• Runtime Resizing: They start with an initial capacity and, when that capacity is reached, resize automatically to accommodate more elements; the resizing is handled internally and hidden from the user.

• Amortized Resizing: Capacity grows geometrically (usually doubling), so expensive copies happen rarely and appends cost O(1) amortized time.

• Avoids Memory Wastage or Shortage: There is no need to predefine the size, preventing excessive unused memory or running out of space.

• Contiguous Memory Allocation: Elements stay in one contiguous block, preserving the fast O(1) indexed access and traversal efficiency of static arrays.

• Efficient Data Copying: When resized, existing elements are copied in a single pass into the new, larger memory block.

• Supports Compact Storage: Data can be stored directly (e.g., the characters of a string), reducing memory overhead compared to referential storage.

• Memory Is Allocated as Needed: The resizing process creates a new array of a different size, so the memory in use tracks the actual number of elements (a minimal sketch follows this list).
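A minimal Python sketch of this behaviour (illustrative only; the names DynamicArray, append and _resize are chosen for this example):

class DynamicArray:
    def __init__(self):
        self._capacity = 1                      # initial capacity
        self._size = 0                          # number of elements actually stored
        self._data = [None] * self._capacity

    def append(self, value):
        if self._size == self._capacity:
            self._resize(2 * self._capacity)    # double when full -> amortized O(1) appends
        self._data[self._size] = value
        self._size += 1

    def _resize(self, new_capacity):
        new_data = [None] * new_capacity
        for i in range(self._size):             # copy existing elements to the new block
            new_data[i] = self._data[i]
        self._data = new_data
        self._capacity = new_capacity

    def __getitem__(self, index):
        if not 0 <= index < self._size:
            raise IndexError("index out of range")
        return self._data[index]

arr = DynamicArray()
for x in range(10):
    arr.append(x)       # capacity grows 1 -> 2 -> 4 -> 8 -> 16 only as needed
print(arr[9])           # 9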

b) What is the time complexity of inserting an element into a priority queue implemented
using a binary heap, and why? (2)

The time complexity of inserting an element into a priority queue implemented using a
binary heap is O(log n).
This is because insertion involves adding the element at the end to maintain the complete
binary tree property, followed by an up-heap bubbling process to restore the heap-order
property. Since a heap is a complete binary tree with height O(log n), the worst-case scenario
requires at most log n swaps, leading to an O(log n) insertion time.
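A minimal Python sketch of this insertion step for a min-heap stored in a list (illustrative; the names heap_insert and _sift_up are chosen for this example, not taken from any library):

def heap_insert(heap, value):
    heap.append(value)               # 1. add at the end to keep the tree complete
    _sift_up(heap, len(heap) - 1)    # 2. restore the heap-order property

def _sift_up(heap, i):
    # Bubble the new element up; at most O(log n) swaps, since the height is O(log n).
    while i > 0:
        parent = (i - 1) // 2
        if heap[i] < heap[parent]:
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent
        else:
            break

pq = []
for v in [5, 3, 8, 1]:
    heap_insert(pq, v)
print(pq[0])    # 1 -> the smallest element sits at the root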
2.

a) Suppose a dynamic array doubles its size when full. If its initial capacity is 1, how
many total elements can be inserted before the 5th resizing occurs? (2)

The total number of elements that can be inserted before the 5th resizing is 16. Starting from capacity 1, the capacities after successive doublings are 2, 4, 8, 16 and 32; the 5th resizing (16 → 32) is triggered only when the 17th element arrives, so up to 16 elements can be inserted before it occurs.

b) Evaluate the pros and cons of array-based vs linked list-based queue implementations.

Array-Based Queue
Pros:
Space Efficient: Uses contiguous memory, reducing pointer overhead.
Fast Access: O(1) time complexity for indexing elements.
Cache-Friendly: Continuous memory layout improves CPU cache performance.

Cons:
Fixed Size: Static arrays require a predefined size, leading to inefficiencies.
Resizing Overhead: Dynamic arrays incur costly resizing when capacity is exceeded.
Potential Wasted Space: Dynamic arrays may allocate extra unused memory.

Linked List-Based Queue

Pros:
Dynamic Sizing: Grows and shrinks as needed, ideal for varying queue sizes.
Efficient Insertions/Deletions: O(1) time complexity for enqueue and dequeue.

Cons:
Memory Overhead: Requires extra memory for node pointers.
Slower Access: No direct indexing; accessing elements takes O(n) time.
Higher Complexity: More difficult to implement and manage.

Conclusion
Use an array-based queue for fast access, space efficiency, and fixed-size scenarios.
Use a linked list-based queue for dynamic sizing and frequent insertions/deletions.
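A minimal Python sketch of a linked list-based queue (illustrative; class and method names are chosen for this example) showing the O(1) enqueue and dequeue operations:

class _Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedQueue:
    def __init__(self):
        self.front = None      # dequeue from the front
        self.rear = None       # enqueue at the rear
        self.size = 0

    def enqueue(self, value):
        node = _Node(value)
        if self.rear is None:              # empty queue
            self.front = self.rear = node
        else:
            self.rear.next = node
            self.rear = node
        self.size += 1

    def dequeue(self):
        if self.front is None:
            raise IndexError("dequeue from empty queue")
        value = self.front.value
        self.front = self.front.next
        if self.front is None:
            self.rear = None
        self.size -= 1
        return value

q = LinkedQueue()
q.enqueue(1); q.enqueue(2); q.enqueue(3)
print(q.dequeue(), q.dequeue())    # 1 2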
3.

a) State two differences between rehashing and extendible hashing. (2)

Rehashing and extendible hashing are both dynamic methods used to manage data in hash
tables, but they differ in the following ways:
Resizing Method:
• Rehashing: Resizes the entire hash table when the load factor is too high.
• Extendible Hashing: Dynamically adjusts bucket sizes instead of resizing the whole table.

Structure & Efficiency:
• Rehashing: Uses a new hash function to redistribute all keys, which is costly.
• Extendible Hashing: Uses a directory of buckets, allowing a lookup in at most two disk accesses (one for the directory and one for the bucket).
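A minimal Python sketch of rehashing (illustrative only; it assumes a simple separate-chaining table of integer keys, and the names ChainedHashTable, insert and _rehash are invented for this example):

class ChainedHashTable:
    def __init__(self, capacity=4, max_load=0.75):
        self.capacity = capacity
        self.max_load = max_load
        self.size = 0
        self.buckets = [[] for _ in range(capacity)]

    def insert(self, key):
        if (self.size + 1) / self.capacity > self.max_load:
            self._rehash(2 * self.capacity)          # load factor too high -> rebuild
        self.buckets[key % self.capacity].append(key)
        self.size += 1

    def _rehash(self, new_capacity):
        old_keys = [k for bucket in self.buckets for k in bucket]
        self.capacity = new_capacity
        self.buckets = [[] for _ in range(new_capacity)]
        for k in old_keys:                            # every key is redistributed: O(n) work
            self.buckets[k % new_capacity].append(k)

table = ChainedHashTable()
for k in range(10):
    table.insert(k)
print(table.capacity)    # grew beyond the initial 4 through rehashing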

b) Define Collision. How is the problem of primary clustering avoided in the quadratic probing collision resolution technique? Explain with an example. (8)

• Collision:
When the hash function generates the same index for multiple keys, there is a conflict over which value should be stored at that index. This is called a hash collision.

• Primary clustering happens when linear probing places colliding elements in consecutive slots, forming large clusters that lengthen later probe sequences.
• Quadratic probing avoids this by using a non-linear probing sequence: instead of checking the next available slot (as in linear probing), it probes (h(k) + 1²) mod m, (h(k) + 2²) mod m, (h(k) + 3²) mod m, ..., so colliding keys are scattered rather than packed together.
• Example – Take a table of size 7 with h(k) = k mod 7 and insert the keys 10, 17 and 24, which all hash to index 3. Key 10 goes to index 3; key 17 finds 3 occupied and is placed at (3 + 1²) mod 7 = 4; key 24 finds 3 and 4 occupied and is placed at (3 + 2²) mod 7 = 0. Linear probing would have filled the consecutive slots 3, 4 and 5 (a primary cluster), whereas quadratic probing spreads the keys across slots 3, 4 and 0. A code sketch follows.
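A minimal Python sketch of the probing step used in the example above (illustrative; the table size 7 and h(k) = k mod 7 are assumptions of this example, and deletion/resizing are not handled):

TABLE_SIZE = 7

def qp_insert(table, key):
    home = key % TABLE_SIZE
    for i in range(TABLE_SIZE):
        index = (home + i * i) % TABLE_SIZE    # probe h(k), h(k)+1^2, h(k)+2^2, ...
        if table[index] is None:
            table[index] = key
            return index
    raise RuntimeError("probe sequence exhausted")

table = [None] * TABLE_SIZE
for key in (10, 17, 24):                       # all three keys hash to index 3
    print(key, "->", qp_insert(table, key))    # 10 -> 3, 17 -> 4, 24 -> 0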
4.

a) What property of hash functions minimizes collisions? (2)

Property of Hash Functions That Minimizes Collisions:

• Uniform Distribution – Spreads keys evenly across the hash table.
• Minimizes Clustering – Avoids keys grouping in specific areas.

b) Explain how hashing and skip lists are used for efficient data storage and retrieval.
Compare their working principles, advantages and use cases. (8)

Hashing:
Working Principle:
• Hashing uses a hash function to map each key to an index in a hash table, where the associated record is stored.
• Direct access to data via the computed index gives constant time complexity (O(1)) for searches, insertions, and deletions, on average.

Advantages:
• Fast lookups: Provides constant average-time search.
• Simple structure: Easy to implement with fixed-size tables.
• Efficient storage: Stores data based on the computed hash value.

Use Cases:
• Ideal for quick lookups, such as in databases (e.g., indexing), caching systems, and hash maps.
• Used where data doesn't need to be sorted and quick insertion/removal is required.

Skip Lists:
Working Principle:
• A skip list is a sorted linked list augmented with multiple levels; each higher level skips over several elements to reduce search time (see the sketch after this list).
• The expected search time is O(log n), because each level lets the search skip over a fraction of the elements.

Advantages:
• Sorted order: Data remains sorted, making range queries efficient.
• Efficient insertion/deletion: Expected logarithmic time complexity for insertion and deletion.
• Simple structure: Easier to implement than balanced trees.

Use Cases:
• Ideal for sorted data where range queries or sorted traversal is required, such as in priority queues or sorted data storage.
• Used in databases for efficient searching and maintaining order.
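A minimal Python skip-list sketch (illustrative; the node layout, the MAX_LEVEL bound and the promotion probability P = 0.5 are assumptions of this example):

import random

class SkipNode:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)    # forward[i] = next node at level i

class SkipList:
    MAX_LEVEL = 4
    P = 0.5

    def __init__(self):
        self.header = SkipNode(None, self.MAX_LEVEL)
        self.level = 0                         # highest level currently in use

    def _random_level(self):
        lvl = 0
        while random.random() < self.P and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def search(self, key):
        node = self.header
        for i in range(self.level, -1, -1):    # start high, drop down when the next key overshoots
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key

    def insert(self, key):
        update = [self.header] * (self.MAX_LEVEL + 1)
        node = self.header
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node                   # last node visited at level i
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new_node = SkipNode(key, lvl)
        for i in range(lvl + 1):               # splice the new node into each of its levels
            new_node.forward[i] = update[i].forward[i]
            update[i].forward[i] = new_node

sl = SkipList()
for key in [3, 6, 7, 9, 12, 19]:
    sl.insert(key)
print(sl.search(9), sl.search(10))    # True False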
5.
a) Construct a Red-black tree by inserting the following sequence of values:
10,20,15,3,2,16,21,25,30,40. Explain the step-by-step insertion process for each
element. (8)
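One possible step-by-step trace, following the standard procedure (insert as a red node, then recolour when the uncle is red and rotate when the uncle is black); B = black, R = red:

1. Insert 10 – first node, coloured black as the root. Tree: 10(B).
2. Insert 20 – red right child of 10; no violation.
3. Insert 15 – red left child of 20; red parent with a black (nil) uncle, right-left case → right rotation at 20, left rotation at 10, recolour. Tree: 15(B) with children 10(R) and 20(R).
4. Insert 3 – red left child of 10; red parent 10 with red uncle 20 → recolour 10 and 20 black (the root 15 stays black).
5. Insert 2 – red left child of 3; red parent with a black (nil) uncle, left-left case → right rotation at 10, recolour. Subtree: 3(B) with children 2(R) and 10(R).
6. Insert 16 – red left child of 20; parent is black, no violation.
7. Insert 21 – red right child of 20; parent is black, no violation.
8. Insert 25 – red right child of 21; red parent 21 with red uncle 16 → recolour 16 and 21 black, 20 red.
9. Insert 30 – red right child of 25; red parent with a black (nil) uncle, right-right case → left rotation at 21, recolour. Subtree: 25(B) with children 21(R) and 30(R).
10. Insert 40 – red right child of 30; red parent 30 with red uncle 21 → recolour 21 and 30 black, 25 red. This makes 25(R) the child of 20(R), whose uncle 3 is black, right-right case → left rotation at the root 15, recolour 20 black and 15 red.

Final tree: root 20(B); left child 15(R) with children 3(B) [children 2(R), 10(R)] and 16(B); right child 25(R) with children 21(B) and 30(B) [right child 40(R)].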

c) Name the operations that cause splaying in a splay tree. (2)

The operations that cause splaying in a splay tree are:

Search (Find) – Splays the accessed node to the root.
Insertion – Inserts the node and then splays it to the root.
Deletion – Replaces the deleted node and then splays its parent to the root.

Each operation ensures recently accessed nodes stay near the root for faster future access.
6.

a) Define a binary search tree (BST) and mention its key properties.

Definition of a Binary Search Tree (BST):

A Binary Search Tree (BST) is a hierarchical data structure in which each node has at most two
children, and the left subtree contains values smaller than the parent node, while the right subtree
contains values greater than the parent node.

Key Properties of a BST:

1. Binary Structure – Each node has at most two children: left and right.

2. Ordering Property – Left child < Parent < Right child.

3. In-order Traversal – Produces a sorted sequence of elements.

4. Efficient Search, Insertion, and Deletion – Average time complexity: O(log n) (worst case:
O(n) in a skewed tree).
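A minimal Python BST sketch (illustrative; node and function names are chosen for this example) showing insertion, search, and the sorted in-order traversal:

class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)      # smaller keys go to the left subtree
    elif key > root.key:
        root.right = insert(root.right, key)    # larger keys go to the right subtree
    return root                                 # duplicates are ignored in this sketch

def search(root, key):
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

def inorder(root, out):
    if root is not None:
        inorder(root.left, out)
        out.append(root.key)
        inorder(root.right, out)

root = None
for k in [50, 30, 70, 20, 40, 60, 80]:
    root = insert(root, k)
keys = []
inorder(root, keys)
print(keys)              # [20, 30, 40, 50, 60, 70, 80] -> sorted order
print(search(root, 60))  # True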

b) Define B-Tree and its properties. Explain the deletion operation in a B-Tree step by step and
illustrate the process with an example.

A B-Tree is a self-balancing search tree used for efficiently storing and retrieving large amounts of
data, often in databases and file systems.

Properties of a B-Tree (of order M, where M is the maximum number of children per node):

• All the leaf nodes in a B-tree are at the same level.
• Every internal node other than the root must have at least ⌈M/2⌉ children and at most M children.
• If the root node is a non-leaf node, then it must have at least two children.
• All nodes except the root must have at least ⌈M/2⌉ − 1 keys, and every node has at most M − 1 keys.

Deletion operation in a B-tree:

Deletion of a key in a B-tree falls into two broad situations:

• Deletion of a key from a leaf node
• Deletion of a key from an internal node

Case 1 − If the key to be deleted is in a leaf node and the deletion does not violate the minimum-key property, simply remove the key from that leaf.

Case 2 − If the key to be deleted is in a leaf node but the deletion violates the minimum-key property, borrow a key from the left or right sibling through the parent: the parent's separating key moves down into the leaf and the sibling's nearest key moves up into the parent. If both siblings have exactly the minimum number of keys, merge the leaf with one of them, pulling the separating key down from the parent.

Case 3 − If the key to be deleted is in an internal node, replace it with its in-order predecessor (from the left child) or in-order successor (from the right child), choosing whichever child has more than the minimum number of keys, and then delete that key from the leaf it came from. If both children have only the minimum number of keys, merge them around the key being deleted.

Case 4 − If a deletion leaves a node below the minimum and borrowing is not possible because its siblings also hold only the minimum number of keys, the node is merged with a sibling and the parent's separating key is pulled down; the parent may then underflow in turn, so the fix propagates upward and can reduce the height of the B-tree.
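A small illustration, assuming a B-tree of order M = 5 (so every non-root node must keep at least ⌈5/2⌉ − 1 = 2 keys): let the root be [20, 40] with leaf children [5, 10, 15], [25, 30, 35] and [45, 50, 55].

• Delete 30 (Case 1): the leaf [25, 30, 35] still has two keys after the removal, so the key is simply deleted, leaving [25, 35].
• Delete 35 (Case 2): the leaf would drop to a single key [25], violating the minimum. Its right sibling [45, 50, 55] has a spare key, so the parent's separator 40 moves down into the leaf and the sibling's smallest key 45 moves up into the parent. The tree becomes root [20, 45] with children [5, 10, 15], [25, 40] and [50, 55], and every node again satisfies the minimum-key property.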
