
Data Structures and Algorithms - Unit 3 & 4 Solutions

1. What is a linked list? How does it differ from an array?


A linked list is a linear data structure consisting of a sequence of elements called nodes, where each node
contains:

1. Data (the value stored)


2. A reference (link) to the next node in the sequence

Key differences between linked lists and arrays:


| Characteristic | Linked List | Array |
| --- | --- | --- |
| Memory allocation | Dynamic and scattered (non-contiguous) | Static and contiguous |
| Size flexibility | Can grow or shrink dynamically during execution | Fixed size in many languages (except dynamic arrays) |
| Element access | O(n) time complexity (sequential access) | O(1) time complexity (random access) |
| Insertion/deletion | O(1) time complexity (once the position is found) | O(n) time complexity (elements may need shifting) |
| Memory overhead | Extra memory for storing references/pointers | No overhead for references |
| Memory efficiency | Better when the size is unknown or changes frequently | Better when the size is known and fixed |
| Cache locality | Poor (elements scattered in memory) | Excellent (elements adjacent in memory) |
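To make the element-access row concrete, here is a small sketch (an assumed example, not part of the original answer): a Python list reaches any index in one step, while a linked list must follow links one node at a time.

python

class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

# Array-style access: one step regardless of the index
arr = [10, 20, 30, 40]
print(arr[2])             # O(1) -> 30

# Linked-list access: walk the links from the head
head = Node(10)
head.next = Node(20)
head.next.next = Node(30)
head.next.next.next = Node(40)

node = head
for _ in range(2):        # O(n): follow two links to reach index 2
    node = node.next
print(node.data)          # 30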
 

2. Write a Python program to represent a graph using an adjacency list and an adjacency matrix.

Adjacency List Representation


python

class Graph:
    def __init__(self, vertices):
        self.vertices = vertices
        self.graph = [[] for _ in range(vertices)]

    def add_edge(self, u, v):
        # Add edge from u to v
        self.graph[u].append(v)
        # For an undirected graph, add the edge from v to u as well
        self.graph[v].append(u)

    def print_adjacency_list(self):
        for i in range(self.vertices):
            print(f"Adjacency list of vertex {i}:", end=" ")
            for neighbor in self.graph[i]:
                print(f"{neighbor}", end=" ")
            print()

Adjacency Matrix Representation

python

class GraphMatrix:
    def __init__(self, vertices):
        self.vertices = vertices
        # Initialize the adjacency matrix with zeros
        self.graph = [[0 for _ in range(vertices)] for _ in range(vertices)]

    def add_edge(self, u, v):
        # Set edge from u to v
        self.graph[u][v] = 1
        # For an undirected graph, set the edge from v to u as well
        self.graph[v][u] = 1

    def print_adjacency_matrix(self):
        print("Adjacency Matrix:")
        for i in range(self.vertices):
            for j in range(self.vertices):
                print(f"{self.graph[i][j]}", end=" ")
            print()

Example Usage
python

# Example for adjacency list


g = Graph(5) # Create a graph with 5 vertices
g.add_edge(0, 1)
g.add_edge(0, 4)
g.add_edge(1, 2)
g.add_edge(1, 3)
g.add_edge(1, 4)
g.add_edge(2, 3)
g.add_edge(3, 4)
g.print_adjacency_list()

# Example for adjacency matrix


g_matrix = GraphMatrix(5)
g_matrix.add_edge(0, 1)
g_matrix.add_edge(0, 4)
g_matrix.add_edge(1, 2)
g_matrix.add_edge(1, 3)
g_matrix.add_edge(1, 4)
g_matrix.add_edge(2, 3)
g_matrix.add_edge(3, 4)
g_matrix.print_adjacency_matrix()

3. Explain types of queues (Simple Queue, Circular Queue, Priority Queue).

Simple Queue (FIFO Queue)


A simple queue follows the First-In-First-Out (FIFO) principle

Elements are inserted at the rear and removed from the front

Basic operations:
Enqueue: Add element to the rear

Dequeue: Remove element from the front

Applications:
Print job scheduling

Task scheduling in operating systems

Resource sharing among processes

Circular Queue
A circular queue is a variation of a simple queue where the last position is connected to the first

It overcomes the problem of unutilized space in a simple queue implementation using arrays

When the queue is full and elements are dequeued, the freed positions can be reused
Improves memory utilization
Applications:
Traffic signal control
Memory management

CPU scheduling

Priority Queue
In a priority queue, elements are dequeued based on their priority rather than the order of insertion
Higher priority elements are processed before lower priority ones

Can be implemented using various data structures such as arrays, linked lists, or heaps (binary heaps provide efficient O(log n) insertion and removal); a heapq-based sketch appears after this list

Basic operations:
Insert with priority

Delete highest/lowest priority element

Applications:
Dijkstra's algorithm

Huffman coding

Process scheduling in operating systems

Event-driven simulation
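As a small illustration of the priority queue described above, here is a minimal sketch using Python's heapq module (an assumed example; the original answer contains no code). heapq maintains a min-heap, so the smallest priority number is served first.

python

import heapq

# Min-heap priority queue: a lower number means a higher priority
tasks = []
heapq.heappush(tasks, (2, "write report"))
heapq.heappush(tasks, (1, "handle interrupt"))
heapq.heappush(tasks, (3, "read email"))

while tasks:
    priority, task = heapq.heappop(tasks)
    print(priority, task)   # served in priority order: 1, 2, 3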

4. Implement BFS traversal on a graph using Python. Print the order in which
nodes are visited.
python
from collections import defaultdict, deque

class Graph:
    def __init__(self):
        # Default dictionary to store the graph
        self.graph = defaultdict(list)

    # Function to add an edge to the graph
    def add_edge(self, u, v):
        self.graph[u].append(v)

    # Function to perform BFS traversal
    def bfs(self, start_node):
        # Create a set for visited nodes
        visited = set()

        # Create a queue for BFS
        queue = deque()

        # Add the start node to the queue and mark it as visited
        queue.append(start_node)
        visited.add(start_node)

        # List to store the traversal order
        traversal_order = []

        while queue:
            # Dequeue a vertex from the queue
            current_node = queue.popleft()
            traversal_order.append(current_node)

            # For each adjacent vertex of the dequeued vertex that has not
            # been visited yet, mark it as visited and enqueue it
            for neighbor in self.graph[current_node]:
                if neighbor not in visited:
                    queue.append(neighbor)
                    visited.add(neighbor)

        return traversal_order

# Example usage
g = Graph()
g.add_edge(0, 1)
g.add_edge(0, 2)
g.add_edge(1, 2)
g.add_edge(2, 0)
g.add_edge(2, 3)
g.add_edge(3, 3)

print("BFS Traversal starting from vertex 2:")


traversal = g.bfs(2)
print(" -> ".join(map(str, traversal)))
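For the edges above, this prints 2 -> 0 -> 3 -> 1: vertex 2 is visited first, then its neighbors 0 and 3 in insertion order, and finally 1 (reached through 0).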

5. Write a Python program to insert a node at the beginning of a singly linked list.
python

class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def insert_at_beginning(self, data):
        # Create a new node with the given data
        new_node = Node(data)

        # Make the new node point to the current head
        new_node.next = self.head

        # Update the head to point to the new node
        self.head = new_node

    def print_list(self):
        if self.head is None:
            print("Linked list is empty")
            return

        current = self.head
        while current:
            print(current.data, end=" -> ")
            current = current.next
        print("None")

# Example usage
# Initialize an empty linked list
linked_list = SinglyLinkedList()

# Insert nodes at the beginning


linked_list.insert_at_beginning(30)
linked_list.insert_at_beginning(20)
linked_list.insert_at_beginning(10)

# Print the linked list


print("Linked list after insertions:")
linked_list.print_list()
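Because every insertion happens at the beginning, the last value inserted ends up at the head, so the output is: 10 -> 20 -> 30 -> None.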
6. Implement DFS traversal on a graph using Python. Show both recursive and
iterative implementations. Detect Cycle in an Undirected Graph.
python
from collections import defaultdict

class Graph:
    def __init__(self):
        self.graph = defaultdict(list)

    def add_edge(self, u, v):
        self.graph[u].append(v)
        self.graph[v].append(u)  # For an undirected graph

    # Recursive DFS
    def dfs_recursive(self, start_node):
        # Track visited nodes
        visited = set()
        # List to store the traversal order
        traversal_order = []

        def dfs_util(node):
            # Mark the current node as visited
            visited.add(node)
            traversal_order.append(node)

            # Recur for all adjacent vertices
            for neighbor in self.graph[node]:
                if neighbor not in visited:
                    dfs_util(neighbor)

        # Call the recursive helper function
        dfs_util(start_node)
        return traversal_order

    # Iterative DFS
    def dfs_iterative(self, start_node):
        # Track visited nodes
        visited = set()
        # Stack for DFS
        stack = [start_node]
        # List to store the traversal order
        traversal_order = []

        while stack:
            # Pop a vertex from the stack
            current = stack.pop()

            # If the current node has not been visited yet
            if current not in visited:
                visited.add(current)
                traversal_order.append(current)

                # Push unvisited neighbors in reverse order so that DFS
                # visits them in the same order as the recursive DFS
                for neighbor in reversed(self.graph[current]):
                    if neighbor not in visited:
                        stack.append(neighbor)

        return traversal_order

    # Function to detect a cycle in an undirected graph using DFS
    def has_cycle(self):
        # Track visited nodes
        visited = set()

        def dfs_cycle_util(node, parent):
            # Mark the current node as visited
            visited.add(node)

            # Recur for all adjacent vertices
            for neighbor in self.graph[node]:
                # If the neighbor has not been visited yet, recur for it
                if neighbor not in visited:
                    if dfs_cycle_util(neighbor, node):
                        return True
                # If an adjacent vertex is visited and is not the parent of
                # the current vertex, then there is a cycle
                elif neighbor != parent:
                    return True

            return False

        # Check for a cycle in each component of the graph
        for node in list(self.graph):
            if node not in visited:
                if dfs_cycle_util(node, -1):
                    return True

        return False
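A short usage sketch (an assumed example, not part of the original answer):

python

# Example usage
g = Graph()
g.add_edge(0, 1)
g.add_edge(0, 2)
g.add_edge(1, 3)
g.add_edge(2, 4)

print("Recursive DFS from 0:", g.dfs_recursive(0))   # [0, 1, 3, 2, 4]
print("Iterative DFS from 0:", g.dfs_iterative(0))   # [0, 1, 3, 2, 4]
print("Cycle present:", g.has_cycle())               # False (the graph is a tree)

g.add_edge(3, 4)                                      # this extra edge closes a cycle
print("Cycle present:", g.has_cycle())                # True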

7. How is memory dynamically allocated in a linked list?


Memory allocation in linked lists is dynamic and occurs as follows:

1. Node Creation:
When a new node is created, memory is allocated for it dynamically at runtime
In low-level languages like C, functions like malloc() or calloc() are used to allocate memory
from the heap
In higher-level languages like Python or Java, memory allocation is handled by the garbage
collector

2. Non-contiguous Allocation:
Unlike arrays where memory is allocated in one contiguous block, linked list nodes can be
scattered throughout memory
Each node allocation is independent of others, allowing for efficient memory usage

3. Dynamic Growth and Shrinking:


The list can grow as needed by allocating memory for new nodes
Memory can be freed when nodes are removed, allowing the list to shrink

4. Memory Structure:
Each node typically contains:
Data section: Stores the actual value
Reference section: Stores the address of the next node (and previous node in doubly linked
lists)

5. Efficient Memory Utilization:


Only the required amount of memory is allocated

No pre-allocation is needed, which is efficient when the size is unknown


No memory wastage like in arrays when allocated space is larger than needed

6. Memory Management:
In languages without garbage collection, memory must be explicitly freed when nodes are
deleted

Memory leaks can occur if nodes are not properly deallocated

This dynamic memory allocation is what gives linked lists their flexibility compared to static data
structures like arrays.
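A minimal Python sketch of points 1 and 2 above (an assumed example, not part of the original answer): each Node object is allocated on the heap at the moment it is created, and nodes are tied together by references rather than by adjacent memory locations.

python

class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

# Each call to Node() allocates a new object on the heap at runtime
a = Node(1)
b = Node(2)
a.next = b            # the link is a reference, not an adjacent memory slot

# The two nodes live at independent addresses (not necessarily contiguous)
print(id(a), id(b))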

8. For an unweighted graph, find the shortest path between two nodes using
BFS. Binary Tree Construction and Traversals.
python
from collections import defaultdict, deque

# Part 1: Shortest path in an unweighted graph using BFS
class Graph:
    def __init__(self):
        self.graph = defaultdict(list)

    def add_edge(self, u, v):
        self.graph[u].append(v)
        self.graph[v].append(u)  # For an undirected graph

    def shortest_path(self, start, end):
        # Base case: start and end are the same node
        if start == end:
            return [start]

        # Track visited nodes; each queue entry is (node, path_so_far)
        visited = {start}
        queue = deque([(start, [start])])

        while queue:
            current, path = queue.popleft()

            # Check all neighbors
            for neighbor in self.graph[current]:
                if neighbor == end:
                    # Found the target node, return the path
                    return path + [neighbor]

                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append((neighbor, path + [neighbor]))

        # No path found
        return None

# Part 2: Binary tree construction and traversals
class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

class BinaryTree:
    def __init__(self):
        self.root = None

    # Level-order insertion: fill the tree top to bottom, left to right
    def insert(self, key):
        if not self.root:
            self.root = TreeNode(key)
            return

        queue = deque([self.root])
        while queue:
            node = queue.popleft()

            if not node.left:
                node.left = TreeNode(key)
                return
            else:
                queue.append(node.left)

            if not node.right:
                node.right = TreeNode(key)
                return
            else:
                queue.append(node.right)

    # Recursive traversals
    def inorder_recursive(self, node, result=None):
        if result is None:
            result = []

        if node:
            self.inorder_recursive(node.left, result)
            result.append(node.key)
            self.inorder_recursive(node.right, result)

        return result

    def preorder_recursive(self, node, result=None):
        if result is None:
            result = []

        if node:
            result.append(node.key)
            self.preorder_recursive(node.left, result)
            self.preorder_recursive(node.right, result)

        return result

    def postorder_recursive(self, node, result=None):
        if result is None:
            result = []

        if node:
            self.postorder_recursive(node.left, result)
            self.postorder_recursive(node.right, result)
            result.append(node.key)

        return result
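A short usage sketch for both parts (an assumed example, not part of the original answer):

python

# Example usage
g = Graph()
g.add_edge(0, 1)
g.add_edge(1, 2)
g.add_edge(0, 3)
g.add_edge(3, 4)
g.add_edge(4, 2)
print("Shortest path 0 -> 2:", g.shortest_path(0, 2))     # [0, 1, 2]

tree = BinaryTree()
for key in [1, 2, 3, 4, 5, 6, 7]:
    tree.insert(key)
print("Inorder:  ", tree.inorder_recursive(tree.root))    # [4, 2, 5, 1, 6, 3, 7]
print("Preorder: ", tree.preorder_recursive(tree.root))   # [1, 2, 4, 5, 3, 6, 7]
print("Postorder:", tree.postorder_recursive(tree.root))  # [4, 5, 2, 6, 7, 3, 1]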

9. Compare singly, doubly and circular linked lists with examples.

Singly Linked List


A singly linked list is a linear data structure where each node points to the next node in the sequence.

Structure:

Node: [Data | Next]


List: [Head] -> [Data1 | Next] -> [Data2 | Next] -> ... -> [DataN | NULL]

Characteristics:

Each node contains data and a reference to the next node


Traversal is possible in one direction only (forward)

Takes less memory (one pointer per node)

Less complex implementation

Example operations:

python

# Insert at the beginning
def insert_beginning(self, data):
    new_node = Node(data)
    new_node.next = self.head
    self.head = new_node

Doubly Linked List


A doubly linked list is a linked list where each node has two references: one to the next node and one to
the previous node.

Structure:
Node: [Prev | Data | Next]
List: [Head] <-> [Prev | Data1 | Next] <-> [Prev | Data2 | Next] <-> ... <-> [Prev | DataN | NULL]

Characteristics:

Each node contains data, a reference to the next node, and a reference to the previous node

Traversal is possible in both directions (forward and backward)


Takes more memory (two pointers per node)

More complex implementation but allows more operations

Example operations:

python

# Insert after a specific node
def insert_after(self, prev_node, data):
    if prev_node is None:
        return

    new_node = Node(data)
    new_node.next = prev_node.next
    new_node.prev = prev_node

    if prev_node.next:
        prev_node.next.prev = new_node

    prev_node.next = new_node

Circular Linked List


A circular linked list can be either singly or doubly linked, but the last node points back to the first node
instead of NULL.

Structure (Singly Circular):

Node: [Data | Next]
List: [Head] -> [Data1 | Next] -> [Data2 | Next] -> ... -> [DataN | Next]
                     ^                                               |
                     +-----------------------------------------------+
(the last node's Next points back to the first node)

Structure (Doubly Circular):

Node: [Prev | Data | Next]
List: [Head] <-> [Prev | Data1 | Next] <-> ... <-> [Prev | DataN | Next]
                        ^                                          |
                        +<---------------------------------------->+
(the first node's Prev and the last node's Next link the two ends together)

Characteristics:

The last node points back to the first node (no NULL pointers)
Allows continuous traversal around the list
Efficient for operations that need to cycle through all elements

Useful in applications like round-robin scheduling

Example operations:

python

# Insert at the end of a circular linked list
def insert_end(self, data):
    new_node = Node(data)

    if self.head is None:
        self.head = new_node
        new_node.next = self.head
        return

    current = self.head
    while current.next != self.head:
        current = current.next

    current.next = new_node
    new_node.next = self.head

Comparison

| Feature | Singly Linked List | Doubly Linked List | Circular Linked List |
| --- | --- | --- | --- |
| Memory usage | Less memory (one pointer per node) | More memory (two pointers per node) | Depends on implementation (singly or doubly) |
| Traversal | Forward only | Both forward and backward | Continuous (no end) |
| Insertion at beginning | O(1) | O(1) | O(1) for singly circular with a tail pointer, O(n) without |
| Deletion of a node | Requires the previous node (O(n) if not known) | O(1) with a direct reference | Depends on implementation |
| Reverse traversal | Not possible without modification | Directly supported | Requires full traversal in singly circular |
| Applications | Simple applications with memory constraints | Applications requiring bidirectional traversal | Round-robin scheduling, circular buffers |

10. Implement a BST with insertion, deletion, and search operations.


python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

class BinarySearchTree:
    def __init__(self):
        self.root = None

    # Insert operation
    def insert(self, key):
        self.root = self._insert_recursive(self.root, key)

    def _insert_recursive(self, root, key):
        # If the (sub)tree is empty, create a new node
        if root is None:
            return Node(key)

        # Otherwise, recur down the tree
        if key < root.key:
            root.left = self._insert_recursive(root.left, key)
        elif key > root.key:
            root.right = self._insert_recursive(root.right, key)

        # Return the unchanged node pointer
        return root

    # Search operation
    def search(self, key):
        return self._search_recursive(self.root, key)

    def _search_recursive(self, root, key):
        # Base case: root is None or the key is at the root
        if root is None or root.key == key:
            return root

        # Key is greater than the root's key
        if key > root.key:
            return self._search_recursive(root.right, key)

        # Key is smaller than the root's key
        return self._search_recursive(root.left, key)

    # Inorder traversal
    def inorder(self):
        result = []
        self._inorder_recursive(self.root, result)
        return result

    def _inorder_recursive(self, root, result):
        if root:
            self._inorder_recursive(root.left, result)
            result.append(root.key)
            self._inorder_recursive(root.right, result)

    # Find the minimum value node in a subtree
    def _min_value_node(self, node):
        current = node

        # The leftmost node holds the minimum key
        while current.left is not None:
            current = current.left

        return current

    # Delete operation
    def delete(self, key):
        self.root = self._delete_recursive(self.root, key)

    def _delete_recursive(self, root, key):
        # Base case
        if root is None:
            return root

        # If the key to be deleted is smaller than the root's key,
        # it lies in the left subtree
        if key < root.key:
            root.left = self._delete_recursive(root.left, key)

        # If the key to be deleted is greater than the root's key,
        # it lies in the right subtree
        elif key > root.key:
            root.right = self._delete_recursive(root.right, key)

        # If the key equals the root's key, this is the node to be deleted
        else:
            # Node with only one child or no child
            if root.left is None:
                return root.right
            elif root.right is None:
                return root.left

            # Node with two children: get the inorder successor
            # (the smallest key in the right subtree)
            temp = self._min_value_node(root.right)

            # Copy the inorder successor's key to this node
            root.key = temp.key

            # Delete the inorder successor
            root.right = self._delete_recursive(root.right, temp.key)

        return root
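A short usage sketch (an assumed example, not part of the original answer):

python

# Example usage
bst = BinarySearchTree()
for key in [50, 30, 70, 20, 40, 60, 80]:
    bst.insert(key)

print("Inorder:", bst.inorder())                    # [20, 30, 40, 50, 60, 70, 80]
print("Found 40:", bst.search(40) is not None)      # True
print("Found 45:", bst.search(45) is not None)      # False

bst.delete(50)                                      # delete a node with two children
print("Inorder after deleting 50:", bst.inorder())  # [20, 30, 40, 60, 70, 80]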

11. What is a stack? Explain its operations (push, pop, peek) with examples.
A stack is a linear data structure that follows the Last-In-First-Out (LIFO) principle, meaning that the last
element inserted is the first one to be removed.

Basic Stack Operations


1. Push: Adds an element to the top of the stack

Initial stack: [10, 20]


After push(30): [10, 20, 30]

2. Pop: Removes and returns the topmost element from the stack

Initial stack: [10, 20, 30]


Operation: pop()
Result: 30
Stack after operation: [10, 20]

3. Peek/Top: Returns the topmost element without removing it

Stack: [10, 20, 30]


Operation: peek()
Result: 30
Stack after operation: [10, 20, 30] (unchanged)

Additional Stack Operations


4. isEmpty: Checks if the stack is empty
Stack: []
Operation: isEmpty()
Result: True

Stack: [10, 20]


Operation: isEmpty()
Result: False

5. Size: Returns the number of elements in the stack

Stack: [10, 20, 30]


Operation: size()
Result: 3

Real-world Applications of Stacks


1. Function Call Stack: Programming languages use a stack to keep track of function calls and returns
2. Expression Evaluation: Used to evaluate mathematical expressions (e.g., infix to postfix conversion)

3. Undo Functionality: Applications like text editors use stacks to implement undo features

4. Backtracking Algorithms: Used in maze problems, puzzle solutions, etc.

5. Syntax Parsing: Compilers use stacks for parsing programming language syntax
6. Browser History: The back button in web browsers uses a stack to track visited pages

Implementation Approaches
Stacks can be implemented using:

1. Arrays (fixed size, but simple)


2. Dynamic arrays (resizable)

3. Linked lists (dynamic size)

Each approach has different trade-offs in terms of simplicity, memory usage, and performance.
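A minimal list-backed stack sketch with the operations described above (an assumed example; the original answer contains no code):

python

class Stack:
    def __init__(self):
        self.items = []           # the top of the stack is the end of the list

    def push(self, item):
        self.items.append(item)   # add to the top

    def pop(self):
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self.items.pop()   # remove and return the top element

    def peek(self):
        if self.is_empty():
            raise IndexError("peek from empty stack")
        return self.items[-1]     # return the top element without removing it

    def is_empty(self):
        return len(self.items) == 0

    def size(self):
        return len(self.items)

# Example usage
s = Stack()
s.push(10)
s.push(20)
s.push(30)
print(s.peek())      # 30
print(s.pop())       # 30
print(s.size())      # 2
print(s.is_empty())  # False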

12. Create an AVL tree in Python and implement binary tree traversals.
python
class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1  # Height of the node (used for AVL balancing)

class AVLTree:
    # Get the height of a node
    def height(self, node):
        if not node:
            return 0
        return node.height

    # Get the balance factor of a node
    def get_balance(self, node):
        if not node:
            return 0
        return self.height(node.left) - self.height(node.right)

    # Right rotation
    def right_rotate(self, y):
        x = y.left
        T2 = x.right

        # Perform the rotation
        x.right = y
        y.left = T2

        # Update heights
        y.height = max(self.height(y.left), self.height(y.right)) + 1
        x.height = max(self.height(x.left), self.height(x.right)) + 1

        # Return the new root of this subtree
        return x

    # Left rotation
    def left_rotate(self, x):
        y = x.right
        T2 = y.left

        # Perform the rotation
        y.left = x
        x.right = T2

        # Update heights
        x.height = max(self.height(x.left), self.height(x.right)) + 1
        y.height = max(self.height(y.left), self.height(y.right)) + 1

        # Return the new root of this subtree
        return y

    # Insert a node
    def insert(self, root, key):
        # Perform a normal BST insertion
        if not root:
            return TreeNode(key)

        if key < root.key:
            root.left = self.insert(root.left, key)
        elif key > root.key:
            root.right = self.insert(root.right, key)
        else:  # Equal keys are not allowed
            return root

        # Update the height of this ancestor node
        root.height = max(self.height(root.left), self.height(root.right)) + 1

        # Get the balance factor to check whether the node became unbalanced
        balance = self.get_balance(root)

        # Left Left Case
        if balance > 1 and key < root.left.key:
            return self.right_rotate(root)

        # Right Right Case
        if balance < -1 and key > root.right.key:
            return self.left_rotate(root)

        # Left Right Case
        if balance > 1 and key > root.left.key:
            root.left = self.left_rotate(root.left)
            return self.right_rotate(root)

        # Right Left Case
        if balance < -1 and key < root.right.key:
            root.right = self.right_rotate(root.right)
            return self.left_rotate(root)

        # Return the unchanged node
        return root

# Binary tree traversal implementations
class BinaryTreeTraversal:
    @staticmethod
    def inorder_recursive(root, result=None):
        if result is None:
            result = []
        if root:
            BinaryTreeTraversal.inorder_recursive(root.left, result)
            result.append(root.key)
            BinaryTreeTraversal.inorder_recursive(root.right, result)
        return result

    @staticmethod
    def preorder_recursive(root, result=None):
        if result is None:
            result = []
        if root:
            result.append(root.key)
            BinaryTreeTraversal.preorder_recursive(root.left, result)
            BinaryTreeTraversal.preorder_recursive(root.right, result)
        return result

    @staticmethod
    def postorder_recursive(root, result=None):
        if result is None:
            result = []
        if root:
            BinaryTreeTraversal.postorder_recursive(root.left, result)
            BinaryTreeTraversal.postorder_recursive(root.right, result)
            result.append(root.key)
        return result

    @staticmethod
    def inorder_iterative(root):
        result = []
        stack = []
        current = root

        while current or stack:
            # Reach the leftmost node of the current subtree
            while current:
                stack.append(current)
                current = current.left

            # Current is now None, pop a node from the stack and visit it
            current = stack.pop()
            result.append(current.key)
            # Then visit the right subtree
            current = current.right

        return result

    @staticmethod
    def preorder_iterative(root):
        if not root:
            return []

        result = []
        stack = [root]

        while stack:
            current = stack.pop()
            result.append(current.key)

            # Push the right child first so that the left child is processed first
            if current.right:
                stack.append(current.right)
            if current.left:
                stack.append(current.left)

        return result

    @staticmethod
    def postorder_iterative(root):
        if not root:
            return []

        result = []
        stack1 = [root]
        stack2 = []

        # First, push all nodes in reverse postorder using the two stacks
        while stack1:
            current = stack1.pop()
            stack2.append(current)

            if current.left:
                stack1.append(current.left)
            if current.right:
                stack1.append(current.right)

        # Then pop from the second stack to obtain the postorder
        while stack2:
            result.append(stack2.pop().key)
        return result

    @staticmethod
    def level_order_traversal(root):
        if not root:
            return []

        result = []
        queue = [root]

        while queue:
            current = queue.pop(0)
            result.append(current.key)

            if current.left:
                queue.append(current.left)
            if current.right:
                queue.append(current.right)

        return result
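A short usage sketch (an assumed example, not part of the original answer): insert returns the root of the rebalanced subtree, so the caller must reassign root after each insertion.

python

avl = AVLTree()
root = None
for key in [10, 20, 30, 40, 50, 25]:
    root = avl.insert(root, key)   # reassign: rotations may change the root

print("Preorder:   ", BinaryTreeTraversal.preorder_recursive(root))    # [30, 20, 10, 25, 40, 50]
print("Inorder:    ", BinaryTreeTraversal.inorder_recursive(root))     # [10, 20, 25, 30, 40, 50]
print("Level order:", BinaryTreeTraversal.level_order_traversal(root)) # [30, 20, 40, 10, 25, 50]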
