Important QN
3) Construct the AVL tree with the following numbers (by mentioning the rotations used)
12, 15, 36, 17, 34, 85, 64, 19, 3 and find the balance factor of the nodes: 64 and 34.
Constructing the AVL Tree
AVL Tree Properties:
Self-balancing binary search tree.
The height difference between the left and right subtrees of any node is at most one.
This balance is maintained using rotations (left and right rotations).
Insertion and Rotations:
1. Insert 12:
o Root node.
2. Insert 15:
o Right child of 12.
3. Insert 36:
o Right child of 15; node 12 becomes unbalanced (RR case).
o Left rotation at 12 → 15 becomes the root with children 12 and 36.
4. Insert 17:
o Left child of 36; the tree remains balanced.
5. Insert 34:
o Right child of 17; node 36 becomes unbalanced (LR case).
o Left rotation at 17 followed by right rotation at 36 → 34 becomes the subtree root.
```
    15
   /  \
  12    34
       /  \
     17    36
```
6. Insert 85:
o Right child of 36; node 15 becomes unbalanced (RR case).
o Left rotation at 15 → 34 becomes the root.
7. Insert 64:
o Left child of 85; node 36 becomes unbalanced (RL case).
o Right rotation at 85 followed by left rotation at 36 → 64 becomes the subtree root with children 36 and 85.
8. Insert 19:
o Right child of 17; the tree remains balanced.
9. Insert 3:
o Left child of 12; the tree remains balanced.
Final AVL tree:
```
            34
          /    \
       15        64
      /  \      /  \
    12    17  36    85
   /        \
  3          19
```
**Balance Factors:**
* Node 64: height of left child (36) - height of right child (85) = 1 - 1 = 0
* Node 34: height of left subtree (rooted at 15) - height of right subtree (rooted at 64) = 3 - 2 = +1
**Rotations Used:**
* Left rotation at 12 (while inserting 36, RR case)
* Left-right double rotation at 36 (while inserting 34, LR case)
* Left rotation at 15 (while inserting 85, RR case)
* Right-left double rotation at 36 (while inserting 64, RL case)
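The construction and the two requested balance factors can be checked with a compact AVL-insertion sketch in Python. This is a generic textbook-style implementation, not code from the original notes:

```python
# Minimal AVL sketch: height-tracking nodes plus the four rotation cases.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def height(node):
    return node.height if node else 0

def balance(node):
    return height(node.left) - height(node.right) if node else 0

def update(node):
    node.height = 1 + max(height(node.left), height(node.right))
    return node

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    update(y)
    return update(x)

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x)
    return update(y)

def insert(node, key):
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    update(node)
    b = balance(node)
    if b > 1 and key < node.left.key:      # LL case
        return rotate_right(node)
    if b < -1 and key > node.right.key:    # RR case
        return rotate_left(node)
    if b > 1:                              # LR case
        node.left = rotate_left(node.left)
        return rotate_right(node)
    if b < -1:                             # RL case
        node.right = rotate_right(node.right)
        return rotate_left(node)
    return node

def find(node, key):
    if node is None or node.key == key:
        return node
    return find(node.left, key) if key < node.key else find(node.right, key)

root = None
for key in [12, 15, 36, 17, 34, 85, 64, 19, 3]:
    root = insert(root, key)

print(root.key)                  # 34 (final root)
print(balance(find(root, 64)))   # 0
print(balance(find(root, 34)))   # 1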
4) Find the DFS traversal for the graph given below, starting at node 1. Show the stack trace at
each step of the DFS traversal and provide the DFS tree.
DFS Traversal for the Given Graph
DFS (Depth-First Search):
A graph traversal algorithm that explores as deeply as possible along each branch
before backtracking.
Given Graph:
DFS Traversal:
1. Stack: [1]
o Visit: 1
2. Stack: [1, 2]
o Visit: 2
3. Stack: [1, 2, 3]
o Visit: 3
4. Stack: [1, 2, 3, 4]
o Visit: 4
5. Stack: [1, 2, 3, 4, 5]
o Visit: 5
6. Stack: [1, 2, 3, 4, 5, 6]
o Visit: 6
7. Stack: [1, 2, 3, 4, 5, 6, 7]
o Visit: 7
8. Stack: [1, 2, 3, 4, 5, 6]
o Backtrack: 7
9. Stack: [1, 2, 3, 4, 5]
o Backtrack: 6
10. Stack: [1, 2, 3, 4]
o Backtrack: 5
11. Stack: [1, 2, 3]
o Backtrack: 4
12. Stack: [1, 2]
o Backtrack: 3
13. Stack: [1]
o Backtrack: 2
14. Stack: Empty
DFS traversal complete.
DFS Tree:
Since each vertex is discovered from the previous one, the DFS tree is a single path: 1 → 2 → 3 → 4 → 5 → 6 → 7.
DFS Traversal Sequence: 1, 2, 3, 4, 5, 6, 7
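The stack trace above can be reproduced with a short iterative DFS. The sketch below is not from the original notes, and its adjacency list is only a placeholder (a chain 1-7), since the original graph figure is not reproduced here; substitute the actual adjacency list to get the real trace.

```python
# Minimal iterative DFS that prints the stack at every step.
def dfs_with_trace(graph, start):
    visited = set()
    stack = [start]          # explicit stack instead of recursion
    order = []
    while stack:
        print("Stack:", stack)
        node = stack[-1]
        if node not in visited:
            visited.add(node)
            order.append(node)
        # Push the first unvisited neighbour; otherwise backtrack (pop).
        for neighbour in graph[node]:
            if neighbour not in visited:
                stack.append(neighbour)
                break
        else:
            stack.pop()
    return order

# Placeholder graph: vertices 1-7 in a chain, matching the traversal above.
graph = {i: [i + 1] for i in range(1, 7)}
graph[7] = []
print("DFS order:", dfs_with_trace(graph, 1))
```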
5) Given an undirected graph with V vertices labeled from 0 to V-1 and E edges, check
whether it contains any cycle or not. The graph is given as an adjacency list where adj[i]
contains the vertices adjacent to vertex i.
```python
def is_cyclic(graph):
    visited = [False] * len(graph)
    for node in range(len(graph)):
        if not visited[node]:
            if dfs(graph, visited, node, -1):
                return True
    return False
```
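The dfs helper that is_cyclic relies on is not shown above. A minimal sketch, consistent with the explanation below (mark the node visited, recurse on unvisited neighbours, and treat a visited neighbour other than the parent as a back edge), could look like this:

```python
def dfs(graph, visited, node, parent):
    # Returns True if a cycle is reachable from `node` in an undirected graph.
    visited[node] = True
    for neighbour in graph[node]:
        if not visited[neighbour]:
            if dfs(graph, visited, neighbour, node):
                return True
        elif neighbour != parent:
            # A visited neighbour that is not the parent is a back edge -> cycle.
            return True
    return False
```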
Explanation:
1. dfs function:
o Marks the current node as visited.
o Recursively calls dfs for unvisited neighbors.
o If a visited neighbor is encountered and it's not the parent, a cycle is detected.
2. is_cyclic function:
o Iterates through all nodes.
o If an unvisited node is found, starts a DFS traversal from it.
o If a cycle is detected during the traversal, returns True.
Applying to the Given Graph:
The given graph can be represented as an adjacency list:
```python
graph = [
    [2, 5],
    [1, 3, 5, 8],
    [2, 6],
    [2],
    [1, 2],
    [3],
    []
]
```
DFS Traversal and Cycle Detection:
1. Start DFS from node 1:
o Visit 1
o Visit 2
o Visit 3
o Visit 6
o Backtrack to 3
o Backtrack to 2
o Visit 5
o Backtrack to 2
o Visit 8
o Backtrack to 1
2. Since no cycles were detected during the DFS traversal, the graph is acyclic.
Conclusion: The given graph does not contain any cycles. The DFS traversal effectively
detected this by marking nodes as visited and checking for back edges that would indicate a
cycle.
7) Let G be a graph whose vertices are the integers 1 through 8, and let the adjacent vertices
of each vertex be given by the table below:
Assume that, in a traversal of G, the adjacent vertices of a given vertex are returned in
the same order as they are listed in the above table.
a. Draw G.
To draw the graph \( G \) based on the provided adjacency information for vertices 1 through 8,
the connections can be sketched as:
```
1
/|\
2 | 4 -- 6
\|/ /|\
3 5 7
|
8
```
| Character   | a  | b | c | d | e  | f  |
|-------------|----|---|---|---|----|----|
| Occurrences | 15 | 7 | 5 | 8 | 30 | 10 |
c) Explain which letter you would expect to find closest to the top of the Huffman tree
created for the expected word. (0.5M)
To construct the Huffman tree, we start by listing the characters and their frequencies:
| Character | Frequency |
|-----------|-----------|
|a | 15 |
|b |7 |
|c |5 |
|d |8 |
|e | 30 |
|f | 10 |
1. **Initial List**:
   - (c, 5), (b, 7), (d, 8), (f, 10), (a, 15), (e, 30)
2. **Combine the Two Smallest**:
   - Combine (c, 5) and (b, 7) → New Node (cb, 12)
   - List: (d, 8), (f, 10), (cb, 12), (a, 15), (e, 30)
3. **Combine Again**:
   - Combine (d, 8) and (f, 10) → New Node (df, 18)
   - List: (cb, 12), (a, 15), (df, 18), (e, 30)
4. **Combine Again**:
- Combine (cb, 12) and (a, 15) → New Node (cba, 27)
- List: (df, 18), (cba, 27), (e, 30)
5. **Combine Again**:
- Combine (df, 18) and (cba, 27) → New Node (dfcba, 45)
- List: (e, 30), (dfcba, 45)
6. **Final Combine**:
- Combine (e, 30) and (dfcba, 45) → Root Node (root, 75)
**Compressed:** `101111000100`
**Compressed:** `101000100`
\[
\text{Percentage of vowels} = \left(\frac{2}{5}\right) \times 100 = 40\%
\]
### e) Explanation for Expected Letter at the Top of the Huffman Tree
In Huffman coding, characters with higher frequencies are assigned shorter codes and are placed
closer to the root of the tree. Therefore, since 'e' has the highest frequency (30 occurrences), we
would expect it to be closest to the top of the Huffman tree.
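A small Python sketch of this greedy combination process using heapq (not part of the original answer; its tie-breaking may differ from the hand-built tree, but the resulting code lengths are the same):

```python
import heapq

def huffman_code_lengths(freqs):
    """Build a Huffman tree greedily and return each symbol's code length (depth)."""
    # Heap entries: (frequency, tie_breaker, {symbol: depth_so_far})
    heap = [(f, i, {ch: 0}) for i, (ch, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper.
        merged = {ch: depth + 1 for ch, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

freqs = {'a': 15, 'b': 7, 'c': 5, 'd': 8, 'e': 30, 'f': 10}
print(huffman_code_lengths(freqs))   # 'e' gets the shortest code (depth 1)
```

With these frequencies it reports a depth of 1 for 'e', 3 for 'a', 'd', and 'f', and 4 for 'b' and 'c', matching the hand-built tree: 'e' sits directly below the root.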
10) Compress ‘BILL BEATS BEN.’ using the Huffman approach. [ 5 Marks]
To compress the string "BILL BEATS BEN." using the Huffman coding approach, we first
need to determine the frequency of each character in the string.
| Character | Frequency |
|-----------|-----------|
| B | 3 |
| I | 1 |
| L | 2 |
| E | 2 |
| A | 1 |
| T | 1 |
| S | 1 |
| N | 1 |
| . | 1 |
| (space) | 2 |

(The string contains 15 characters in total.)
1. **Initial List**:
   - (I, 1), (A, 1), (T, 1), (S, 1), (N, 1), (., 1), (L, 2), (E, 2), (space, 2), (B, 3)
2. **Combine the Two Smallest**:
   - Combine (I, 1) and (A, 1) → New Node (IA, 2)
   - List: (T, 1), (S, 1), (N, 1), (., 1), (L, 2), (E, 2), (space, 2), (IA, 2), (B, 3)
3. **Combine Again**:
   - Combine (T, 1) and (S, 1) → New Node (TS, 2)
   - List: (N, 1), (., 1), (L, 2), (E, 2), (space, 2), (IA, 2), (TS, 2), (B, 3)
4. **Combine Again**:
   - Combine (N, 1) and (., 1) → New Node (N., 2)
   - List: (L, 2), (E, 2), (space, 2), (IA, 2), (TS, 2), (N., 2), (B, 3)
5. **Combine Again**:
   - Combine (L, 2) and (E, 2) → New Node (LE, 4)
   - List: (space, 2), (IA, 2), (TS, 2), (N., 2), (B, 3), (LE, 4)
6. **Combine Again**:
   - Combine (space, 2) and (IA, 2) → New Node (spaceIA, 4)
   - List: (TS, 2), (N., 2), (B, 3), (LE, 4), (spaceIA, 4)
7. **Combine Again**:
   - Combine (TS, 2) and (N., 2) → New Node (TSN., 4)
   - List: (B, 3), (LE, 4), (spaceIA, 4), (TSN., 4)
8. **Combine Again**:
   - Combine (B, 3) and (LE, 4) → New Node (BLE, 7)
   - List: (spaceIA, 4), (TSN., 4), (BLE, 7)
9. **Combine Again**:
   - Combine (spaceIA, 4) and (TSN., 4) → New Node (spaceIATSN., 8)
   - List: (BLE, 7), (spaceIATSN., 8)
10. **Final Combine**:
    - Combine (BLE, 7) and (spaceIATSN., 8) → Root Node (15)

(Ties between equal frequencies can be broken differently; any such tree gives the same total compressed length.)
Assigning 0 to the first branch and 1 to the second branch of each merge in the tree constructed
above gives the following binary codes:
- **B**: 00
- **I**: 1010
- **L**: 010
- **E**: 011
- **A**: 1011
- **T**: 1100
- **S**: 1101
- **N**: 1110
- **.**: 1111
- **(space)**: 100
Now, we can compress the string "BILL BEATS BEN." using the generated Huffman
codes.
**Compressed Representation**:
- B → 00
- I → 1010
- L → 010
- L → 010
- (space) → 100
- B → 00
- E → 011
- A → 1011
- T → 1100
- S → 1101
- (space) → 100
- B → 00
- E → 011
- N → 1110
- . → 1111
Concatenating these codes gives the compressed bit string
`00 1010 010 010 100 00 011 1011 1100 1101 100 00 011 1110 1111`
(48 bits, versus 15 × 8 = 120 bits of uncompressed 8-bit ASCII). This binary string represents
the compressed version of "BILL BEATS BEN." under Huffman coding.
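As a quick sanity check (not part of the original answer), the short sketch below applies the code table above to the string and decodes the bit string back. The table is one valid Huffman assignment; other tie-breakings give different but equally short codes.

```python
# One valid Huffman code table for "BILL BEATS BEN." (derived above).
CODES = {
    'B': '00', 'L': '010', 'E': '011', ' ': '100', 'I': '1010',
    'A': '1011', 'T': '1100', 'S': '1101', 'N': '1110', '.': '1111',
}

def encode(text, codes):
    return ''.join(codes[ch] for ch in text)

def decode(bits, codes):
    # Invert the table and peel off one prefix-free code word at a time.
    inverse = {code: ch for ch, code in codes.items()}
    out, buffer = [], ''
    for bit in bits:
        buffer += bit
        if buffer in inverse:
            out.append(inverse[buffer])
            buffer = ''
    return ''.join(out)

message = "BILL BEATS BEN."
bits = encode(message, CODES)
print(bits, len(bits), "bits")        # 48 bits vs. 15 * 8 = 120 bits uncompressed
assert decode(bits, CODES) == message
```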
11)Construct the Binary search tree for values 15, 20, 10, 8, 25, 12, 16.
Write the inorder successor of 12 and 15 from the constructed tree.
To construct a Binary Search Tree (BST) with the values 15, 20, 10, 8, 25, 12, and 16, we
will insert the values in the given order.
1. **Insert 15**:
- Tree:
```
15
```
2. **Insert 20**:
- Tree:
```
15
  \
   20
```
3. **Insert 10**:
- Tree:
```
    15
   /  \
  10    20
```
4. **Insert 8**:
- Tree:
```
      15
     /  \
   10    20
   /
  8
```
5. **Insert 25**:
- Tree:
```
      15
     /  \
   10    20
   /       \
  8         25
```
6. **Insert 12**:
- Tree:
```
       15
      /  \
    10    20
   /  \     \
  8    12    25
```
7. **Insert 16**:
- 16 > 15, so it goes right; 16 < 20, so it becomes the left child of 20.
- Tree:
```
        15
      /    \
    10      20
   /  \    /  \
  8    12 16    25
```
**Inorder Successor**:
The inorder successor of a node is the node with the smallest key greater than that node's key.
The inorder traversal of the tree above is 8, 10, 12, 15, 16, 20, 25.
### Summary
- **Inorder Successor of 12**: 15
- **Inorder Successor of 15**: 16
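A short Python sketch (not from the original answer) that builds this BST and confirms both successors:

```python
# Minimal BST insertion plus an iterative inorder-successor lookup.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder_successor(root, key):
    """Smallest key greater than `key`, or None if there is none."""
    successor = None
    node = root
    while node:
        if key < node.key:
            successor = node.key   # candidate; look for a smaller one on the left
            node = node.left
        else:
            node = node.right
    return successor

root = None
for value in [15, 20, 10, 8, 25, 12, 16]:
    root = insert(root, value)

print(inorder_successor(root, 12))   # 15
print(inorder_successor(root, 15))   # 16
```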