ESP-VI Part C

The document covers various computer science topics including max-heap insertion, linked list manipulation, operator precedence, recursive function evaluation, and merge sort comparisons. It also discusses depth-first search (DFS) node discovery, AVL tree insertion, stack and queue simulations, cache memory access times, and Turing machines for palindrome checking and Fibonacci computation. Each section provides detailed explanations and code snippets where applicable.

Max-Heap Linked List Function

answer in details

Let's go through each of the five questions in the image step by step:

Question 1: Max-Heap Insertion

Initial max-heap (level-order):


`10, 8, 5, 3, 2`

Insertions: `1`, then `7`

Step-by-step Insertion:

1. Insert 1
Heap becomes: `10, 8, 5, 3, 2, 1`
Since 1 < 5 (its parent), no swapping needed.

2. Insert 7
Heap becomes: `10, 8, 5, 3, 2, 1, 7`

7 > 5 (its parent), swap with 5 → `10, 8, 7, 3, 2, 1, 5`

Final heap (level-order traversal):

`10, 8, 7, 3, 2, 1, 5`
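
The sift-up logic used above can be sketched in C as follows; the array layout and the function name `heap_insert` are illustrative, not part of the original question.

```c
#include <stdio.h>

/* Insert `key` into a max-heap stored in heap[0..(*n)-1] and sift it up. */
void heap_insert(int heap[], int *n, int key) {
    int i = (*n)++;                      /* place the new key in the next free slot */
    heap[i] = key;
    while (i > 0) {
        int parent = (i - 1) / 2;
        if (heap[i] <= heap[parent])     /* max-heap property already holds */
            break;
        int tmp = heap[i];               /* otherwise swap with the parent  */
        heap[i] = heap[parent];
        heap[parent] = tmp;
        i = parent;
    }
}

int main(void) {
    int heap[16] = {10, 8, 5, 3, 2};
    int n = 5;
    heap_insert(heap, &n, 1);   /* 1 < its parent 5: no swap */
    heap_insert(heap, &n, 7);   /* 7 > its parent 5: one swap */
    for (int i = 0; i < n; i++)
        printf("%d ", heap[i]); /* prints: 10 8 7 3 2 1 5 */
    printf("\n");
    return 0;
}
```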

Question 2: C Function for Moving Last Node to Front in a Linked List

Task: Move the last element of a singly linked list to the front.

C Code:

`struct Node {
    int data;
    struct Node* next;
};

/* Moves the last node of a singly linked list to the front and
   returns the new head. */
struct Node* moveLastToFront(struct Node* head) {
    /* Empty list or single node: nothing to move. */
    if (head == NULL || head->next == NULL)
        return head;

    /* Walk to the end, remembering the second-to-last node. */
    struct Node* secondLast = NULL;
    struct Node* last = head;
    while (last->next != NULL) {
        secondLast = last;
        last = last->next;
    }

    /* Detach the last node and splice it in at the front. */
    secondLast->next = NULL;
    last->next = head;
    head = last;

    return head;
}
`

Question 3: Parse Tree with Custom Operator Precedence

Given operators:

`↧` (left-associative, higher precedence)

`∨` (right-associative, lower precedence)

Expression: `7 ↧ 1 ∨ 3 ↧ 4 ∨ 3 ∧ 2`

Let’s simplify the precedence:

1. Highest precedence: `↧` (left-associative)


So `7 ↧ 1` and `3 ↧ 4` evaluated first.
2. Then `∨` and then `∧`.

Since the operators have no arithmetic meaning here, only the stated precedence and associativity are used to build the parse tree.

Example parse tree:

markdown

` ∨
/ \
↧ ∧
/ \ / \
7 1 3 2
/
4
`

This structure depends on exact operator definitions, but parsing happens in this precedence-based
order.

Question 4: Recursive C Function Evaluation

`unsigned int foo(unsigned int n, unsigned int r) {
    if (n > 0)
        return ((n % r) + foo(n / r, r));
    else
        return 0;
}
`

Let’s compute:

`foo(345, 10)`
Base `r = 10`, so this gives the sum of digits in base 10 of 345 → `3 + 4 + 5 = 12`

`foo(513, 2)`
This gives the sum of digits in base 2 of 513
`513 in binary = 1000000001` → sum of digits = `1 + 0 + ... + 1 = 2`

Final Answers:

`foo(345, 10) = 12`

`foo(513, 2) = 2`

Question 5: Merge Sort Comparisons

Let `n = 2^k`, number of elements.

Minimum and Maximum Comparisons:

Comparisons happen only during the merge steps. Merging two sorted runs of length m each takes at least m comparisons (when every element of one run is smaller than every element of the other) and at most 2m − 1 comparisons (when the runs interleave).

With n = 2^k there are log₂n levels of merging, so:

Minimum comparisons: `(n/2) log₂n`

Maximum comparisons: `n log₂n - n + 1`

(These are the standard bounds for the number of comparisons in merge sort.)
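
These bounds can be checked empirically with a small C merge sort that counts element comparisons; the test arrays below are illustrative.

```c
#include <stdio.h>
#include <string.h>

static long comparisons = 0;   /* global counter for element comparisons */

/* Merge a[lo..mid-1] and a[mid..hi-1], counting every comparison made. */
static void merge(int a[], int tmp[], int lo, int mid, int hi) {
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi) {
        comparisons++;
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    }
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof(int));
}

static void merge_sort(int a[], int tmp[], int lo, int hi) {
    if (hi - lo < 2) return;
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);
    merge_sort(a, tmp, mid, hi);
    merge(a, tmp, lo, mid, hi);
}

int main(void) {
    int sorted[8]   = {1, 2, 3, 4, 5, 6, 7, 8};   /* best case: (n/2)*log2(n) = 12 */
    int shuffled[8] = {3, 7, 1, 8, 5, 2, 6, 4};   /* lies between 12 and the max 17 */
    int tmp[8];

    merge_sort(sorted, tmp, 0, 8);
    printf("already sorted: %ld comparisons\n", comparisons);

    comparisons = 0;
    merge_sort(shuffled, tmp, 0, 8);
    printf("shuffled input: %ld comparisons\n", comparisons);
    return 0;
}
```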

✅ Summary:
Q.No Topic Final Answer

1 Max Heap Insertion `10, 8, 7, 3, 2, 1, 5`


2 Linked List Modification Function to move last node to front (code provided)
3 Operator Precedence Parse tree considering `↧ > ∨` and associativity
4 Recursive Function `foo(345, 10) = 12`, `foo(513, 2) = 2`
5 Merge Sort Comparisons Min = `(n/2) log₂n`, Max = `n log₂n - n + 1`

Let me know if you want diagrams or dry run examples for any of these!

DFS and AVL Explanation

Uploaded image
Uploaded image
Uploaded image
answer in details

Great! Let's go through the questions one by one in detail:

Question 6: DFS Node Discovery

Question: Evaluate how many nodes (apart from `s`) the Depth First Search (DFS) algorithm discovers before discovering `t` when starting from `s`.

Approach:

From the image, the graph is layered. The nodes are named `u[i,j]` and the edges are directional.

DFS explores as deep as possible before backtracking.

Starting from `s`, DFS will go deep into one path until it reaches a dead end or the target.

Steps:

1. Starting at `s`, DFS might go:


`s → u[1,1] → u[2,1] → u[3,1] → u[4,1] → t`

2. Nodes discovered before `t`: `u[1,1]`, `u[2,1]`, `u[3,1]`, `u[4,1]`

Answer: 4 nodes
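
Since the actual graph is in the uploaded image, the C sketch below only illustrates the counting idea: it encodes just the single path `s → u[1,1] → u[2,1] → u[3,1] → u[4,1] → t` described above as a small adjacency matrix (a stand-in, not the full graph) and counts vertices discovered before `t`.

```c
#include <stdio.h>
#include <stdbool.h>

#define N 6   /* 0 = s, 1..4 = u[1,1]..u[4,1], 5 = t (illustrative stand-in graph) */

int adj[N][N] = {
    /* s  1  2  3  4  t */
    {  0, 1, 0, 0, 0, 0 },   /* s      -> u[1,1] */
    {  0, 0, 1, 0, 0, 0 },   /* u[1,1] -> u[2,1] */
    {  0, 0, 0, 1, 0, 0 },   /* u[2,1] -> u[3,1] */
    {  0, 0, 0, 0, 1, 0 },   /* u[3,1] -> u[4,1] */
    {  0, 0, 0, 0, 0, 1 },   /* u[4,1] -> t      */
    {  0, 0, 0, 0, 0, 0 },   /* t                */
};

bool visited[N];
int discovered_before_t = 0;   /* counts vertices other than s and t */
bool found_t = false;

void dfs(int v) {
    visited[v] = true;
    if (v == 5) { found_t = true; return; }   /* stop counting once t is found */
    if (v != 0) discovered_before_t++;        /* do not count s itself         */
    for (int w = 0; w < N && !found_t; w++)
        if (adj[v][w] && !visited[w])
            dfs(w);
}

int main(void) {
    dfs(0);
    printf("nodes discovered before t: %d\n", discovered_before_t);  /* 4 */
    return 0;
}
```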

Question 7: AVL Tree Insertion

Lexicographic order input: Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec

Steps:

Insert elements into AVL Tree one by one.


After each insertion, check balance factors.
Perform rotations if balance factor of any node becomes > 1 or < -1.

Keys in lexicographic (sorted) order — this is also the in-order traversal of the final AVL tree:

markdown

`1. Apr
2. Aug
3. Dec
4. Feb
5. Jan
6. Jul
7. Jun
8. Mar
9. May
10. Nov
11. Oct
12. Sep
`

(You can visualize the AVL tree with rotations at every imbalance and compute balance factors at each
stage.)

Would you like a fully drawn tree with balance factors for each step?

Question 8: Stack and Queue Simulation

Given elements: a0 = 1, a1 = 5, a2 = 7, a3 = 8, a4 = 9, a5 = 2

Operations Summary:

Stack: LIFO
Queue: FIFO

Let me simulate the steps briefly:

Step I: Push into Stack:


`S: [1, 5, 7, 8, 9, 2]` (top is 2)
Step II: Enqueue into Queue:
`Q: [1, 5, 7, 8, 9, 2]`

Step III: Pop from Stack → `2`

Step IV: Dequeue from Queue → `1`

Step V: Pop from Stack → `9`


Step VI: Dequeue from Queue → `5`

Step VII: Dequeue (`7`) and push it into the stack


`Q: [8, 9, 2]`
`S: [1, 5, 7, 8, 7]` (after the pops in Steps III and V, the stack was `[1, 5, 7, 8]`)

Repeat Step VII three times:

Dequeue `8`, push into `S` → `S: [..., 8]`, `Q: [9, 2]`

Dequeue `9`, push into `S` → `S: [..., 9]`, `Q: [2]`

Dequeue `2`, push into `S` → `S: [..., 2]`, `Q: []`

Step IX: Pop from Stack → `2`

Step X: Pop from Stack → `9`
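
A compact C sketch of these steps, using a fixed-size array stack and queue, reproduces the values popped in Steps IX and X; the helper names are illustrative.

```c
#include <stdio.h>

int S[32], top = -1;                 /* stack */
int Q[32], front = 0, rear = 0;      /* queue */

void push(int x)    { S[++top] = x; }
int  pop(void)      { return S[top--]; }
void enqueue(int x) { Q[rear++] = x; }
int  dequeue(void)  { return Q[front++]; }

int main(void) {
    int a[6] = {1, 5, 7, 8, 9, 2};
    for (int i = 0; i < 6; i++) push(a[i]);     /* Step I  */
    for (int i = 0; i < 6; i++) enqueue(a[i]);  /* Step II */

    printf("Step III  pop     -> %d\n", pop());      /* 2 */
    printf("Step IV   dequeue -> %d\n", dequeue());  /* 1 */
    printf("Step V    pop     -> %d\n", pop());      /* 9 */
    printf("Step VI   dequeue -> %d\n", dequeue());  /* 5 */

    for (int i = 0; i < 4; i++)     /* Step VII plus its three repetitions */
        push(dequeue());            /* moves 7, 8, 9, 2 from queue to stack */

    printf("Step IX   pop     -> %d\n", pop());      /* 2 */
    printf("Step X    pop     -> %d\n", pop());      /* 9 */
    return 0;
}
```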

Would you like a step-by-step table of stack and queue states?

Question 9: Cache Memory Access Time

Given:
Cache = 100 ns

Main memory = 1000 ns

70% reads, 30% writes


Hit ratio for reads = 0.9

Write-through protocol

(a) Average access time (read only):

Average read access time =


`(hit ratio × cache time) + (miss ratio × (cache + memory))`
= `0.9 × 100 + 0.1 × (100 + 1000)`
= `90 + 110 = 200 ns`

(b) Average access time for both read/write:

Read: `70% × 200`


Write (write-through always updates both cache and memory):
Time = Cache time + Memory time = 100 + 1000 = 1100 ns
`30% × 1100`

Total = `0.7×200 + 0.3×1100 = 140 + 330 = 470 ns`

(c) Hit ratio with write cycles:


Since all writes go to memory (write-through), write hit doesn't save time.
So hit ratio mostly applies to reads: 0.9
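
The arithmetic for parts (a) and (b) can be written out as a short C program, which also makes the assumptions explicit (a read miss costs a cache probe plus a memory access, and every write-through write pays cache time plus memory time, as in the working above):

```c
#include <stdio.h>

int main(void) {
    double t_cache = 100.0, t_mem = 1000.0;   /* access times in ns */
    double read_frac = 0.7, write_frac = 0.3;
    double read_hit = 0.9;

    /* (a) reads only: hit serves from cache, miss probes cache then memory */
    double t_read = read_hit * t_cache + (1.0 - read_hit) * (t_cache + t_mem);

    /* (b) write-through: every write goes to both cache and main memory */
    double t_write = t_cache + t_mem;
    double t_avg = read_frac * t_read + write_frac * t_write;

    printf("average read time   = %.0f ns\n", t_read);  /* 200 ns */
    printf("average access time = %.0f ns\n", t_avg);   /* 470 ns */
    return 0;
}
```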

Would you like detailed solutions for Questions 10 to 15 as well?

yes

Awesome! Let's go through Questions 10 to 15 in detail now.

Question 10: Cache Mapping Fields

2-way set associative cache

256 blocks
8 words per block
Main memory is addressable

16-bit address
4K blocks in memory

Goal: Calculate bits for TAG, BLOCK/SET, and WORD fields for various mapping techniques.

Memory Breakdown:
Total addressable memory = 4K blocks × 8 words/block = 32K words

Memory size = 2¹⁵ words (since 32K = 2¹⁵), hence address size = 15 bits

WORD field (block offset):

Each block = 8 words = 2³ ⇒ 3 bits for word

SET field (index within cache):

2-way set associative ⇒ each set has 2 blocks

Total cache blocks = 256 ⇒ Sets = 256 / 2 = 128 = 2⁷ ⇒ 7 bits for set index

TAG field:

Total = 15 bits

TAG = 15 - 7 (set) - 3 (word) = 5 bits

Final Answer (for 2-way set associative):

TAG = 5 bits
SET/Block Index = 7 bits
WORD (Offset) = 3 bits
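
The field widths can be checked mechanically. This C sketch derives them from the block size, number of cache blocks, associativity, and the total address width (taken as 15 bits here, following the working above):

```c
#include <stdio.h>

/* integer log2 for exact powers of two */
static int log2i(unsigned x) { int b = 0; while (x > 1) { x >>= 1; b++; } return b; }

int main(void) {
    int words_per_block = 8;
    int cache_blocks    = 256;
    int ways            = 2;      /* 2-way set associative */
    int address_bits    = 15;     /* 32K addressable words, as derived above */

    int word_bits = log2i(words_per_block);              /* 3 */
    int set_bits  = log2i(cache_blocks / ways);           /* 128 sets -> 7 */
    int tag_bits  = address_bits - set_bits - word_bits;  /* 5 */

    printf("TAG = %d, SET = %d, WORD = %d\n", tag_bits, set_bits, word_bits);
    return 0;
}
```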

Would you like direct comparison with direct-mapped and fully associative too?

Question 11: Memory Map

RAM chip: 256 × 8

ROM chip: 1024 × 16


Needs: 2K RAM, 4K ROM, 4 units with 4 registers

RAM Size:

Needed: 2K bytes = 2048 bytes

Each chip = 256 bytes (256 × 8)


Number of RAM chips needed = 2048 / 256 = 8

ROM Size:

Needed: 4K bytes = 4096 bytes


Each chip = 1024 × 16 bits = 1024 × 2 bytes = 2048 bytes

Number of ROM chips = 4096 / 2048 = 2

Memory Map in Hexadecimal:

Assuming RAM starts at `0x0000`

RAM range:
Size = 2K = 0x0800 ⇒
RAM: `0x0000` – `0x07FF`
ROM range:
Starts from `0x0800`, Size = 4K = 0x1000 ⇒
ROM: `0x0800` – `0x17FF`

I/O (interface units with 4 registers):


4 units × 4 = 16 registers = 16 bytes
Start at `0x1800` to `0x180F` (or next aligned address)

Question 12: 4-Way Set-Associative Cache

Given:

4-word blocks
1024 words total in cache

Memory size = 128K × 32

(a) Formulate data:

Block size = 4 words = 2² words/block

Cache size = 1024 words = 1024 / 4 = 256 blocks


4-way set associative ⇒ #sets = 256 / 4 = 64 sets

Address size:

Main memory size = 128K words = 2¹⁷ addresses ⇒ 17-bit address

Break address:

WORD: 2 bits (within block)


SET: 64 = 2⁶ ⇒ 6 bits
TAG: 17 - 6 - 2 = 9 bits

(b) Cache size in bytes:

Cache holds 1024 words


Assuming 32-bit word ⇒ 4 bytes per word
⇒ 1024 × 4 = 4096 bytes

Question 13: Turing Machine for Palindrome

Design a TM to check if a string is a palindrome.

Logic:

1. Read first symbol, move to end, check match with last symbol.
2. If match: mark both symbols (e.g., replace with X), return to start, repeat.

3. If mismatch or unmatched symbols left: reject.

Would you like a full state diagram or transition table?

Question 14: TM for equal 'a' and 'b' counts

Language: strings over {a, b} where number of a's = b's

Logic:

Find an `a`, replace with X, scan for `b`, replace with Y.


Repeat until no a/b left.

If unmatched a or b remains ⇒ Reject.

Again, would you like this in full TM format?

Question 15: TM to compute Fibonacci(n)

Create a TM to compute Fibonacci sequence given `n`.

Input:

`n` in unary (e.g., 3 = `|||`)

Output:

Fibonacci(n) in unary

Basic Idea:

Simulate a recursive Fibonacci function:

`fib(n) = fib(n-1) + fib(n-2)


`

Store intermediate results

Use multiple tracks or symbols to remember partial sums

Note: This TM will be very complex; often converted into 3-tape TM for efficiency.

Would you like me to write transition functions or draw Turing diagrams for Questions 13–15?
Let me know which parts you'd like expanded!

yes

Perfect! Let's go through Questions 13, 14, and 15 with transition functions and logic for the Turing
Machines (TMs) in detail.

✅ Question 13: Turing Machine to Determine if a String is a


Palindrome
Input Alphabet: {a, b}

Tape Alphabet: {a, b, X, Y, ␣}

We use `X` and `Y` to mark visited `a` and `b`.


Blank symbol = ␣

High-level Idea:

1. Match the first symbol with the last.

2. If both are same, mark them (e.g., X/Y).

3. Move inward and repeat.


4. If at the end all characters are matched, accept.

States:

`q0`: Start state – read the first unmarked symbol.


`q1`: Move to the right end of the tape to find matching symbol.

`q2`: Move back to the left.

`q_accept`, `q_reject`: Halting states.

Transition Functions (Partial):

| Current State | Read Symbol | Write | Move | Next State |
|---------------|-------------|-------|------|------------|
| `q0` | `a` | `X` | R | `q1a` |
| `q0` | `b` | `Y` | R | `q1b` |
| `q0` | `X`/`Y` | Same | R | `q0` |
| `q0` | ␣ | ␣ | R | `q_accept` |
| `q1a` | `a`/`b` | Same | R | `q1a` |
| `q1a` | `␣` | ␣ | L | `q2a` |
| `q2a` | `a` | `X` | L | `q3` |
| `q2a` | `b` | `b` | - | `q_reject` |
| `q3` | `X`/`Y` | Same | L | `q3` |
| `q3` | ␣ | ␣ | R | `q0` |

(Similarly for `q1b`, `q2b`…)

This TM checks palindromes by marking and verifying matching symbols from both ends.
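
The marking strategy is easiest to see as ordinary code: repeatedly compare the outermost unmarked symbols and mark them. The C sketch below simulates that behaviour on a character buffer; it illustrates the logic only and is not the Turing machine itself.

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Simulates the TM strategy: compare the outermost unmarked symbols,
   mark them, and move inward until everything is matched. */
bool is_palindrome(const char *s) {
    char tape[64];
    strcpy(tape, s);
    int left = 0, right = (int)strlen(tape) - 1;

    while (left < right) {
        if (tape[left] != tape[right])
            return false;          /* mismatch -> q_reject */
        tape[left] = 'X';          /* mark both ends, like the TM does */
        tape[right] = 'X';
        left++;                    /* move inward and repeat */
        right--;
    }
    return true;                   /* everything matched -> q_accept */
}

int main(void) {
    printf("%d\n", is_palindrome("abba"));   /* 1 */
    printf("%d\n", is_palindrome("abab"));   /* 0 */
    return 0;
}
```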

✅ Question 14: TM to Recognize Equal Number of `a` and `b`


Input Alphabet: {a, b}

Tape Symbols: {a, b, X, Y, ␣}

Logic:

1. Find an `a`, mark it (`X`), and search for a `b` to match → mark it (`Y`).
2. Repeat until no unmarked `a` or `b` remains.

3. If unpaired symbols remain → reject.

Transition Overview:

State Read Write Move Next

`q0` `a` `X` R `q1`


`q0` `b` `Y` R `q2`

`q0` ␣ ␣ - `q_accept`

`q1` `a`/`X`/`Y` Same R `q1`


`q1` `b` `Y` L `q3`

`q1` ␣ ␣ - `q_reject`
`q2` `b`/`Y`/`X` Same R `q2`

`q2` `a` `X` L `q3`

`q2` ␣ ␣ - `q_reject`
`q3` `X`/`Y`/`a`/`b` Same L `q3`

`q3` ␣ ␣ R `q0`

This TM loops until all symbols are paired.

✅ Question 15: TM to Compute Fibonacci(n)
Input: `n` in unary (e.g., 3 = `|||`)

Output: Fibonacci(n) in unary (e.g., Fib(3) = 2 → `||`)

Use: Multi-track or symbols to simulate variables.

Logic Overview:

We simulate:

python

`fib(0) = 0
fib(1) = 1
fib(n) = fib(n-1) + fib(n-2)
`

Tape Layout (simplified):

Use multiple tape sections (or multiple tracks if available):

css

`Tape: | Input (n) | fib(n-2) | fib(n-1) | Result |


`

Steps:

1. Count input `n` in unary.

2. Store `fib(0)` as 0, `fib(1)` as 1.

3. For each iteration up to `n`, compute `fib(i) = fib(i-1) + fib(i-2)`


4. Output result in unary.

Transition Flow (Conceptual TM):

Use 3 regions on the tape:

Input counter `n`

Fibonacci accumulators (fib(n-1), fib(n-2))

Result

This TM can be optimized using:

Counters using symbols like `|`, `*`

Additions via copying symbols

Iterations via state machine loops

Would you like:

Full transition table for Fibonacci (very long)?

Or a simplified version in pseudocode or 3-tape TM description?

Let me know how detailed you want each!

Uploaded image
Uploaded image
answer in details

Great! Let’s break down and answer questions 16 to 25 in detail, one by one:

✅ Q16: PDA for Balanced Parentheses


Language: Strings with balanced `(` and `)`.

PDA Design (Conceptual):

Stack Alphabet: { `(` }


Logic:

On `(` → push `(` onto stack.

On `)` → pop from stack (if not empty).


Accept if stack is empty at end.

Transitions:

δ(q, `(`, Z) → (q, `(`Z)

δ(q, `(`, `(`) → (q, `((`)

δ(q, `)`, `(`) → (q, ε)


Accept if input ends and stack is empty.
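
The same push/pop logic as a C function; since the PDA only ever pushes the single symbol `(`, a depth counter is an equivalent stand-in for its stack.

```c
#include <stdio.h>
#include <stdbool.h>

/* Accepts a string of '(' and ')' iff the parentheses are balanced. */
bool balanced(const char *s) {
    int depth = 0;                 /* number of '(' currently on the stack */
    for (; *s; s++) {
        if (*s == '(')
            depth++;               /* push */
        else if (*s == ')') {
            if (depth == 0)        /* pop from an empty stack -> reject */
                return false;
            depth--;               /* pop */
        }
    }
    return depth == 0;             /* accept only if the stack is empty */
}

int main(void) {
    printf("%d\n", balanced("(()())"));  /* 1 */
    printf("%d\n", balanced("(()"));     /* 0 */
    return 0;
}
```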

✅ Q17: TM for Unary Addition (1s only)


Input: `1^n + 1^m` (e.g., `111+11`)
Assume format: `111B111` where `B` is a separator.

Logic:

1. Erase the `B` and combine the strings.

2. Final result is unary of length `n + m`.

Transition Highlights:

q0: Read 1 → move right until B.

On B → overwrite B with 1 → move right to end of second group.


Halt at end with full string of 1s.

Example:

Input: `111B11`

Output: `11111`

✅ Q18: Write-through Cache


Given:

Cache access time: 100ns

Main memory access time: 500ns


Hit ratio = 90%

Read: 70%, Write: 30%

(a) Avg Read Time:

For read-through:

Hit: 100 ns

Miss: 100 + 500 = 600 ns


AMAT = 0.9 × 100 + 0.1 × 600 = 90 + 60 = 150 ns

(b) Avg Write Time:

Write-through = write to both cache and memory.

Hit or Miss → always write to both. AMAT = 100 (cache) + 500 (main) = 600 ns

(c) Overall Avg Access Time:

= 0.7 × 150 + 0.3 × 600 = 105 + 180 = 285ns

(d) Effective Hit Ratio:

Since write always goes to main memory → only read hits count.
Effective Hit Ratio = 0.7 × 0.9 = 0.63
✅ Q19: CPU Access Time (200 Accesses)
Hits = 160, Misses = 40

Cache access time = 10 ns


Main memory = 100 ns

Avg Access Time:

Total Time = (160 × 10) + (40 × (10 + 100)) = 1600 + 4400 = 6000ns
Average = 6000/200 = 30ns

✅ Q20: DRAM Timing


Given:

DRAM = 256M × 4-bit = 1 Gbit of storage


Rows = 2¹⁴ = 16384 rows
Refresh = every 2 ms

Each row takes 50 ns to refresh

Total refresh time:

16384 × 50ns = 819200ns = 0.8192ms

Available time:

= 2ms–0.8192ms = 1.1808ms

Percentage for read/write:

(1.1808 / 2) × 100 ≈ 59%

✅ Q21: Interrupt Handling Process


Steps in Interrupt Handling:

1. Interrupt occurs
2. CPU finishes current instruction

3. Disable further interrupts


4. Save current state (PC, registers)
5. Identify interrupt source (vectored or polling)

6. Call appropriate ISR (Interrupt Service Routine)


7. Execute ISR

8. Restore saved state
9. Resume execution

Roles of ISR:

Handle hardware events (I/O, timer, etc.)


Maintain efficiency and responsiveness

Must be short and fast

✅ Q22: FSM for Pattern Matching


(a) String contains `101`

Design a DFA with states:

q0 → start
q1 → seen `1`
q2 → seen `10`

q3 → accept on `101`

(b) For `00101`:

More states to account for prefix and full match.
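
One way to complete the machine from (a) as a table-driven DFA in C; state 3 is the accepting "seen 101" state, and once reached the machine stays there.

```c
#include <stdio.h>

/* next_state[state][bit]: DFA that accepts binary strings containing "101". */
int next_state[4][2] = {
    /* on '0', on '1' */
    { 0, 1 },   /* q0: nothing useful seen yet      */
    { 2, 1 },   /* q1: last symbol was '1'          */
    { 0, 3 },   /* q2: last two symbols were "10"   */
    { 3, 3 },   /* q3: "101" seen -> stay accepting */
};

int contains_101(const char *s) {
    int q = 0;
    for (; *s; s++)
        q = next_state[q][*s - '0'];
    return q == 3;
}

int main(void) {
    printf("%d\n", contains_101("00101"));  /* 1 */
    printf("%d\n", contains_101("1100"));   /* 0 */
    return 0;
}
```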

✅ Q23: SQL Queries


Given Tables:
`Employees(emp_id, emp_name, department_id, salary)`
`Departments(department_id, department_name, location)`
`Projects(project_id, project_name, emp_id, start_date, end_date)`

(a) Employees on >1 project:

sql

`SELECT e.emp_name, d.department_name


FROM Employees e
JOIN Projects p ON e.emp_id = p.emp_id
JOIN Departments d ON e.department_id = d.department_id
GROUP BY e.emp_id, e.emp_name, d.department_name
HAVING COUNT(p.project_id) > 1;
`

(b) Salary > department average:

sql

`SELECT e.emp_name
FROM Employees e
JOIN (
SELECT department_id, AVG(salary) AS avg_salary
FROM Employees
GROUP BY department_id
) dept_avg ON e.department_id = dept_avg.department_id
WHERE e.salary > dept_avg.avg_salary;
`

✅ Q24: Top 5 Customers in Last 6 Months


sql

`SELECT customer_id, SUM(amount) AS total_spent


FROM Sales
WHERE sale_date >= CURRENT_DATE - INTERVAL '6 months'
GROUP BY customer_id
ORDER BY total_spent DESC
LIMIT 5;
`

Efficiency Tips:

Use index on `sale_date` and `customer_id`


Use materialized views if queried often

✅ Q25: Architecture of 8085 Microprocessor


You can draw the block diagram with:

ALU
Registers (A, B, C, D, E, H, L)

Instruction Register & Decoder


Program Counter (PC)
Stack Pointer (SP)

Control Unit
Address/Data bus

Let me know if you’d like a diagram or explanation of each block!

Let me know which questions you'd like visual diagrams, FSMs, or Turing machines drawn for!

26)Discuss in detail the design and working of a RISC processor.


27)Analyze different types of memory mapping techniques.
28)Demonstrate with a transition diagram that if length of a string exceeds some specific value, then pumping occurs.
On which characteristic of the machine does this value depend? Does pumping occur even
otherwise? Demonstrate.
29)Why BCNF is stricter than 3NF in RDBMS? Justify your answer.
30)Show with an example that Canonical cover of a set of Functional Dependencies may not be
unique.
31)Relation R has eight attributes (ABCDEFGH); all fields of R contain only atomic values. F = {CH -> G, A
-> BC, B -> CFH, E -> A, F -> EG} is a set of functional dependencies (FDs). How many candidate keys
does the relation R have (Need to show every steps) ?
32)Consider a relation scheme R = (A, B, C, D, E, H) on which the following functional dependencies
hold: {A–>B, BC–> D, E–>C, D–>A}. What are the candidate keys of R (Need to show every steps)?

Great! Let's answer questions 26 to 32 in detail:

✅ Q26: Design and Working of a RISC Processor


RISC (Reduced Instruction Set Computer):

Key Design Features:

Small, highly optimized instruction set.


Single-cycle instruction execution.
Load/store architecture: memory is accessed only via `load` and `store`.

Large number of registers.


Pipelining is easy to implement.

Working:

1. Fetch: Instruction is fetched from memory.


2. Decode: Instruction is decoded (simple formats).

3. Execute: Performed in ALU (Arithmetic/Logic Unit).


4. Memory Access: Only for `load/store` instructions.
5. Write Back: Result is written to a register.

Advantages:

Faster execution (fewer clock cycles).

Efficient pipelining.
Simpler control unit (hardwired).

✅ Q27: Types of Memory Mapping Techniques


1. Direct Mapping:

Each block of main memory maps to exactly one cache line.
Fast, but causes more collisions.

2. Fully Associative Mapping:

A block can go anywhere in the cache.


More flexible, but needs more hardware.

3. Set-Associative Mapping:

Compromise: N-way set associative → block maps to N locations.

Balance between performance and complexity.

Feature Direct Fully Assoc. Set-Assoc.


Flexibility Low High Medium

Speed High Low Medium


Hardware complexity Low High Medium

✅ Q28: Pumping Lemma & Transition Diagram


Statement:

If a language is regular, then any string `s` in the language of length ≥ p (pumping length) can be split
into `xyz` such that:

`|y| > 0`

`|xy| ≤ p`

`∀ i ≥ 0`, `xy^i z ∈ L`

Depends On:

p is based on the number of states in the finite automaton (DFA/NFA).

Diagram:

For string length > number of states, pigeonhole principle → a state must repeat → loop → pumping.

Example:

For L = {aⁿbⁿ | n ≥ 0} (not regular), pumping lemma fails.


But for L = {aⁿ | n ≥ 0}, pumping holds.

Does pumping occur otherwise?

Yes, for strings longer than p, repetition implies loop → pumping.

✅ Q29: Why BCNF is Stricter than 3NF
3NF Condition:

For FD `X → A`, it must satisfy one:

X is a superkey, or

A is a prime attribute.

BCNF Condition:

For every FD `X → A`, X must be a superkey.

Why stricter?

BCNF eliminates all redundancy due to FDs.


3NF may allow some redundancy if the RHS is a prime attribute.

Example:

Relation R(Student, Course, Teacher) with FDs: {Student, Course} → Teacher, Teacher → Course

Candidate keys: {Student, Course} and {Student, Teacher}

Teacher → Course: Teacher is not a superkey, but Course is a prime attribute → allowed in 3NF.

Since Teacher is not a superkey, the relation violates BCNF — so it is in 3NF but not in BCNF.

✅ Q30: Canonical Cover Not Unique


Canonical Cover:

Minimal set of FDs that implies all original FDs.

Example:

Let F = { A → BC, B → AC, C → AB }

Two different canonical covers of F:

Fc1 = { A → B, B → C, C → A }

Fc2 = { A → C, C → B, B → A }

Both are minimal, contain no extraneous attributes, and imply exactly the same closure as F, yet they are different sets of FDs.
Thus, the canonical cover of a set of FDs may not be unique.

✅ Q31: Relation R(ABCDEFGH), FDs = {CH → G, A → BC, B → CFH, E → A, F → EG}


Step 1: D must be part of every candidate key

D never appears on the right-hand side of any FD, so no attribute set can derive D. Hence every candidate key contains D.

Step 2: Attribute closures

A+ : A → BC, B → CFH, F → EG ⇒ A+ = {A, B, C, F, H, E, G} (all but D) ⇒ AD is a candidate key

B+ : B → CFH, F → EG, E → A ⇒ B+ = {B, C, F, H, E, G, A} (all but D) ⇒ BD is a candidate key

E+ : E → A, A → BC, B → CFH, F → EG ⇒ E+ = {E, A, B, C, F, H, G} (all but D) ⇒ ED is a candidate key

F+ : F → EG, E → A, A → BC, B → CFH ⇒ F+ = {F, E, G, A, B, C, H} (all but D) ⇒ FD is a candidate key

C+, G+, H+ and even CH+ = {C, H, G} do not reach the remaining attributes, so every key must contain one of A, B, E, F.

Step 3: Minimality

Each of AD, BD, ED, FD is minimal: D alone gives D+ = {D}, and A, B, E, F alone never derive D.


✅ Candidate Keys: AD, BD, ED, FD — relation R has 4 candidate keys.

✅ Q32: R = (A, B, C, D, E, H), FDs = {A→B, BC→D, E→C, D→A}


Step 1: Attributes that must appear in every key

E and H never appear on the right-hand side of any FD, so every candidate key must contain both E and H.

EH+ : E → C ⇒ EH+ = {E, H, C} — not all attributes, so EH alone is not a key.

Step 2: Add one more attribute to {E, H}

AEH+ : A → B, E → C, BC → D ⇒ {A, E, H, B, C, D} = all attributes ⇒ AEH is a candidate key

BEH+ : E → C, BC → D, D → A ⇒ {B, E, H, C, D, A} = all attributes ⇒ BEH is a candidate key

DEH+ : D → A, A → B, E → C ⇒ {D, E, H, A, B, C} = all attributes ⇒ DEH is a candidate key

CEH+ = {C, E, H} — not a key (C adds nothing new beyond E → C)

Step 3: Minimality

Each of AEH, BEH, DEH is minimal, because no two-attribute subset (EH, AE, AH, BE, BH, DE, DH) reaches all six attributes.


✅ Candidate Keys of R: AEH, BEH, DEH
Let me know if you'd like diagrams, closures, or SQL examples for any specific question!

Normalization and Functional Dependencies

answer in details

Here are the detailed answers to questions 33 to 39 based on the image:

33. Normal Form of Table R

Given:

Functional Dependencies:
F1 → F3

F2 → F4
(F1, F2) → F5

Step 1: Identify Candidate Keys

To determine the candidate key, check which attributes determine all others.
F1 and F2 together determine F3, F4, F5.

Thus, (F1, F2) is the candidate key.


Step 2: Check Normal Forms

1NF: No multivalued or repeating groups — assumed true.
2NF: No partial dependency on a part of the primary key.

F1 → F3 and F2 → F4 are partial dependencies (F1, F2 is the key), so not in 2NF.


So, it's only in 1NF.

Answer:
The highest normal form is 1NF because there are partial dependencies on the primary key (F1, F2),
which violates the condition of 2NF.

34. Highest Normal Form of Student_Performance Relation

Given:

Functional Dependencies:

name, courseNo → grade

rollNo, courseNo → grade


name → rollNo

rollNo → name

Step 1: Attribute Analysis

name ↔ rollNo indicates a one-to-one relationship.

Either `name` or `rollNo` can represent the student.

Candidate Key:

Since both name and rollNo are interchangeable:


{rollNo, courseNo} or {name, courseNo} are candidate keys.

Check Normal Forms:

1NF: Satisfied.

2NF: No partial dependencies — each non-prime attribute (grade) is fully functionally dependent
on the full key.

3NF: Check every FD whose right-hand side is a non-prime attribute.

The only non-prime attribute is grade, and grade is determined only by whole candidate keys ({rollNo, courseNo} or {name, courseNo}), so there is no transitive dependency on a non-prime attribute.

name → rollNo and rollNo → name have prime attributes on the right-hand side, which 3NF permits (though name and rollNo are not superkeys, so BCNF is violated).

Hence, the relation is in 3NF, and 3NF is the highest normal form it satisfies.
Answer:
The relation is in 3NF. All non-prime attributes are fully functionally dependent on the candidate key,
and no transitive dependencies exist.

35. Define and Differentiate Keys with Examples


| Key Type | Definition | Example |
|----------|------------|---------|
| Primary Key | Uniquely identifies each record in a table. Cannot be NULL. | `Student(rollNo PRIMARY KEY, name)` |
| Candidate Key | Minimal set of attributes that can uniquely identify a tuple. | `rollNo` and `studentID` can both be candidate keys |
| Super Key | Set of attributes that can uniquely identify a tuple, not necessarily minimal. | `{rollNo, name}` |
| Foreign Key | Attribute in one table that refers to the primary key in another table. | `Enrollment(studentID FOREIGN KEY REFERENCES Student(rollNo))` |
| Unique Key | Ensures all values in the column are different; allows NULL (once). | `email` in `User(email UNIQUE)` |

36. Algorithm to Find Closure of Attribute Set

plaintext

`Input: Set of attributes α, set of functional dependencies F


Output: Closure of α: α+

1. α+ := α
2. repeat
3. for each functional dependency X → Y in F do
4. if X ⊆ α+ then
5. α+ := α+ ∪ Y
6. end if
7. until no change in α+
8. return α+
`

Example:
F = { A → B, B → C }
Find closure of A:

A+ = {A}

A → B → add B

B → C → add C
A+ = {A, B, C}
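
The same algorithm in C, with attributes encoded as bits of an integer (A = bit 0, B = bit 1, …); the `FD` struct and the encoding are illustrative.

```c
#include <stdio.h>

typedef struct { unsigned lhs, rhs; } FD;   /* bitmasks of attributes: A=1, B=2, C=4, ... */

/* Returns the closure of `attrs` under the functional dependencies fds[0..n-1]. */
unsigned closure(unsigned attrs, const FD fds[], int n) {
    unsigned result = attrs;
    int changed = 1;
    while (changed) {                       /* repeat until no FD adds anything new */
        changed = 0;
        for (int i = 0; i < n; i++) {
            if ((fds[i].lhs & result) == fds[i].lhs &&   /* X ⊆ result           */
                (fds[i].rhs & ~result) != 0) {           /* Y adds something new */
                result |= fds[i].rhs;
                changed = 1;
            }
        }
    }
    return result;
}

int main(void) {
    enum { A = 1, B = 2, C = 4 };
    FD fds[] = { { A, B }, { B, C } };      /* F = { A -> B, B -> C } */
    unsigned aplus = closure(A, fds, 2);
    printf("A+ = 0x%X (expected A|B|C = 0x7)\n", aplus);
    return 0;
}
```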

37. Need/Application of Canonical Cover in RDBMS

Canonical Cover:
A minimal set of functional dependencies that preserves the original dependencies without redundancy.
Applications:

Minimizing redundancy in FDs.


Used in schema decomposition to avoid anomalies.

Helps simplify query optimization and normalization.


Reduces computational complexity in closure and checking equivalence.

38. Dense vs Sparse Index

Dense Index:

Has an index record for every search-key value in the data file.
Faster for searching but takes more space.

Sparse Index:

Has index records for some search-key values.


Less space but slightly slower search.

When to use Dense Index:

When the file is unsorted or requires fast access.

For small data or fields frequently queried.


Example: Primary index on unsorted table, or secondary indexes.

Answer:
Dense index is preferable when fast retrieval is required and storage overhead is not a major concern.

39. Compare I/O Communication Techniques

| Technique | Description | Pros | Cons |
|-----------|-------------|------|------|
| Programmed I/O | CPU actively waits and checks device status | Simple to implement | CPU wastage due to busy-waiting |
| Interrupt-driven I/O | Device sends interrupt when ready | Efficient CPU usage | More complex handling |
| DMA (Direct Memory Access) | Data is transferred directly between device and memory | Very efficient for bulk transfer | Hardware complexity and cost |

Comparison:

Programmed I/O: Suitable for simple, slow devices.


Interrupt-driven: Balanced for multi-tasking systems.

DMA: Best for high-speed data transfer with minimal CPU usage.

Let me know if you want these as a formatted document or need explanations in simpler terms!

Here are detailed answers for questions 40 to 50 from your uploaded image:

40. Types of Instruction Formats with Examples

Instruction Formats define how instructions are represented in binary. Types include:

1. Zero Address (Stack-based): No operand field.


Example: `ADD` (pop two operands from stack, push result)
2. One Address: One operand in instruction, other in accumulator.
Example: `ADD X` (AC ← AC + M[X])

3. Two Address: One operand is both source and destination.


Example: `ADD R1, R2` (R1 ← R1 + R2)
4. Three Address: Specifies two source operands and one destination.
Example: `ADD R1, R2, R3` (R1 ← R2 + R3)
5. Register and Immediate formats:
Example: `MOV R1, #10` (R1 ← 10)

41. Pipeline Processing & Methods to Improve Performance

Pipelining: Technique where multiple instruction phases are overlapped.


Example: 5-stage pipeline – IF, ID, EX, MEM, WB.

Performance Improvement Methods:

Instruction Pre-fetching

Branch Prediction
Out-of-Order Execution
Hazard Detection and Resolution

Super-scalar Architecture
Using buffers/register forwarding

42. Paging and Segmentation in Virtual Memory

Paging:

Divides memory into fixed-size pages (logical) and frames (physical).

Address: Page number + offset.

Segmentation:

Divides memory into variable-sized segments (code, data, stack).


Address: Segment number + offset.

Advantages:

Paging: No external fragmentation.

Segmentation: Logical view of memory, easier protection.

Disadvantages:

Paging: Internal fragmentation.

Segmentation: External fragmentation, complex allocation.

43. RAID Levels with Case Study

RAID (Redundant Array of Independent Disks):

Combines multiple disks to improve performance/fault tolerance.

Common Levels:

RAID 0: Striping (speed, no redundancy)


RAID 1: Mirroring (high reliability)

RAID 5: Block-level striping with parity (balance of speed & safety)

RAID 6: Dual parity

Case Study:

A data center using RAID 5 for 5 disks. If one fails, data is rebuilt using parity.

44. Multiprocessor System Working & Advantages

Working:

Multiple CPUs share memory and tasks.

Operate in parallel, increasing throughput.

Advantages:

Increased performance

Reliability (fault tolerance)

Resource sharing
Better throughput for multitasking systems

Analysis:

SMP (Symmetric multiprocessing)

Shared or distributed memory architecture

45. Memory Module Design

Given:

Total = 64 KB = 64 * 1024 = 65536 Bytes

Chip size = 8 KB = 8192 Bytes

Number of Chips = 65536 / 8192 = 8 Chips

Memory Organization:

Arrange 8 chips in parallel (1 chip for each 8 KB block).

Use address lines A0 to A12 for addressing within a chip (13 bits = 8KB)
Use additional 3 bits (A13 to A15) for chip selection.

46. Cache Design Question

Given:

Cache Size = 16 KB = 16384 Bytes

Block Size = 64 Bytes

Set-Associative: 4-way

(a) Number of blocks = 16384 / 64 = 256 blocks


Number of sets = 256 / 4 = 64 sets

(b) For 32-bit address:

Offset = log₂(64) = 6 bits


Index = log₂(64) = 6 bits

Tag = 32 - 6 - 6 = 20 bits

47. CPI and Execution Time

Given:

Clock = 2 ns

1 million instructions = 10⁶

CPI Calculation: = (5×0.3 + 4×0.2 + 2×0.4 + 3×0.1)


= (1.5 + 0.8 + 0.8 + 0.3) = 3.4

CPU Time = Instruction count × CPI × Cycle time


= 10⁶ × 3.4 × 2 ns = 6.8 ms

48. Secondary Storage & Pipeline Execution

Secondary Storage:

Non-volatile storage (e.g., HDD, SSD)

Stores data permanently, unlike RAM.

Floating-point pipeline:

4 processors
Each takes 40 ns

Time for 400 operations (ideal case):

Pipeline latency = 4 × 40 = 160 ns

After filling, 1 result every 40 ns

Total = 160 + (400 - 1) × 40 = 16,120 ns = 16.12 µs
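
The pipelined timings used here and in the next two questions all follow T = (k + n − 1) × cycle time for k stages and n operations, assuming an ideal pipeline with no stalls; a small C helper makes that explicit.

```c
#include <stdio.h>

/* Time (ns) for n operations on a k-stage pipeline with the given cycle time,
   assuming no stalls: fill the pipeline once, then one result per cycle. */
long pipeline_time_ns(long k, long n, long cycle_ns) {
    return (k + n - 1) * cycle_ns;
}

int main(void) {
    printf("%ld ns\n", pipeline_time_ns(4, 400, 40));  /* 16120 ns (Q48) */
    printf("%ld ns\n", pipeline_time_ns(4, 200, 50));  /* 10150 ns (Q49) */
    printf("%ld ns\n", pipeline_time_ns(4, 200, 10));  /*  2030 ns (Q50) */
    return 0;
}
```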

49. Program Counter (PC) & Execution Time

Program Counter (PC):

Holds address of next instruction to be executed.


Automatically increments (or updated by branch/jump).

4 processors, 50 ns cycle time, 200 ops:

Latency = 4 × 50 = 200 ns

Time = 200 + (200 - 1) × 50 = 10,150 ns = 10.15 µs

50. What is EPROM?

EPROM (Erasable Programmable Read-Only Memory):

Non-volatile memory

Data retained when power is off


Can be erased with UV light and reprogrammed

Use in processors:

Store firmware or system boot code

Can be updated with new data if needed

Let me know if you want all this compiled into a PDF/Word document or need a simplified version!

answer in details

Here are detailed answers to questions 50–60 based on your images:

50. What is EPROM? How long for 200 operations on 4 processors with 10ns cycle
time?

EPROM (Erasable Programmable Read-Only Memory):

Non-volatile memory.
Data can be erased using UV light and reprogrammed.
Used for storing firmware.

Pipeline Execution Time:

4-stage pipeline

Cycle time = 10ns


Latency = 4 × 10 = 40 ns
Time for 200 ops = 40 + (200 - 1) × 10 = 2030 ns = 2.03 µs

51. Pre-order to Post-order Traversal

Pre-order: 30, 20, 10, 15, 25, 23, 39, 35, 42


Step 1: Construct BST from Pre-order

markdown

` 30
/ \
20 39
/ \ / \
10 25 35 42
\ /
15 23
`

Post-order traversal:
15, 10, 23, 25, 20, 35, 42, 39, 30
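
A short C sketch that rebuilds the BST by inserting the keys in the given pre-order (which reproduces the same tree) and then prints the post-order traversal, confirming the answer above; the helper names are illustrative.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct Node { int key; struct Node *left, *right; } Node;

Node *insert(Node *root, int key) {             /* ordinary BST insertion */
    if (root == NULL) {
        Node *n = malloc(sizeof(Node));
        n->key = key; n->left = n->right = NULL;
        return n;
    }
    if (key < root->key) root->left  = insert(root->left, key);
    else                 root->right = insert(root->right, key);
    return root;
}

void postorder(const Node *root) {              /* left, right, root */
    if (root == NULL) return;
    postorder(root->left);
    postorder(root->right);
    printf("%d ", root->key);
}

int main(void) {
    int pre[] = {30, 20, 10, 15, 25, 23, 39, 35, 42};
    Node *root = NULL;
    for (int i = 0; i < 9; i++)
        root = insert(root, pre[i]);   /* inserting in pre-order rebuilds the same BST */
    postorder(root);                   /* prints: 15 10 23 25 20 35 42 39 30 */
    printf("\n");
    return 0;
}
```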

52. Prove: Sum of Degrees = 2 × Number of Edges

Proof:

Each edge connects 2 vertices.

So each edge contributes 2 to the total degree count.


Let E = number of edges.
Then: ∑ deg(v) over all vertices = 2E

Example:

mathematica

`Graph:
A—B—C

Edges = 2
Deg(A)=1, Deg(B)=2, Deg(C)=1
Sum = 1+2+1 = 4 = 2×2
`

53. Post-order to Pre-order Traversal

Post-order: 3, 5, 7, 9, 4, 17, 16, 20, 18, 15, 14


Step 1: Construct BST (Reverse of post-order gives root first, then right subtree, etc.)

Resulting Pre-order (after building the BST): 14, 4, 3, 9, 7, 5, 15, 18, 16, 17, 20

54. Finite Automata for String Search

(a) Design FA to detect substring “101”

States: S0 (start), S1 (after 1), S2 (after 10), S3 (accept 101)


Transitions:

S0 —1→ S1
S1 —0→ S2
S2 —1→ S3
Design for 00101: Add more states accordingly.
(b) Algorithm to build FA from substring:

1. Build prefix function (like KMP).

2. States = prefix length.


3. Transitions based on matching prefix and characters.

55. NFA to DFA for strings ending in "01"

NFA Design:

States: Start, seen ‘0’, seen ‘01’


Accept if final state reached with last two symbols = 0→1

DFA Conversion:

Build full table of transitions from every possible state on input {0,1}

56. (a) FA for strings with at least two 0s

States: q0 (0s=0), q1 (0s=1), q2 (0s≥2)


Transitions:

q0 —0→ q1, q1 —0→ q2


q0 —1→ q0, q1 —1→ q1, q2 —1/0→ q2
Accepting state = q2

(b) FA that does NOT end in 01:

Track last 2 bits

Reject if last = “01”

57. (a) FA for even number of 0s and 1s

4 states for tracking:


q00 (even 0s, even 1s)
q01 (even 0s, odd 1s)

q10 (odd 0s, even 1s)


q11 (odd 0s, odd 1s)

(b) Accept strings matching aⁿbⁿ (n ≥ 0): This is a Context-Free Language, not regular, so FA cannot
recognize it.

58. (a) DFA for numbers divisible by 3 (binary)

Use modulo-3 tracking states:
q0: remainder 0 (accepting)
q1: remainder 1

q2: remainder 2
Transitions based on binary input (0/1)
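
The modulo-3 DFA in C: the state is just the remainder so far, updated as r = (2r + bit) mod 3 when the number is read most-significant bit first.

```c
#include <stdio.h>

/* Accepts a binary string (MSB first) iff the number it encodes is divisible by 3. */
int divisible_by_3(const char *bits) {
    int r = 0;                         /* state q0/q1/q2 = remainder 0/1/2 */
    for (; *bits; bits++)
        r = (2 * r + (*bits - '0')) % 3;
    return r == 0;                     /* q0 is the only accepting state */
}

int main(void) {
    printf("%d\n", divisible_by_3("110"));   /* 6 -> 1 */
    printf("%d\n", divisible_by_3("101"));   /* 5 -> 0 */
    printf("%d\n", divisible_by_3("1001"));  /* 9 -> 1 */
    return 0;
}
```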

(b) Draw NFA for:


(0+1)* (010(0+1)*101 + 101(0+1)010)(0+1)

Use union and concatenation of sub-NFAs

59. Reversal Closure in FA

Reversal Closure Proof:

For each FA M accepting L, construct FA M' that:


Reverses all transitions

Makes the accepting states the new start (adding one new start state with ε-transitions to all of them if there are several)

Makes the original start state the only accepting state

The result is an NFA for Lᴿ, which can be converted to an equivalent DFA.

Hence, FA is closed under reversal.

60. Intersection Closure of FA

(a) Given M1 accepts x, M2 accepts y:


Build FA that accepts x#y

Combine two FAs, add symbol ‘#’ transition

(b) Intersection Construction:

For M1 = (Q1, δ1, q01, F1), M2 = (Q2, δ2, q02, F2)


Construct M = (Q1 × Q2, δ, (q01,q02), F1×F2)

δ((q1,q2), a) = (δ1(q1,a), δ2(q2,a))


Hence, FA is closed under intersection.

Let me know if you want diagrams or automata drawn for any question, or a Word/PDF version
compiled!
