Os Cycle Test 2 Answer Key

The document provides an answer key for a Cycle Test on Operating Systems for the academic year 2024-2025. It includes questions on various topics such as user and kernel threads, race conditions, deadlock avoidance methods, monitors, address binding, and memory management techniques like partitioned allocation and paging. Additionally, it contains pseudocode for semaphore operations and explanations of logical to physical address translation using segmentation.


DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Academic year 2024-2025 Even Semester

ANSWER KEY

Course Code : 22CAPC402                         Cycle Test: II

Course Name : OPERATING SYSTEMS                 Year/Sem: II/IV

PART-A (10 × 2 = 20)

ANSWER ALL THE QUESTIONS

1. Distinguish between user threads and kernel threads.

User-level threads are unknown to the kernel, whereas the kernel is aware of kernel threads.
User threads are scheduled by the thread library, while kernel threads are scheduled by the kernel.
Kernel threads need not be associated with a process, whereas every user thread belongs to a process. If one user thread blocks, the kernel can schedule another thread in the application for execution.
2. What is a race condition?
A race condition occurs when several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place. To avoid a race condition, only one process at a time should be allowed to manipulate the shared variable.
3. List the methods for deadlock avoidance.
1. Banker's Algorithm (for resource types with multiple instances)
2. Resource-Allocation Graph (RAG) algorithm (for resource types with a single instance)
(The Wait-For Graph is used for deadlock detection rather than avoidance.)
4. Define monitors.
Monitors are a high-level synchronization construct used in concurrent programming
to manage access to shared resources in a way that prevents race conditions and ensures
mutual exclusion. A monitor provides a way to encapsulate shared data, operations, and
synchronization mechanisms, allowing processes to safely access and modify the shared data.

5.Narrate about the resource allocation graph with example.


A Resource Allocation Graph (RAG) is a graphical representation used to model the
allocation of resources to processes in a system. It's commonly used for deadlock detection
and avoidance, particularly in operating systems.
 Process Nodes
 Resource Nodes
 Assignment Edge
 Request Edge
6. What are the requirements that a solution to the critical section problem must satisfy?
 Mutual exclusion
 Progress
 Bounded waiting
7. Outline about Address binding.
Address binding is the process of mapping logical (or virtual) addresses used by a
program to physical addresses in computer memory. There are different stages and methods
of address binding, depending on whether the program is in the source code, compiled, or
executed.
 Compile-time binding: used when the memory location is known at compile time; the compiler generates absolute code.
 Load-time binding: used when the memory location is not known at compile time; the compiler generates relocatable code that is bound when the program is loaded.
 Execution-time binding (dynamic binding): binding is delayed until run time, so a process can be moved during execution; requires hardware support such as an MMU.
8. List two differences between logical and physical addresses
Logical Address (Virtual Address): A logical address is the address generated by the CPU
during program execution. It refers to the address in the program's view of memory, which is
independent of the actual physical memory location.
Physical Address: A physical address is the actual address in the computer's physical
memory (RAM). It corresponds to a location in the hardware memory that holds the data being
processed.
Logical Address: It is used by programs and is mapped to a physical address by the operating
system, typically through a mechanism like paging or segmentation, involving a Memory
Management Unit (MMU).
Physical Address: It is the actual location in memory, after the logical address has been
translated by the MMU. The physical address is what the hardware (RAM) directly accesses.

9. What is overlay?
Overlay is a memory management technique used in computing to allow programs
that are larger than the available physical memory to run on systems with limited memory
resources. It involves dividing a program into smaller, more manageable pieces called
overlays, and loading only the necessary parts of the program into memory at any given time.
This helps optimize memory usage by swapping pieces of the program in and out of memory
as needed.
10. What is the purpose of paging the page table?
The purpose of paging the page table is to handle large address spaces efficiently,
especially in systems with virtual memory. In systems that use paging for memory
management, paging the page table helps address some of the limitations and inefficiencies
related to large, fixed-size page tables.

PART-B (3 × 10 = 30)
ANSWER ANY THREE QUESTIONS

11. Show how the wait() and signal() semaphore operations could be implemented in a multiprocessor system using the set and test-and-set instructions. The solution should exhibit minimal busy waiting. Develop pseudocode for implementing the operations.

To implement the wait() and signal() semaphore operations in a multiprocessor system using set and test-and-set instructions, we need to design a solution that minimizes busy waiting. The test-and-set instruction is an atomic operation that reads a memory location and sets it to a specified value in a single atomic step. The set instruction simply sets a value at a memory location.

Goal:

 Minimize busy waiting, meaning a process should not continuously check the condition in a
tight loop.
 Ensure that the semaphore operations are atomic, meaning they can work correctly in a
multiprocessor system.

Semaphore Operations:

 wait(semaphore): If the semaphore is greater than 0, decrement it and continue; otherwise, block the process.
 signal(semaphore): Increment the semaphore and unblock a waiting process, if any.

Test-and-Set:

 The test-and-set operation performs a read-modify-write cycle atomically.
 If test-and-set is used to implement wait(), it can allow a process to check if the semaphore is available and atomically change its state.

Pseudocode for wait() and signal() Using Test-and-Set:

Global Variables:

 semaphore: Integer initialized to some value (the number of available resources or permits).
 lock: Binary semaphore (usually 0 or 1) used to ensure mutual exclusion for accessing the
semaphore.
Test-and-Set Instruction:

function test_and_set(variable):
    old_value = variable      // the read and the write below are executed
    variable = 1              // as a single atomic step by the hardware
    return old_value

wait(semaphore) Operation:

1. Acquire the lock: Use the test-and-set instruction to ensure mutual exclusion when
modifying the semaphore.
2. Check the semaphore value: If the semaphore is greater than 0, decrement it and release the
lock.
3. Block the process if necessary: If the semaphore is 0, the process should wait until it is
signaled by another process.

function wait(semaphore):
    while test_and_set(lock) == 1:
        // spin until the lock is acquired
    if semaphore > 0:
        semaphore = semaphore - 1
        lock = 0                       // release the lock
    else:
        add this process to the semaphore's waiting queue
        lock = 0                       // release the lock before sleeping
        block()                        // sleep until another process signals

signal(semaphore) Operation:

1. Acquire the lock: Again, use the test-and-set instruction to ensure mutual exclusion.
2. Increment the semaphore: When a process signals, it increments the semaphore.
3. Unblock a waiting process: If there are any processes waiting, unblock one of them (if
needed).

function signal(semaphore):
    while test_and_set(lock) == 1:
        // spin until the lock is acquired
    if the waiting queue is not empty:
        unblock_process()              // hand the permit directly to a waiter
    else:
        semaphore = semaphore + 1
    lock = 0                           // release the lock

Explanation of Key Concepts:

test_and_set(lock): This atomic operation ensures mutual exclusion. It checks and sets the lock variable in a single atomic step to avoid race conditions. If the lock is already set (i.e., 1), the process has to wait before it can proceed to the critical section.

Semaphore: The semaphore variable is shared between processes, so it must be protected using the lock. The semaphore is modified only in the critical section, which ensures correct synchronization.

Blocking and Unblocking: Processes that cannot proceed (because the semaphore is
0) are blocked and placed in a waiting queue. The blocking mechanism is dependent
on the specific system or operating environment.

Minimal Busy Waiting:

The wait() operation contains a busy-wait loop while attempting to acquire the lock.
However, once the lock is obtained, the semaphore value is checked and modified without
busy-waiting. If the semaphore is 0, the process is blocked (waiting on an event or signal),
which reduces busy waiting.

The signal() operation also contains a busy-wait loop for acquiring the lock but avoids busy
waiting during the semaphore modification. If there are blocked processes, one of them will
be unblocked, allowing for efficient resource utilization.

This approach minimizes busy waiting while ensuring synchronization in a multiprocessor system.
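The pseudocode above can be exercised in ordinary code. The following is a minimal Python sketch of our own: Python exposes no user-level test-and-set instruction, so a threading.Lock stands in for the hardware atomicity of test_and_set, and blocked "processes" are modelled with threading.Event rather than a real scheduler.

```python
import threading
from collections import deque

class TestAndSet:
    """Emulates the atomic test-and-set instruction; a Lock provides
    the atomicity that real hardware would guarantee in one step."""
    def __init__(self):
        self._flag = 0
        self._guard = threading.Lock()

    def test_and_set(self):
        with self._guard:              # read-modify-write as one step
            old = self._flag
            self._flag = 1
            return old

    def clear(self):                   # corresponds to lock = 0
        with self._guard:
            self._flag = 0

class Semaphore:
    def __init__(self, value):
        self.value = value
        self.lock = TestAndSet()       # protects value and the queue
        self.queue = deque()           # blocked "processes"

    def wait(self):
        while self.lock.test_and_set() == 1:
            pass                       # brief spin only to acquire the lock
        if self.value > 0:
            self.value -= 1
            self.lock.clear()
        else:
            ev = threading.Event()     # this caller will sleep, not spin
            self.queue.append(ev)
            self.lock.clear()          # release the lock before blocking
            ev.wait()                  # block() until signalled

    def signal(self):
        while self.lock.test_and_set() == 1:
            pass
        if self.queue:
            self.queue.popleft().set() # unblock one waiting process
        else:
            self.value += 1
        self.lock.clear()

# Demo: binary semaphore used as a mutex around a shared counter
sem = Semaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(1000):
        sem.wait()
        counter += 1
        sem.signal()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 4000: mutual exclusion preserved the increments
```

Note the design choice from the pseudocode: when signal() finds a waiter, it hands the permit directly to that process instead of incrementing the count, so a newly arriving process cannot steal it.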

12. Consider the following snapshot of a system, and answer the questions using the banker's algorithm:

            Allocation     Max          Available
            A B C D        A B C D      A B C D
    P1      0 0 1 2        0 0 1 2      1 5 2 0
    P2      1 0 0 0        1 7 5 0
    P3      1 3 5 4        2 3 5 6
    P4      0 6 3 2        0 6 5 2
    P5      0 0 1 4        0 6 5 6
(1) What is the content of the matrix Need?
(2) Is the system in a safe state?

Solution

(1) Need = Max − Allocation:

            Need
            A B C D
    P1      0 0 0 0
    P2      0 7 5 0
    P3      1 0 0 2
    P4      0 0 2 0
    P5      0 6 4 2

(2) Yes, the system is in a safe state. Starting with Work = Available = (1, 5, 2, 0):
    P1's need (0,0,0,0) ≤ Work, so P1 can finish; Work = (1, 5, 3, 2).
    P3's need (1,0,0,2) ≤ Work, so P3 can finish; Work = (2, 8, 8, 6).
    P4's need (0,0,2,0) ≤ Work, so P4 can finish; Work = (2, 14, 11, 8).
    P5's need (0,6,4,2) ≤ Work, so P5 can finish; Work = (2, 14, 12, 12).
    P2's need (0,7,5,0) ≤ Work, so P2 can finish.
Since the safe sequence <P1, P3, P4, P5, P2> exists, the system is in a safe state.
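Both parts of the question can be checked mechanically. The sketch below is a small Python implementation of the Need computation and the safety algorithm, loaded with the matrices from the question (variable and function names are our own):

```python
# Snapshot from the question: resources A, B, C, D; rows are P1..P5
allocation = [[0,0,1,2],[1,0,0,0],[1,3,5,4],[0,6,3,2],[0,0,1,4]]
maximum    = [[0,0,1,2],[1,7,5,0],[2,3,5,6],[0,6,5,2],[0,6,5,6]]
available  = [1,5,2,0]

# (1) Need = Max - Allocation, element by element
need = [[m - a for m, a in zip(mrow, arow)]
        for mrow, arow in zip(maximum, allocation)]

# (2) Safety algorithm: repeatedly pick a process whose need fits in Work
def find_safe_sequence(need, allocation, available):
    n = len(need)
    work = available[:]              # Work := Available
    finished = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Pi can run to completion and release its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(f"P{i+1}")
                progress = True
    return sequence if all(finished) else None   # None means unsafe

safe_sequence = find_safe_sequence(need, allocation, available)
print(need)
print(safe_sequence)   # a non-None result proves the state is safe
```

Running this yields Need rows (0,0,0,0), (0,7,5,0), (1,0,0,2), (0,0,2,0), (0,6,4,2) and the safe sequence P1, P3, P4, P5, P2.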


13. With a neat sketch, explain how a logical address is translated into a physical address using the segmentation mechanism.

Translation of Logical Address to Physical Address Using Segmentation

In a computer system, segmentation is a memory management scheme that divides a program's memory into different segments, such as code, data, and stack segments. Each segment has a logical address, which needs to be translated into a physical address to access the data in memory.

The segmentation mechanism uses a segment table to perform the translation of logical addresses (also called virtual addresses) into physical addresses. Each entry in the segment table contains a base address and a limit.

Logical Address:

A logical address in a segmented system consists of two parts:

1. Segment Number (S): Identifies which segment is being referred to (e.g., code, data, or
stack).
2. Offset (O): The offset within that segment, specifying the exact position in the segment.

Physical Address:

The physical address is the actual address in physical memory where the data is located.

Segmentation Mechanism Overview:

Segment Table: Contains the base address (starting point) and limit (length) for each
segment. The base address is the starting address in physical memory, while the limit
specifies the maximum size of the segment.

Logical Address: The logical address is split into two parts:

1. Segment Number (S): Refers to the segment in the segment table.
2. Offset (O): Refers to the position within the segment.

Steps to Translate Logical Address to Physical Address:

Extract the Segment Number (S): The segment number is used as an index into the
segment table to locate the corresponding segment descriptor.

Look Up the Base Address and Limit: Using the segment number, retrieve the base
address and limit for that segment from the segment table.

Check the Offset: Ensure that the offset (O) is within the segment's limit. If the offset
is greater than the limit, a segmentation fault occurs because the program is trying to
access memory outside its allowed range.

Calculate the Physical Address: The physical address is calculated by adding the
offset (O) to the base address of the segment:

    Physical Address = Base Address + Offset

The result gives the actual location in physical memory where the data resides.

Example:

Let’s consider a logical address represented as (S, O), where:

 S = 2 (Segment number)
 O = 1000 (Offset)

Assume the segment table contains the following information:

 Segment 2: Base address = 5000, Limit = 2000

Now, the translation process is as follows:

1. The segment number (S) is 2. Use it to look up the segment table.
2. The base address for segment 2 is 5000, and the limit is 2000.
3. The offset (O) is 1000, which is within the limit of 2000, so no error occurs.
4. The physical address is calculated: Physical Address = 5000 + 1000 = 6000.

Thus, the logical address (2, 1000) maps to the physical address 6000.
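The steps above can be sketched in a few lines of Python. Segment 2 uses the base and limit from the example; the other table entries and the function name are illustrative assumptions of our own:

```python
# Segment table: segment number -> (base, limit)
segment_table = {
    0: (1400, 1000),   # e.g. a code segment (illustrative values)
    1: (6300, 400),    # e.g. a stack segment (illustrative values)
    2: (5000, 2000),   # segment 2 from the worked example
}

def translate(segment, offset):
    """Translate a logical address (S, O) into a physical address."""
    base, limit = segment_table[segment]   # look up base and limit
    if offset >= limit:                    # the offset must lie inside the segment
        raise MemoryError("segmentation fault: offset exceeds segment limit")
    return base + offset                   # physical address = base + offset

print(translate(2, 1000))   # 6000, matching the worked example
```

An out-of-range access such as translate(2, 2500) raises an error instead of silently reading past the segment, which is exactly the limit check described in step 3.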

Neat Sketch for Segmentation Translation

[Sketch: the CPU issues a logical address (S, O); S indexes the segment table to fetch the segment's base and limit; the offset O is compared against the limit, and if it is within range, it is added to the base to form the physical address.]

Key Points:

 Segmentation divides memory into segments, which can have different sizes.
 The logical address consists of a segment number and an offset.
 The segment table maps the segment number to a base address and a limit.
 The offset is checked to ensure it's within the segment's boundaries.
 The final physical address is calculated by adding the offset to the base address of the
segment.

This system allows for more flexible memory allocation compared to paging and is
particularly useful for organizing programs into logical units like code, data, and stack.

14.Explain about given memory management techniques. (i) Partitioned allocation (ii)
Paging and translation look-aside buffer

(i) Partitioned Allocation

Partitioned allocation is a method of memory management where physical memory is divided into fixed-sized or variable-sized partitions, and processes are allocated to one of these partitions. The partitions can be of different types based on the size and allocation strategy.
Types of Partitioned Allocation:

Fixed Partitioning:

1. In fixed partitioning, the physical memory is divided into a set number of fixed-sized partitions. Each partition holds one process.
2. The partitions are created when the system starts, and the size of each partition is predetermined and fixed.
3. If a process is smaller than the partition size, the remaining space in the partition is wasted (internal fragmentation).
4. If a process is larger than every partition, it cannot be accommodated at all.

Example:

1. Suppose we have a system with 4 partitions, each 1 GB in size, and we have processes of 512 MB, 1 GB, and 2 GB. The process of 512 MB will occupy one partition, but there will be 512 MB of unused space (internal fragmentation).
2. The process of 2 GB will not fit into any of the partitions.

Dynamic Partitioning:

1. In dynamic partitioning, memory is divided dynamically at runtime, where each partition is allocated based on the size of the process.
2. This allows better utilization of memory compared to fixed partitioning, but it
leads to external fragmentation.
3. External fragmentation occurs when free memory is scattered into small
chunks, making it difficult to allocate memory even though there is enough
total free memory available.

Example:

1. If a process of 600 MB is allocated, the memory is partitioned dynamically to accommodate that process, and the remaining free space can be used for other processes.
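Dynamic partitioning needs a placement strategy to pick a hole for each process; first fit is the simplest. The Python sketch below is a simplified model of our own (the free-list contents are illustrative): it carves the example's 600 MB process out of a free list and shows the leftover hole that external fragmentation leaves behind.

```python
# Free list of (start, size) holes, in MB; figures are illustrative
free_list = [(0, 400), (1000, 1000), (2500, 300)]

def first_fit(free_list, size):
    """Allocate `size` MB from the first hole large enough (first fit)."""
    for i, (start, hole) in enumerate(free_list):
        if hole >= size:
            if hole == size:
                free_list.pop(i)                          # hole consumed exactly
            else:
                free_list[i] = (start + size, hole - size)  # shrink the hole
            return start
    return None   # external fragmentation: no single hole is big enough

addr = first_fit(free_list, 600)   # the 600 MB process from the example
print(addr)        # 1000: placed at the start of the 1000 MB hole
print(free_list)   # [(0, 400), (1600, 400), (2500, 300)] - leftover hole remains
```

Note that 1100 MB of memory is still free afterwards, yet a 500 MB request would succeed while an 800 MB request would fail: that gap between total free memory and the largest hole is external fragmentation.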

(ii) Paging

Paging is a memory management technique that avoids both internal and external
fragmentation by dividing the physical memory and logical memory into fixed-sized blocks
called pages (in the logical memory) and frames (in the physical memory).

Key Concepts:

 Page: A fixed-size block of logical memory (e.g., 4 KB).
 Frame: A fixed-size block of physical memory, which is the same size as a page.
 Page Table: A data structure that maps the logical page number to the physical frame
number. Each process has its own page table.
 Logical Address: Consists of a page number and an offset within the page.
 Physical Address: Consists of a frame number and the same offset.

Paging Process:

1. When a process needs memory, it is divided into pages.
2. Each page is mapped to a frame in physical memory. If a process requires more memory, additional pages are allocated to it in different physical frames.
3. The page table keeps track of where each page is stored in physical memory.
4. The logical address generated by the CPU is divided into two parts: the page number
and the offset.
5. The page number is used to index the page table, and the corresponding frame number
is obtained. The physical address is formed by combining the frame number and the
offset.

Example of Paging:

Let’s assume:

 The logical address consists of 10 bits for the page number and 12 bits for the offset (so the page size is 2^12 = 4 KB).
 The physical address is split into 14 bits for the frame number and 12 bits for the offset.

If the logical address is Page Number = 4 and Offset = 300, we would:

 Use the page table to find which frame corresponds to Page 4.
 Assume Page 4 maps to Frame 2.
 The physical address is formed by combining Frame 2 and the Offset (300).

Thus, the physical address is Frame Number × Page Size + Offset = 2 × 4096 + 300 = 8492.
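The split-and-map procedure can be sketched directly in Python. The page-table contents below are assumptions of our own, chosen so that page 4 maps to frame 2; with a 12-bit offset, page 4 / offset 300 translates to 2 × 4096 + 300 = 8492.

```python
PAGE_SIZE = 1 << 12        # 12-bit offset => 4 KB pages

# Page table for one process: page number -> frame number (assumed contents)
page_table = {0: 5, 1: 9, 4: 2}

def translate(logical_address):
    """Split a logical address into page number and offset, then map it."""
    page = logical_address >> 12                 # high bits: page number
    offset = logical_address & (PAGE_SIZE - 1)   # low 12 bits: offset
    frame = page_table[page]                     # page-table lookup
    return frame * PAGE_SIZE + offset            # frame number || offset

logical = 4 * PAGE_SIZE + 300   # page 4, offset 300
print(translate(logical))       # 8492
```

A lookup for a page absent from the table raises KeyError here, which plays the role of a page fault in a real system.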

Advantages of Paging:

 No External Fragmentation: Paging avoids the problem of external fragmentation because memory is allocated in fixed-size blocks (pages).
 Efficient Memory Utilization: It ensures that memory is allocated efficiently, even if
processes are of different sizes.

Disadvantages of Paging:

 Internal Fragmentation: Although paging eliminates external fragmentation, it may still cause internal fragmentation within the last page if a process doesn't perfectly fill the page.
 Overhead: The use of page tables introduces extra memory overhead and time spent
in translation.

Translation Look-Aside Buffer (TLB)

A Translation Look-Aside Buffer (TLB) is a hardware cache that stores recent translations
of virtual memory addresses to physical memory addresses in a paging system. The TLB
improves the speed of address translation by reducing the need to access the page table
frequently.

How TLB Works:

1. Address Translation: When the CPU generates a virtual address, it is divided into a
page number and an offset.
2. Check TLB: The page number is checked in the TLB to see if it has already been
mapped to a frame in physical memory.

o If the page number is found in the TLB (TLB hit), the corresponding frame
number is retrieved quickly, and the physical address is generated by
combining the frame number with the offset.
o If the page number is not found in the TLB (TLB miss), the page table is
accessed to get the frame number, which is then added to the TLB for future
use.

3. Replacement Policy: Since the TLB is a cache with limited entries, an appropriate
replacement policy (e.g., LRU (Least Recently Used)) is used when new entries need
to be cached.
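The hit, miss, and replacement behaviour described above can be modelled with a small LRU cache. The Python sketch below is our own illustration (class name, sizes, and the reference string are assumptions): a two-entry TLB falls back to a page-table walk on each miss and evicts the least recently used entry when full.

```python
from collections import OrderedDict

class TLB:
    """Tiny TLB model: a page -> frame cache with LRU replacement."""
    def __init__(self, capacity, page_table):
        self.capacity = capacity
        self.page_table = page_table     # consulted on a TLB miss
        self.entries = OrderedDict()     # ordering tracks recency of use
        self.hits = self.misses = 0

    def lookup(self, page):
        if page in self.entries:                 # TLB hit: fast path
            self.hits += 1
            self.entries.move_to_end(page)       # mark as most recently used
            return self.entries[page]
        self.misses += 1                         # TLB miss: walk the page table
        frame = self.page_table[page]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)     # evict the LRU entry
        self.entries[page] = frame               # cache for future lookups
        return frame

tlb = TLB(capacity=2, page_table={0: 7, 1: 3, 2: 9})
for page in [0, 1, 0, 2, 0, 1]:   # page 1 is evicted before its final reuse
    tlb.lookup(page)
print(tlb.hits, tlb.misses)       # 2 hits, 4 misses
```

The final lookup of page 1 misses even though it was cached earlier, showing how the limited capacity and the replacement policy determine the TLB's effectiveness.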

Advantages of TLB:

 Faster Address Translation: By caching the most recent translations, the TLB
reduces the need to look up the page table every time, which improves the
performance of memory accesses.
 Reduced Latency: TLB hits can reduce the latency of memory access significantly,
especially in systems with high memory access rates.

Disadvantages of TLB:

 Overhead on Misses: If the required page number is not in the TLB (TLB miss), the
system must access the page table, which takes more time.
 Limited Cache Size: The TLB has limited space, and the effectiveness depends on
the size and replacement policy.
FACULTY IN-CHARGE HOD
