OS Cycle Test 2 Answer Key
ANSWER KEY
PART-A(10*2=20)
1. Differentiate between user-level threads and kernel-level threads.
User-level threads are unknown to the kernel, whereas the kernel is aware of kernel
threads.
User threads are scheduled by the thread library, and the kernel schedules kernel
threads.
Kernel threads need not be associated with a process, whereas every user thread
belongs to a process; if one thread blocks, the kernel can schedule another thread in the
application for execution.
2. What is a race condition?
When several processes access and manipulate the same data concurrently, and the
outcome of the execution depends on the particular order in which the accesses take place,
this is called a race condition. To avoid race conditions, only one process at a time should be
allowed to manipulate the shared variable.
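The lost update at the heart of a race condition can be shown with a minimal sketch. This is a deterministic replay of one unlucky interleaving of two "processes" P1 and P2 (plain variables stand in for real threads):

```python
# Two "processes" each read the shared counter, then write back read + 1.
# With this interleaving, P1's update is lost.
counter = 0
r1 = counter        # P1 reads 0
r2 = counter        # P2 reads 0, before P1 writes back
counter = r1 + 1    # P1 writes 1
counter = r2 + 1    # P2 overwrites with 1 -- P1's increment is lost
print(counter)      # 1, although two increments ran
```

With mutual exclusion around the read-modify-write, the result would be 2.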
3. List the methods for deadlock avoidance.
1. Safe-state checking
2. Resource-Allocation-Graph (RAG) algorithm (single instance of each resource type)
3. Banker's Algorithm (multiple instances of each resource type)
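The safety check at the core of the Banker's Algorithm can be sketched as follows; the matrices below are the classic five-process, three-resource textbook example, and the function name is illustrative:

```python
def is_safe(available, alloc, max_need):
    """Banker's algorithm safety check: True if some ordering
    lets every process acquire its maximum need and finish."""
    n = len(alloc)
    work = list(available)
    finish = [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            need = [m - a for m, a in zip(max_need[i], alloc[i])]
            if not finish[i] and all(nd <= w for nd, w in zip(need, work)):
                # Pretend process i runs to completion and releases everything.
                work = [w + a for w, a in zip(work, alloc[i])]
                finish[i] = True
                progressed = True
    return all(finish)

# Classic example: this state is safe (sequence P1, P3, P4, P0, P2 works).
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_need = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe([3, 3, 2], alloc, max_need))  # True
```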
4. Define monitors.
Monitors are a high-level synchronization construct used in concurrent programming
to manage access to shared resources in a way that prevents race conditions and ensures
mutual exclusion. A monitor provides a way to encapsulate shared data, operations, and
synchronization mechanisms, allowing processes to safely access and modify the shared data.
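In languages without built-in monitors, the same idea is commonly approximated with a mutex plus a condition variable. A minimal Python sketch (the `BoundedCounter` class is illustrative, not a standard API):

```python
import threading

class BoundedCounter:
    """Monitor-style object: one lock guards the shared value,
    and a condition variable lets callers wait inside the monitor."""
    def __init__(self, limit):
        self._cond = threading.Condition()   # lock + wait/notify in one
        self.value = 0
        self.limit = limit

    def increment(self):
        with self._cond:                     # enter the monitor
            while self.value >= self.limit:  # wait until there is room
                self._cond.wait()
            self.value += 1

    def decrement(self):
        with self._cond:
            self.value -= 1
            self._cond.notify()              # wake one waiting thread

c = BoundedCounter(limit=2)
c.increment()
c.increment()
print(c.value)  # 2
```

Because every method acquires the same lock, at most one thread is "inside" the monitor at a time, which is exactly the mutual exclusion the definition describes.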
9. What is overlay?
Overlay is a memory management technique used in computing to allow programs
that are larger than the available physical memory to run on systems with limited memory
resources. It involves dividing a program into smaller, more manageable pieces called
overlays, and loading only the necessary parts of the program into memory at any given time.
This helps optimize memory usage by swapping pieces of the program in and out of memory
as needed.
10. What is the purpose of paging the page table?
The purpose of paging the page table is to handle large address spaces efficiently,
especially in systems with virtual memory. In systems that use paging for memory
management, paging the page table helps address some of the limitations and inefficiencies
related to large, fixed-size page tables.
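As a concrete illustration, a 32-bit virtual address under a two-level ("paged page table") scheme is often split into a 10-bit outer index, a 10-bit inner index, and a 12-bit offset; the layout below is an assumption for the sketch, not mandated by any particular system:

```python
PAGE_OFFSET_BITS = 12   # 4 KB pages
INNER_BITS = 10         # index into a second-level page table
OUTER_BITS = 10         # index into the outer page directory

def split_two_level(vaddr):
    """Split a 32-bit virtual address for a two-level page table."""
    offset = vaddr & ((1 << PAGE_OFFSET_BITS) - 1)
    inner = (vaddr >> PAGE_OFFSET_BITS) & ((1 << INNER_BITS) - 1)
    outer = vaddr >> (PAGE_OFFSET_BITS + INNER_BITS)
    return outer, inner, offset

print(split_two_level(0x00403ABC))  # (1, 3, 2748) -- offset 2748 == 0xABC
```

Only the outer table and the inner tables actually in use need to be resident, which is how paging the page table tames large, sparse address spaces.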
PART-B(3*10=30)
ANSWER ANY THREE QUESTIONS
Goal:
Minimize busy waiting, meaning a process should not continuously check the condition in a
tight loop.
Ensure that the semaphore operations are atomic, meaning they can work correctly in a
multiprocessor system.
Semaphore Operations:
Test-and-Set:
Global Variables:
semaphore: Integer initialized to some value (the number of available resources or permits).
lock: Binary semaphore (usually 0 or 1) used to ensure mutual exclusion for accessing the
semaphore.
Test-and-Set Instruction:
function test_and_set(variable):
    old_value = variable
    variable = 1          // atomically set variable to 1
    return old_value
wait(semaphore) Operation:
1. Acquire the lock: Use the test-and-set instruction to ensure mutual exclusion when
modifying the semaphore.
2. Check the semaphore value: If the semaphore is greater than 0, decrement it and release the
lock.
3. Block the process if necessary: If the semaphore is 0, the process should wait until it is
signaled by another process.
function wait(semaphore):
    while test_and_set(lock) == 1:
        // busy-wait until the lock is free
    if semaphore > 0:
        semaphore = semaphore - 1
    else:
        block_process()   // place the caller in the waiting queue
    lock = 0              // release the lock
signal(semaphore) Operation:
1. Acquire the lock: Again, use the test-and-set instruction to ensure mutual exclusion.
2. Increment the semaphore: When a process signals, it increments the semaphore.
3. Unblock a waiting process: If there are any processes waiting, unblock one of them (if
needed).
function signal(semaphore):
    while test_and_set(lock) == 1:
        // busy-wait until the lock is free
    semaphore = semaphore + 1
    unblock_process()     // wake one waiting process, if any
    lock = 0              // release the lock
Blocking and Unblocking: Processes that cannot proceed (because the semaphore is
0) are blocked and placed in a waiting queue. The blocking mechanism is dependent
on the specific system or operating environment.
The wait() operation contains a busy-wait loop while attempting to acquire the lock.
However, once the lock is obtained, the semaphore value is checked and modified without
busy-waiting. If the semaphore is 0, the process is blocked (waiting on an event or signal),
which reduces busy waiting.
The signal() operation also contains a busy-wait loop for acquiring the lock but avoids busy
waiting during the semaphore modification. If there are blocked processes, one of them will
be unblocked, allowing for efficient resource utilization.
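The scheme above can be condensed into a runnable single-threaded simulation. Here `test_and_set` and the wait queue are modeled in plain Python for illustration; a real implementation relies on an atomic hardware instruction and kernel-level blocking:

```python
from collections import deque

class SpinSemaphore:
    """Sketch of a semaphore built on a test-and-set spinlock."""
    def __init__(self, value):
        self.value = value      # number of available permits
        self.lock = 0           # 0 = free, 1 = held
        self.waiting = deque()  # ids of blocked "processes"

    def test_and_set(self):
        old = self.lock         # in hardware, read + set happen atomically
        self.lock = 1
        return old

    def wait(self, pid):
        while self.test_and_set() == 1:
            pass                      # busy-wait only for the short lock
        if self.value > 0:
            self.value -= 1           # take a permit and proceed
            self.lock = 0
            return True
        self.waiting.append(pid)      # block: join the waiting queue
        self.lock = 0
        return False

    def signal(self):
        while self.test_and_set() == 1:
            pass
        if self.waiting:
            pid = self.waiting.popleft()  # hand the permit to a waiter
        else:
            self.value += 1
            pid = None
        self.lock = 0
        return pid

sem = SpinSemaphore(1)
print(sem.wait("P1"))   # True  -- P1 takes the only permit
print(sem.wait("P2"))   # False -- P2 blocks
print(sem.signal())     # P2    -- P2 is unblocked
```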
Solution
Logical Address:
1. Segment Number (S): Identifies which segment is being referred to (e.g., code, data, or
stack).
2. Offset (O): The offset within that segment, specifying the exact position in the segment.
Physical Address:
The physical address is the actual address in physical memory where the data is located.
Segment Table: Contains the base address (starting point) and limit (length) for each
segment. The base address is the starting address in physical memory, while the limit
specifies the maximum size of the segment.
Extract the Segment Number (S): The segment number is used as an index into the
segment table to locate the corresponding segment descriptor.
Look Up the Base Address and Limit: Using the segment number, retrieve the base
address and limit for that segment from the segment table.
Check the Offset: Ensure that the offset (O) is within the segment's limit. If the offset
is greater than the limit, a segmentation fault occurs because the program is trying to
access memory outside its allowed range.
Calculate the Physical Address: The physical address is calculated by adding the
offset (O) to the base address of the segment:
Physical Address = Base Address + Offset
The result gives the actual location in physical memory where the data resides.
Example:
S = 2 (Segment number)
O = 1000 (Offset)
Suppose the segment-table entry for segment 2 has a base address of 5000 and a limit
greater than 1000, so the offset is valid. Then Physical Address = 5000 + 1000 = 6000.
Thus, the logical address (2, 1000) maps to the physical address 6000.
Key Points:
Segmentation divides memory into segments, which can have different sizes.
The logical address consists of a segment number and an offset.
The segment table maps the segment number to a base address and a limit.
The offset is checked to ensure it's within the segment's boundaries.
The final physical address is calculated by adding the offset to the base address of the
segment.
This system allows for more flexible memory allocation compared to paging and is
particularly useful for organizing programs into logical units like code, data, and stack.
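The translation steps above can be sketched as follows; the base and limit values are assumptions chosen to match the example's result of 6000:

```python
def translate(segment_table, s, offset):
    """Map a (segment, offset) logical address to a physical address."""
    base, limit = segment_table[s]
    if offset >= limit:                 # offset outside the segment
        raise MemoryError("segmentation fault")
    return base + offset

# Assumed segment table: segment 2 starts at 5000 with limit 1500.
table = {2: (5000, 1500)}
print(translate(table, 2, 1000))  # 6000
```

An offset of, say, 2000 would exceed the limit of 1500 and raise the fault instead of silently reading another segment's memory.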
14. Explain the given memory management techniques: (i) Partitioned allocation (ii)
Paging and translation look-aside buffer
(i) Partitioned Allocation
Fixed Partitioning:
Main memory is divided at system start-up into a fixed number of partitions of fixed size,
each holding at most one process. The degree of multiprogramming is bounded by the
number of partitions, and a process smaller than its partition wastes the leftover space,
causing internal fragmentation.
Example: a 32 MB memory split into four 8 MB partitions; a 5 MB process loaded into one
partition leaves 3 MB unused.
Dynamic Partitioning:
Partitions are created at run time with exactly the size each process requests, eliminating
internal fragmentation. As processes terminate, variable-sized holes accumulate between
allocations, causing external fragmentation, which compaction can reduce.
Example: processes of 4 MB, 10 MB, and 6 MB are allocated contiguously; when the 10 MB
process exits, it leaves a 10 MB hole that a later 12 MB process cannot use.
(ii) Paging
Paging is a memory management technique that eliminates external fragmentation by
dividing logical memory into fixed-size blocks called pages and physical memory into
blocks of the same size called frames.
Key Concepts:
Page: a fixed-size block of logical memory.
Frame: a fixed-size block of physical memory, the same size as a page.
Page Table: maps each page number to the frame number that holds it.
Paging Process:
The CPU-generated logical address is split into a page number and an offset. The page
number indexes the page table to obtain the frame number, and the physical address is
formed by combining the frame number with the offset.
Example of Paging:
Let’s assume:
The logical address consists of 10 bits for the page number and 12 bits for the offset
(so the page size is 2^12 = 4 KB).
The physical address is split into 14 bits for the frame number and 12 bits for the
offset.
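With those sizes, the translation can be sketched as follows; the page-table contents are assumed for illustration:

```python
OFFSET_BITS = 12
PAGE_SIZE = 1 << OFFSET_BITS          # 4 KB pages

def translate(page_table, logical_addr):
    """Single-level paging: 10-bit page number, 12-bit offset."""
    page = logical_addr >> OFFSET_BITS        # high bits select the page
    offset = logical_addr & (PAGE_SIZE - 1)   # low 12 bits stay unchanged
    frame = page_table[page]                  # 14-bit frame number here
    return (frame << OFFSET_BITS) | offset

page_table = {3: 7}                   # assumed: page 3 lives in frame 7
print(hex(translate(page_table, 0x3123)))  # 0x7123
```

The offset passes through untouched; only the page number is replaced by the frame number.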
Advantages of Paging:
No external fragmentation; any free frame can be allocated to any page.
Simple allocation and swapping, since all pages and frames are the same size.
Disadvantages of Paging:
Internal fragmentation in the last page of each process.
Extra memory and access-time overhead for maintaining and consulting the page table.
A Translation Look-Aside Buffer (TLB) is a hardware cache that stores recent translations
of virtual memory addresses to physical memory addresses in a paging system. The TLB
improves the speed of address translation by reducing the need to access the page table
frequently.
1. Address Translation: When the CPU generates a virtual address, it is divided into a
page number and an offset.
2. Check TLB: The page number is checked in the TLB to see if it has already been
mapped to a frame in physical memory.
- If the page number is found in the TLB (TLB hit), the corresponding frame
number is retrieved quickly, and the physical address is generated by
combining the frame number with the offset.
- If the page number is not found in the TLB (TLB miss), the page table is
accessed to get the frame number, which is then added to the TLB for future
use.
3. Replacement Policy: Since the TLB is a cache with limited entries, an appropriate
replacement policy (e.g., LRU (Least Recently Used)) is used when new entries need
to be cached.
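Steps 1-3 can be sketched with a small software model; an `OrderedDict` plays the role of the hardware TLB with LRU replacement, and the capacity and page-table contents are assumptions:

```python
from collections import OrderedDict

class TLB:
    """Tiny software model of a TLB with LRU replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()          # page -> frame, LRU first
        self.hits = self.misses = 0

    def lookup(self, page, page_table):
        if page in self.entries:              # TLB hit
            self.hits += 1
            self.entries.move_to_end(page)    # mark as most recently used
            return self.entries[page]
        self.misses += 1                      # TLB miss: walk the page table
        frame = page_table[page]
        self.entries[page] = frame
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        return frame

page_table = {0: 5, 1: 9, 2: 4}               # assumed page-table contents
tlb = TLB(capacity=2)
for page in [0, 1, 0, 2, 0]:
    tlb.lookup(page, page_table)
print(tlb.hits, tlb.misses)  # 2 3
```

In the access sequence above, the second and final lookups of page 0 hit; inserting page 2 evicts page 1, the least recently used entry.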
Advantages of TLB:
Faster Address Translation: By caching the most recent translations, the TLB
reduces the need to look up the page table every time, which improves the
performance of memory accesses.
Reduced Latency: TLB hits can reduce the latency of memory access significantly,
especially in systems with high memory access rates.
Disadvantages of TLB:
Overhead on Misses: If the required page number is not in the TLB (TLB miss), the
system must access the page table, which takes more time.
Limited Cache Size: The TLB has limited space, and the effectiveness depends on
the size and replacement policy.
FACULTY IN-CHARGE HOD