Operating System Pre Que Paper

The document covers various concepts related to operating systems, including I/O bound processes, the fork() system call, bootstrap loaders, context switches, and mutual exclusion. It discusses process management, scheduling algorithms, fragmentation, and memory management, including virtual memory and semaphores. Additionally, it addresses the critical section problem, advantages of multithreading, and page replacement algorithms, providing a comprehensive overview of key operating system principles.

Uploaded by Rohan Kate

a) I/O bound process:

An I/O bound process is one where the system spends more time performing Input/Output
(I/O) operations (such as reading from or writing to disk, keyboard, or network) than
executing computations. These processes are typically limited by the speed of I/O devices
rather than the CPU.

b) Purpose of the fork() system call:


The fork() system call is used to create a new process by duplicating the calling process.
The newly created process (child process) is a copy of the parent process, but with a different
process ID. It allows for multitasking and process creation.

c) Bootstrap Loader:
A Bootstrap Loader, also known as the bootloader, is a small program that is responsible for
loading the operating system into memory during the system startup (boot process). It is the
first program that runs when a computer is powered on and helps initialize hardware
components and load the operating system.

d) Context switch:
A context switch refers to the process of saving the state of a currently running process
(including register values, program counter, and other data) and restoring the state of another
process. This is done by the operating system when switching between tasks or processes in a
multitasking environment.

e) Priority scheduling suffers from starvation:


True.
In priority scheduling, processes with higher priority are executed before those with lower
priority. If there are always high-priority processes arriving, low-priority processes may
never get executed, leading to starvation, where lower-priority processes are indefinitely
delayed.

f) Mutual Exclusion:
Mutual exclusion is a property of concurrency control in which multiple processes or threads
are prevented from simultaneously accessing a shared resource or critical section. Only one
process can access the resource at a time, ensuring no conflict or data inconsistency.

g) Race condition:
A race condition occurs in a multi-threaded or multi-process environment when the outcome
of a process depends on the sequence or timing of uncontrollable events, leading to
unpredictable behavior. This happens when two or more processes or threads access shared
data concurrently and try to change it, causing inconsistent results.

h) Limit register:
The limit register is a hardware register used to define the upper boundary or limit of a
process's address space in memory. It is used in conjunction with the base register to ensure
that a process cannot access memory outside its allocated space, helping with memory
protection.

i) Frame:
A frame is a fixed-size block of memory used in virtual memory systems. It represents a unit
of memory that can be mapped to a page of virtual memory. The physical memory is divided
into frames, and virtual memory is divided into pages, which are mapped to the frames.

j) Advantages of open-source operating system:

• Cost: Open-source operating systems are typically free, which can significantly
reduce costs for users and organizations.
• Customizability: Users can modify and tailor the OS to their specific needs and
requirements.
• Security: Open-source software allows anyone to inspect and improve the code,
leading to quicker identification and resolution of security vulnerabilities.
• Community support: Open-source systems have large communities that contribute
to troubleshooting, updates, and improvements.
• Transparency: Since the source code is open, users can understand exactly how the
system works and identify potential issues or risks.


a) What is the critical section problem?

The critical section problem refers to a situation in concurrent programming where multiple
processes or threads access a shared resource (such as memory or a file) simultaneously, and
at least one of them modifies the resource. This leads to data inconsistency, errors, and
unpredictable behavior. The goal of solving the critical section problem is to design a
mechanism (often called a mutex or semaphore) that ensures only one process or thread can
access the critical section at a time, preventing conflicts and ensuring data integrity.

b) What is the role of the dispatcher?

The dispatcher is a part of the operating system that is responsible for giving control of the
CPU to a process selected by the scheduler. The dispatcher is responsible for the context
switching between processes. This involves saving the state of the currently running process
and loading the state of the next process to be executed. The dispatcher ensures that processes
are executed in the correct order and time slots, allowing multitasking.

c) Write the benefits of virtual memory?

The benefits of virtual memory include:

1. Isolation of Processes: Virtual memory ensures that each process has its own address
space, which prevents one process from accessing the memory of another process.
This improves security and stability.
2. Efficient Use of RAM: Virtual memory allows the system to use hard disk space as if
it were additional RAM, making it possible to run larger programs or multiple
programs simultaneously even if physical memory is limited.
3. Simplifies Memory Management: Virtual memory allows the operating system to
manage memory more easily by using paging or segmentation techniques. This helps
in efficient memory allocation and deallocation.
4. Program Execution Flexibility: It enables programs to run even if they do not fit
entirely in RAM, by swapping portions of the program in and out of memory as
needed.

d) Explain any two advantages of multithreading?

Two advantages of multithreading are:

1. Improved Performance and Responsiveness: Multithreading allows multiple tasks
to be executed concurrently, leading to better utilization of CPU resources. This
improves performance and responsiveness, especially for applications that need to
handle multiple tasks, like web browsers or games.
2. Resource Sharing: Threads within the same process share the same memory space,
which allows for efficient communication and data sharing between threads. This is
more efficient than processes that require inter-process communication (IPC)
mechanisms.

e) Write the system calls under the category of process management?

The system calls under the category of process management include:

1. fork(): Used to create a new process by duplicating the current process.
2. exec(): Replaces the current process image with a new process image.
3. wait(): Causes a process to wait until one of its child processes terminates.
4. exit(): Terminates the current process and returns an exit status to the parent process.
5. getpid(): Returns the process ID (PID) of the calling process.
6. getppid(): Returns the parent process ID (PPID) of the calling process.
7. kill(): Sends a signal to a process, which can terminate or interrupt the process.

Q3) Attempt any TWO of the following. [2×4=8]

a) What is a process? Explain the different types of process states.

A process is a program in execution. It is an active entity that contains the program code and
its current activity. A process is the fundamental unit of execution in a computer system, and
it has its own address space, code, data, and other resources.

Types of Process States:

1. New: A process is being created.
2. Ready: The process is in memory and is waiting to be assigned to a CPU for execution.
3. Running: The process is currently being executed by the CPU.
4. Waiting (Blocked): The process is waiting for some event (e.g., I/O completion) to occur.
5. Terminated (Exit): The process has finished execution or was aborted.

The process transitions between these states based on various events like CPU availability,
I/O completion, etc.
b) What is fragmentation? Explain the types of Fragmentation.

Fragmentation is a phenomenon that occurs in memory management when the memory is
allocated and deallocated in small chunks, leading to wasted space in memory. There are two
main types of fragmentation:

1. External Fragmentation:
o Occurs when free memory blocks are scattered in various locations in the system.
o Over time, as processes are allocated and deallocated, free memory becomes
fragmented into smaller blocks, making it difficult to allocate larger contiguous
memory blocks.
o This can result in memory being available but not being usable for larger requests.
2. Internal Fragmentation:
o Happens when memory is allocated in fixed-sized blocks, but the process does not
use the entire block.
o The unused portion of the allocated block results in wasted space.
o For example, if a block of 4 KB is allocated but only 3 KB is used, the remaining 1 KB
is wasted.

c) Consider the following set of processes with the length of CPU burst time and arrival time given in
milliseconds.

Process | Burst time | Arrival time
P1 | 4 | 2
P2 | 6 | 0
P3 | 2 | 1

Illustrate the execution of these processes using the Round Robin (RR) CPU scheduling
algorithm, considering the time quantum is 3. Calculate average waiting time and average
turn-around time. Also, draw the Gantt chart.

Round Robin Scheduling:

The Round Robin (RR) scheduling algorithm assigns each process a fixed time slice or
quantum. When a process's time quantum expires, it is moved to the end of the ready queue,
and the CPU is assigned to the next process in line.

Time Quantum = 3 ms

Execution Order:

1. At t = 0, only P2 has arrived; it runs for one full quantum (0–3 ms), leaving 3 ms of its
burst. During this slice, P3 arrives at t = 1 and P1 at t = 2, so the ready queue is now
P3, P1, P2.
2. P3 runs from t = 3 to t = 5 and completes its entire 2 ms burst.
3. P1 runs from t = 5 to t = 8, completing 3 ms of its burst; 1 ms remains.
4. P2 resumes from t = 8 to t = 11 and completes its remaining 3 ms.
5. Finally, P1 runs from t = 11 to t = 12 and completes its remaining 1 ms.

Gantt Chart:
| P2 | P3 | P1 | P2 | P1 |
0 3 5 8 11 12

Calculating Waiting Time (WT) and Turnaround Time (TAT):

• Waiting Time (WT) = Turnaround Time (TAT) - Burst Time.
• Turnaround Time (TAT) = Completion Time - Arrival Time.

For each process:

• P1: Completion time = 12, Arrival time = 2
o TAT = 12 - 2 = 10
o WT = TAT - Burst Time = 10 - 4 = 6
• P2: Completion time = 11, Arrival time = 0
o TAT = 11 - 0 = 11
o WT = TAT - Burst Time = 11 - 6 = 5
• P3: Completion time = 5, Arrival time = 1
o TAT = 5 - 1 = 4
o WT = TAT - Burst Time = 4 - 2 = 2

Average Waiting Time (AWT) = (WT_P1 + WT_P2 + WT_P3) / 3 = (6 + 5 + 2) / 3 = 4.33 ms

Average Turnaround Time (ATAT) = (TAT_P1 + TAT_P2 + TAT_P3) / 3 = (10 + 11 + 4) / 3 = 8.33 ms

Q4) Attempt any TWO of the following. [2×4=8]

a) What is semaphore? Explain the dining philosopher’s problem.

A semaphore is a synchronization tool used to manage concurrent processes. It is a variable
or abstract data type used to control access to a shared resource by multiple processes in a
concurrent system, like an operating system.

There are two types of semaphores:

1. Counting Semaphore: Can take any non-negative integer value.
2. Binary Semaphore (Mutex): Takes only values 0 or 1.

Dining Philosopher's Problem:

This is a classic synchronization problem in computer science where a group of philosophers
sit at a round table with a fork placed between each pair of adjacent philosophers. The
philosophers alternate between thinking and eating. To eat, a philosopher needs two forks,
one on the left and one on the right. The challenge is to design a solution that avoids deadlock
and ensures that all philosophers get a chance to eat.
The problem is solved using semaphores to control the access to forks and avoid situations
where philosophers are waiting indefinitely (deadlock) or some philosophers are not allowed
to eat (starvation).

b) Which are the different types of schedulers? Explain the working of short-term scheduler?

Types of Schedulers:

1. Long-Term Scheduler (Job Scheduler): Decides which processes are admitted into the ready
queue for execution. It controls the degree of multiprogramming, i.e., the number of
processes in the ready queue.
2. Short-Term Scheduler (CPU Scheduler): Selects which process from the ready queue will be
executed next by the CPU. It runs frequently (milliseconds) and determines which process
should be given CPU time next.
3. Medium-Term Scheduler: It temporarily removes processes from the memory (swapping
out) and later brings them back (swapping in). It manages the degree of multi-programming
by swapping processes in and out of the main memory.

Short-Term Scheduler: The short-term scheduler is responsible for making decisions about
which process will run next, selecting a process from the ready queue and allocating the CPU
to it. It uses scheduling algorithms such as First-Come, First-Served (FCFS), Round Robin
(RR), Shortest Job Next (SJN), etc.

c) Consider the following page replacement string:

String: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 3
How many page faults would occur for the following page replacement algorithms assuming
three frames?

i) FIFO (First-In-First-Out)

• The pages are loaded into frames in the order they arrive.
• When a new page arrives and all frames are full, the oldest page is replaced.

Page sequence: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 3

Steps:

1. 1 → (Page fault)
2. 2 → (Page fault)
3. 3 → (Page fault)
4. 4 → (Page fault, replace 1)
5. 2 → (No page fault, already in memory)
6. 1 → (Page fault, replace 2 — now the oldest page in memory)
7. 5 → (Page fault, replace 3)
8. 6 → (Page fault, replace 4)
9. 2 → (Page fault, replace 1)
10. 1 → (Page fault, replace 5)
11. 3 → (Page fault, replace 6)
Total page faults for FIFO = 10.

ii) LRU (Least Recently Used)

• The page that has not been used for the longest period is replaced.

Steps:

1. 1 → (Page fault)
2. 2 → (Page fault)
3. 3 → (Page fault)
4. 4 → (Page fault, replace 1)
5. 2 → (No page fault, already in memory)
6. 1 → (Page fault, replace 3)
7. 5 → (Page fault, replace 4)
8. 6 → (Page fault, replace 2)
9. 2 → (Page fault, replace 1)
10. 1 → (Page fault, replace 5)
11. 3 → (Page fault, replace 6)

Total page faults for LRU = 10.

Q5) Attempt any ONE of the following. [1×3=3]

a) Write a short note on MMU (Memory Management Unit).

The Memory Management Unit (MMU) is a hardware component that plays a crucial role
in the management and protection of memory in a computer system. It is responsible for
translating virtual memory addresses into physical memory addresses and ensuring that
processes can access only their allocated memory regions.

Key functions of the MMU include:

1. Address Translation: The MMU translates virtual addresses generated by a program
(virtual memory) into physical addresses in the system's RAM (physical memory).
This translation is crucial for virtual memory systems where programs are provided
with an abstract view of memory that may not match the actual hardware
configuration.
2. Memory Protection: It ensures that processes do not access memory that they are not
authorized to use, thus preventing one process from corrupting the memory of
another. The MMU enforces access control based on memory protection bits.
3. Cache Management: The MMU may also interact with the CPU cache and control
memory hierarchy, ensuring that frequently accessed data is kept in fast-access
memory.
4. Page Table Management: The MMU uses data structures like page tables to keep
track of the mapping between virtual addresses and physical addresses. This allows it
to handle the page-based memory management system efficiently.
5. Support for Virtual Memory: The MMU enables the use of virtual memory by
translating virtual addresses to physical addresses and managing swapping between
RAM and secondary storage (e.g., hard disk).

By providing these functions, the MMU enables features such as process isolation, efficient
memory usage, and support for virtual memory systems.

e) Explain the Layered structure of the operating system.

The Layered structure of an operating system organizes the system into hierarchical layers,
with each layer performing specific tasks. Each layer interacts only with its adjacent layers,
and the upper layers rely on the services provided by the lower layers. This modular approach
helps to organize complex systems, making them more manageable and maintainable.

A typical layered operating system architecture consists of the following layers:

1. Layer 0: Hardware Layer
The base layer consists of the physical hardware, such as the CPU, memory, disk, and
I/O devices. The hardware layer provides the fundamental resources that the operating
system utilizes.
2. Layer 1: Kernel Layer
The kernel is the core of the operating system. It is responsible for managing
hardware resources, process scheduling, memory management, device management,
and system calls. The kernel acts as an intermediary between the hardware and the
higher layers of the operating system.
3. Layer 2: System Call Interface (API)
This layer provides an interface between user applications and the kernel. It allows
programs to interact with the operating system using system calls, such as file
operations, process control, and input/output handling.
4. Layer 3: User Programs
This layer consists of user applications and programs that run on the operating system.
These programs rely on the services provided by the lower layers to perform tasks,
such as word processing, web browsing, etc.

Advantages of Layered Structure:

• Modularity: Each layer is designed to perform specific tasks, making the system easier to
understand, develop, and maintain.
• Isolation: Changes in one layer (e.g., hardware improvements or new system calls) can be
made without significantly affecting the other layers.
• Easier Debugging: Bugs or issues can be traced to specific layers, making debugging simpler.

However, the layered structure can introduce inefficiencies because of the overhead of
communication between layers, especially if strict separation between layers is enforced.


Question paper 2
Q1) Attempt any EIGHT of the following (out of ten) : [8 × 1 = 8]

a) What is a shell?

• A shell is a command-line interface (CLI) that allows users to interact with the
operating system by typing commands. It acts as a mediator between the user and the
operating system, interpreting and executing the user's commands. Popular shells
include the Bourne shell (sh), Bash, and Zsh.

b) What is a thread?

• A thread is the smallest unit of a CPU's execution. It is part of a process that can run
independently, performing tasks concurrently with other threads. Multiple threads
within a process share the same resources, such as memory, but have their own
execution context.

c) List types of system calls.

• Types of system calls include:
1. Process Control: e.g., fork(), exit(), exec().
2. File Management: e.g., open(), read(), write(), close().
3. Device Management: e.g., ioctl(), read(), write().
4. Information Maintenance: e.g., getpid(), gettime().
5. Communication: e.g., pipe(), msgget(), shmget().

d) State the role of medium-term scheduler.

• The medium-term scheduler is responsible for swapping processes in and out of the
main memory (RAM). It controls the degree of multiprogramming by temporarily
removing processes from memory (swapping out) and later bringing them back
(swapping in). This helps optimize the use of available memory and CPU.

e) What is CPU - I/O burst cycle?

• The CPU-I/O burst cycle refers to the alternating periods during which a process
executes on the CPU (CPU burst) and performs I/O operations (I/O burst). The
process moves between these two states, with CPU bursts requiring CPU processing
time and I/O bursts involving waiting for input/output operations to complete.

f) What is a race condition?

• A race condition occurs when multiple processes or threads access shared resources
(like memory or files) concurrently, and the outcome depends on the order of
execution. If not properly synchronized, it can lead to unpredictable or incorrect
behavior.

g) Define response time.

• Response time is the time taken from when a user submits a request to when the
system responds. It includes the time for the system to process the request and deliver
the output. For interactive systems, response time is crucial to ensure a good user
experience.

h) Define Semaphore.

• A semaphore is a synchronization tool used to manage access to shared resources in
concurrent programming. It uses a counter to control access. A binary semaphore (or
mutex) can be used for mutual exclusion, while a counting semaphore allows
managing multiple instances of a resource.

i) What is a page table?

• A page table is a data structure used in virtual memory systems to map virtual
addresses to physical addresses. Each entry in the page table corresponds to a page in
memory, helping the operating system to manage the mapping between virtual and
physical memory locations.

j) What is segmentation?

• Segmentation is a memory management technique where a process is divided into
variable-sized segments, such as code, data, and stack segments. Each segment has a
base address and a length, allowing the operating system to handle memory more
efficiently than with paging.

Q2) Attempt any four of the following : (out of five) [4 × 2 = 8]

a) What is an operating system? List objectives of operating system.

• Operating System (OS): An operating system is system software that manages
hardware and software resources on a computer. It provides a user interface, manages
processes, memory, and hardware, and facilitates communication between software
and hardware.

Objectives of an Operating System:

1. Resource Management: Efficient management of hardware resources such as
CPU, memory, and I/O devices.
2. Process Management: Scheduling and execution of processes, ensuring fair
CPU time allocation.
3. Memory Management: Allocation and deallocation of memory to processes.
4. Security and Protection: Safeguarding data and system resources from
unauthorized access.
5. User Interface: Providing an interface for users to interact with the system.
6. File Management: Managing the storage, retrieval, and organization of files
on storage devices.

b) Define critical section problem. Explain in detail.


• The Critical Section Problem occurs when multiple processes need to access a
shared resource (e.g., a variable, file, or memory segment), and this access must be
controlled to avoid inconsistencies. The critical section is the part of the program
where shared resources are accessed. The issue arises if more than one process is
allowed to enter the critical section at the same time.

Conditions to Solve the Critical Section Problem:

1. Mutual Exclusion: Only one process can be in the critical section at any
given time.
2. Progress: If no process is in the critical section and more than one process
wants to enter, the system must ensure that one process can proceed.
3. Bounded Waiting: A process must not be delayed indefinitely from entering
the critical section.

Solutions include using locks, semaphores, and monitors.

c) Compare LFU and MFU with two points.

• LFU (Least Frequently Used) and MFU (Most Frequently Used) are page
replacement algorithms used in operating systems for managing page faults in
memory.

Comparison:

1. Page Replacement Logic:
▪ LFU: Replaces the page that has been used the least number of times.
▪ MFU: Replaces the page that has been used the most number of times.
2. Efficiency:
▪ LFU: Tends to be better for workloads where some pages are used
significantly more frequently than others.
▪ MFU: Generally considered less efficient, as it assumes the most
frequently used pages are the least likely to be needed again.

d) What is the purpose of a scheduling algorithm?

• The purpose of a scheduling algorithm is to determine the order in which processes
are executed by the CPU. Scheduling algorithms aim to optimize performance by
minimizing waiting time, turnaround time, and maximizing CPU utilization. They
decide how processes share the CPU in a time-sharing system and ensure fair
distribution of CPU time among processes.

Q3) Attempt any two of the following: [2 × 4 = 8]

a) With the help of a diagram, describe process states.

A process in an operating system can be in one of the following states:

1. New: The process is being created.
2. Ready: The process is ready to execute but waiting for CPU time.
3. Running: The process is currently being executed on the CPU.
4. Waiting (Blocked): The process is waiting for some event to occur, like I/O completion.
5. Terminated (Exit): The process has finished execution.

Diagram of Process States:

+-----+     +-------+     +---------+     +------------+
| New +---->+ Ready +---->+ Running +---->+ Terminated |
+-----+     +---+---+     +---+-+---+     +------------+
                ^   ^         | |
                |   +---------+ |  (preempted: time slice expires)
                |               v  (waits for I/O or an event)
                |          +---------+
                +----------+ Waiting |
             (event done)  +---------+

In the diagram:

• A process moves from New to Ready when it is admitted and prepared to execute.
• From Ready, it transitions to Running when the scheduler assigns it the CPU.
• A Running process moves to Waiting if it needs I/O or another event; once the event
completes, it returns to Ready.
• A Running process may also be preempted back to Ready when its time slice expires.
• After completing execution, a process enters the Terminated state.

b) Consider the following set of processes and burst times. Illustrate execution of processes using FCFS
and preemptive SJF CPU scheduling algorithm and calculate turnaround time, waiting time, average
turnaround time, average waiting time.

Processes:

Process Burst Time (ms) Arrival Time (ms)
P0 5 1
P1 3 0
P2 2 2
P3 4 3
P4 8 2

First-Come, First-Served (FCFS) Scheduling:

• Order of Execution: P1 → P0 → P2 → P4 → P3
• The processes execute in the order of their arrival time, and we will calculate the
Turnaround Time (TAT) and Waiting Time (WT) based on this order.

Gantt Chart for FCFS:

| P1 | P0 | P2 | P4 | P3 |
0 3 8 10 18 22

Turnaround Time (TAT) = Completion Time - Arrival Time
Waiting Time (WT) = Turnaround Time - Burst Time

For each process:

• P1: TAT = 3 - 0 = 3, WT = 3 - 3 = 0
• P0: TAT = 8 - 1 = 7, WT = 7 - 5 = 2
• P2: TAT = 10 - 2 = 8, WT = 8 - 2 = 6
• P4: TAT = 18 - 2 = 16, WT = 16 - 8 = 8
• P3: TAT = 22 - 3 = 19, WT = 19 - 4 = 15

Average Turnaround Time: (3 + 7 + 8 + 16 + 19) / 5 = 53 / 5 = 10.6 ms
Average Waiting Time: (0 + 2 + 6 + 8 + 15) / 5 = 31 / 5 = 6.2 ms

Preemptive Shortest Job First (SJF) Scheduling:

In Preemptive SJF (also called Shortest Remaining Time First), the CPU always runs the
process with the shortest remaining burst time; a newly arriving process preempts the
running one if its burst is shorter than the current process's remaining time.

Order of Execution: P1 → P2 → P3 → P0 → P4

• P1 executes first (3 ms; no arriving process has a shorter burst than P1's remaining time,
so no preemption actually occurs here), then P2 (2 ms), followed by P3 (4 ms), then P0
(5 ms), and finally P4 (8 ms).

Gantt Chart for Preemptive SJF:

| P1 | P2 | P3 | P0 | P4 |
0 3 5 9 14 22

Turnaround Time (TAT) = Completion Time - Arrival Time
Waiting Time (WT) = Turnaround Time - Burst Time

For each process:

• P1: TAT = 3 - 0 = 3, WT = 3 - 3 = 0
• P2: TAT = 5 - 2 = 3, WT = 3 - 2 = 1
• P3: TAT = 9 - 3 = 6, WT = 6 - 4 = 2
• P0: TAT = 14 - 1 = 13, WT = 13 - 5 = 8
• P4: TAT = 22 - 2 = 20, WT = 20 - 8 = 12

Average Turnaround Time: (3 + 3 + 6 + 13 + 20) / 5 = 45 / 5 = 9 ms
Average Waiting Time: (0 + 1 + 2 + 8 + 12) / 5 = 23 / 5 = 4.6 ms
c) What is fragmentation? Explain with all its types.

Fragmentation refers to the inefficient use of memory in a computer system, where free
memory is broken into small blocks and scattered throughout the memory, resulting in wasted
space. Fragmentation is of two types:

1. External Fragmentation:
o This occurs when free memory is scattered in small blocks, but the total free
memory is enough to satisfy a request. However, no single contiguous block of free
memory is large enough to allocate the memory required by a process.
o Example: Suppose there are several small free blocks of memory, but no large block
is available, so a process cannot be allocated even though there is enough free
memory overall.
2. Internal Fragmentation:
o This happens when memory is allocated in fixed-sized blocks, and the process does
not fully use the entire block. The unused space within an allocated block is wasted.
o Example: A process is allocated a block of 100 KB, but it only needs 80 KB. The
remaining 20 KB is wasted and cannot be used by other processes.

Q4) Attempt any two of the following: [2 × 4 = 8]

a) Describe PCB with all its fields.

The Process Control Block (PCB) is a data structure that contains information about a
process in the operating system. It is used by the OS to manage and control processes during
their execution.

Fields in PCB:

1. Process ID (PID): Unique identifier for each process.
2. Process State: The current state of the process (new, ready, running, waiting, terminated).
3. Program Counter (PC): Contains the address of the next instruction to be executed.
4. CPU Registers: Includes various CPU registers (accumulator, general-purpose registers, etc.)
to store the current state of the process.
5. Memory Management Information: Information related to memory allocation such as base
and limit registers or page tables.
6. Scheduling Information: Contains information needed by the scheduler, such as process
priority, pointers to other PCBs in the ready queue, etc.
7. Accounting Information: Includes the process’s CPU usage, time limits, and other statistics.
8. I/O Status Information: Information about I/O devices used by the process (e.g., which
devices are assigned to it).
9. Open Files List: List of all files that the process is currently using.

b) Which three requirements must be satisfied while designing a solution to the critical section
problem? Explain each in detail.

A solution to the critical section problem must satisfy the following three requirements:
1. Mutual Exclusion:
o Only one process can be in its critical section at any given time. This ensures that the
shared resource is not accessed by more than one process simultaneously,
preventing data inconsistency.
2. Progress:
o If no process is in its critical section and one or more processes wish to enter, then
the selection of the next process to enter the critical section must not be delayed
indefinitely. This means that there should be no unnecessary delay or starvation.
3. Bounded Waiting:
o A process must have a limit on the number of times other processes are allowed to
enter their critical section before the process itself is granted access to the critical
section. This ensures that no process waits indefinitely (starvation), and every
process gets a fair chance.
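In practice, mutual exclusion is most often obtained with a lock. A minimal Python sketch (illustrative only; the helper name `increment` and the counts are made up) showing that a lock prevents lost updates on a shared counter:

```python
import threading

counter = 0
lock = threading.Lock()
N = 50_000

def increment():
    global counter
    for _ in range(N):
        with lock:          # mutual exclusion: one thread at a time in here
            counter += 1    # critical section: a non-atomic read-modify-write

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000: no lost updates
```

Without the `with lock:` line, interleaved read-modify-write steps could lose updates, violating mutual exclusion.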

c) Consider the following reference string: 1,2,3,4,2,1,5,6,2,1,3. Assume 3 frames. Find the number of
page faults according to FIFO, OPT page replacement algorithms.

Reference String: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 3
Frames: 3

FIFO (First-In, First-Out):

1. 1 → Fault, frames [1]
2. 2 → Fault, frames [1, 2]
3. 3 → Fault, frames [1, 2, 3]
4. 4 → Fault, replace 1 (oldest) → [2, 3, 4]
5. 2 → Hit
6. 1 → Fault, replace 2 → [3, 4, 1]
7. 5 → Fault, replace 3 → [4, 1, 5]
8. 6 → Fault, replace 4 → [1, 5, 6]
9. 2 → Fault, replace 1 → [5, 6, 2]
10. 1 → Fault, replace 5 → [6, 2, 1]
11. 3 → Fault, replace 6 → [2, 1, 3]

Total page faults (FIFO): 10

OPT (Optimal):

1. 1 → Fault, frames [1]
2. 2 → Fault, frames [1, 2]
3. 3 → Fault, frames [1, 2, 3]
4. 4 → Fault, replace 3 (used farthest in the future) → [1, 2, 4]
5. 2 → Hit
6. 1 → Hit
7. 5 → Fault, replace 4 (never used again) → [1, 2, 5]
8. 6 → Fault, replace 5 (never used again) → [1, 2, 6]
9. 2 → Hit
10. 1 → Hit
11. 3 → Fault, replace any page (none is used again) → [2, 1, 3]

Total page faults (OPT): 7
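These counts can be checked with a small simulator (a hedged Python sketch; `fifo_faults` and `opt_faults` are hypothetical helper names, not a standard API):

```python
def fifo_faults(refs, nframes):
    frames, order, faults = set(), [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.remove(order.pop(0))   # evict the oldest resident page
            frames.add(p)
            order.append(p)
    return faults

def opt_faults(refs, nframes):
    frames, faults = set(), 0
    for i, p in enumerate(refs):
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                def next_use(q):              # position of q's next reference
                    try:
                        return refs.index(q, i + 1)
                    except ValueError:
                        return float("inf")   # never referenced again
                frames.remove(max(frames, key=next_use))
            frames.add(p)
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 3]
print(fifo_faults(refs, 3), opt_faults(refs, 3))  # 10 7
```

OPT needs the whole future reference string, so it is not implementable in a real OS; it serves as a lower bound to judge practical algorithms against.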

Q5) Attempt any one of the following: [1 × 3 = 3]

a) Describe the term distributed operating system. State its advantages and disadvantages.

A Distributed Operating System (DOS) is an operating system that manages a collection of


independent computers and makes them appear as a single unified system to the users. The
main goal of a distributed OS is to provide transparency, meaning that users do not need to
know where resources are located or how they are being managed.

In a distributed system, resources such as processors, memory, and I/O devices are distributed
across multiple machines that communicate over a network. The operating system provides
mechanisms to enable the sharing of resources, task coordination, and communication
between these machines.

Advantages of Distributed Operating System:

1. Resource Sharing: Distributed systems enable resources (like printers, files, and memory) to
be shared across all machines in the network, improving efficiency.
2. Scalability: Additional machines can be added to the system without significant changes to
the system. This allows the system to scale easily to handle more users or larger tasks.
3. Fault Tolerance: Since the system relies on multiple machines, the failure of one machine
does not result in the failure of the entire system. Tasks can be redistributed to other
machines.
4. Load Balancing: Tasks can be dynamically assigned to different machines in the network to
balance the load and ensure efficient utilization of resources.
5. Transparency: The distributed system can provide transparency to the user, meaning users
do not need to know the physical location of resources or how they are being managed.

Disadvantages of Distributed Operating System:

1. Complexity: Managing a distributed system is more complex than a single machine system,
as it involves dealing with network communication, synchronization, and consistency issues.
2. Security: Since resources are distributed over a network, there are more potential security
risks, and maintaining consistent security across multiple machines can be challenging.
3. Network Dependency: The performance of the system is dependent on the network, and
network failures can lead to communication breakdowns and reduced system performance.
4. Data Consistency: Keeping data consistent across multiple machines can be a challenge.
Special mechanisms are required to ensure that updates to shared resources are
synchronized properly.
5. Overhead: There is added overhead due to the need for communication between machines
and synchronization between processes, which can affect the system's overall performance.

b) With the help of a diagram, describe swapping.

Swapping is a memory management technique in which processes are temporarily moved


between the main memory (RAM) and secondary storage (such as a disk). The operating
system swaps out a process that is not currently being executed to make room for a process
that needs to be executed.

Swapping is particularly useful in systems with limited physical memory, as it allows


processes to be executed in parts. When a process is swapped out of memory, it is placed in a
special area of secondary storage (often called a swap space or swap file) and can be swapped
back into memory when needed.

Steps in Swapping:

1. When memory is full, the OS selects a process that is in memory but not actively being used
(also known as a "victim").
2. The selected process is swapped out to the disk (secondary storage).
3. The process that needs to be executed is swapped into the free space in memory.
4. When the swapped-out process needs to resume, it is swapped back into memory, replacing
a process that is no longer needed or active.

Diagram of Swapping:

        Main Memory (RAM)              Disk (Swap Space)
     +----------------------+       +------------------------+
     | Process A (active)   |       | Process B (swapped out)|
     +----------------------+ <-->  +------------------------+
     | Process C (active)   |
     +----------------------+

In the diagram:

• Processes A and C are in memory and actively executing.


• Process B is swapped out and stored in the disk swap space.
• When required, process B can be swapped back into memory and another process can be
swapped out if necessary.

Advantages of Swapping:

• Efficient Memory Use: Allows the operating system to run larger programs than can fit in
memory by swapping out inactive programs.
• Multitasking: Helps in running multiple programs by keeping active processes in memory
and swapping out inactive ones.

Disadvantages of Swapping:

• Overhead: Swapping processes in and out of memory introduces overhead, especially if


swapping happens frequently.
• Disk I/O: The performance of the system may degrade if the system has to swap processes
too often, as disk access is much slower than memory access.


Q1) Attempt any Eight of the following: [8 × 1 = 8]

a) What is a shell?

• A shell is a user interface that allows users to interact with the operating system by
typing commands. It acts as a command-line interface (CLI) or a graphical user
interface (GUI) that interprets and executes commands, running programs and scripts.

b) Define the I/O Bound process.

• An I/O Bound process is one that spends more time waiting for input/output
operations (such as disk reads/writes, or network communication) than performing
actual computation. These processes are limited by the speed of I/O devices.

c) Define the term semaphore.

• A semaphore is a synchronization primitive used to control access to a shared


resource in concurrent programming. It is used to prevent race conditions by signaling
and controlling the number of processes that can access a particular resource.

d) What is a thread library?

• A thread library is a collection of functions or routines that provide the necessary


mechanisms for creating, managing, and controlling threads in a multithreaded
application. It includes features such as creating threads, managing thread
synchronization, and scheduling.

e) What is synchronization?

• Synchronization refers to the coordination of concurrent processes or threads to


ensure that shared resources are accessed in a safe manner. It prevents issues such as
race conditions and ensures that data is consistently managed when multiple threads
or processes are accessing the same resources.

f) What is physical address space?

• Physical address space refers to the range of memory addresses that are available to
the hardware of a computer system, specifically the RAM. It represents the actual
memory locations that the processor can access directly.

g) What is context switching?

• Context switching is the process of saving the state of a currently running process or
thread (its context) so that it can be paused, and the state of another process or thread
is loaded to resume its execution. This is necessary in multitasking operating systems
to switch between different processes or threads.

h) What is a page?

• A page is a fixed-length block of virtual memory in a system that uses paging for
memory management. The memory is divided into equal-sized pages, which are
mapped to physical memory blocks (frames), allowing efficient memory allocation
and management.

i) Define the term dispatcher?

• A dispatcher is a part of the operating system responsible for selecting the next
process to execute and allocating CPU time to it. It handles the transition of a process
from the ready state to the running state.

j) What is booting?

• Booting is the process of starting or restarting a computer system. It involves loading


the operating system into memory and preparing the system for use. The process
typically starts with the BIOS/UEFI performing hardware checks and then
transferring control to the bootloader, which loads the operating system.

Q2) Attempt Any Four of the following: [4 × 2 = 8]

a) Write advantages of distributed operating systems.

• Advantages of distributed operating systems include:


1. Resource sharing: Users can access resources across multiple machines in the
network, improving resource utilization.
2. Fault tolerance: The system can continue functioning even if one or more
components fail, as other parts of the system can take over.
3. Scalability: Distributed systems can scale easily by adding more machines or
resources to the network.
4. Improved performance: Tasks can be distributed across multiple processors,
leading to better overall system performance.
5. Load balancing: The workload can be balanced across multiple nodes to
prevent overload on any single node.

b) Compare preemptive and non-preemptive scheduling.

Feature-by-feature comparison:

• Definition: In preemptive scheduling, the CPU can be taken away from a running
process; in non-preemptive scheduling, once a process starts running, it cannot be
interrupted.
• Interruptions: Preemptive scheduling allows processes to be interrupted and resumed;
non-preemptive scheduling allows no interruption until the process finishes.
• Context Switching: Preemptive scheduling causes more frequent context switches;
non-preemptive scheduling causes fewer, as processes run to completion.
• Fairness: Preemptive scheduling provides fairer CPU allocation in multi-user systems;
non-preemptive scheduling may lead to starvation if longer processes dominate.
• Complexity: Preemptive scheduling is more complex, as preemptions must be handled;
non-preemptive scheduling is simpler to implement.
• Examples: Preemptive: Round Robin, preemptive Shortest Job First. Non-preemptive:
First-Come-First-Served (FCFS), non-preemptive Shortest Job First.

c) List out functions of memory management.

• Functions of memory management include:


1. Memory allocation: Assigning memory blocks to processes or programs.
2. Memory deallocation: Releasing memory when it is no longer needed by
processes.
3. Virtual memory management: Extending physical memory using disk space.
4. Memory protection: Preventing processes from accessing memory regions
that they are not authorized to access.
5. Memory sharing: Allowing multiple processes to access the same memory
locations when required.
6. Memory swapping: Moving processes between main memory and disk
storage to optimize memory usage.

d) Types of Schedulers and Explanation of Short-Term Scheduler

Types of Schedulers:

1. Long-Term Scheduler (Job Scheduler):


o Responsible for selecting processes from the job pool (which is stored in the disk)
and loading them into the ready queue. It controls the degree of multiprogramming,
deciding how many processes can be in memory at one time. The frequency of job
scheduling is lower compared to short-term scheduling.
o The long-term scheduler determines the degree of multiprogramming by controlling
the number of processes in the ready queue.
2. Short-Term Scheduler (CPU Scheduler):
o The short-term scheduler selects from the ready queue which process should
execute next. It determines which of the ready processes gets to use the CPU,
switching between processes frequently. It is invoked at a very high frequency and is
responsible for making the CPU allocation decisions for each process.
o The short-term scheduler uses scheduling algorithms (e.g., Round Robin, Shortest
Job First, etc.) to manage the processes in the ready queue.
o Detailed Explanation:
▪ The short-term scheduler’s primary responsibility is to decide which process
in the ready queue will be allocated CPU time.
▪ It uses algorithms like Round Robin (RR), First-Come-First-Serve (FCFS),
Shortest Job First (SJF), and others to determine the next process.
▪ The short-term scheduler is invoked frequently because it manages
processes on a millisecond time scale, deciding which process to run next on
the CPU.
▪ Once a process is selected by the short-term scheduler, the CPU time is
assigned to that process, and it will be executed until either it terminates or
is preempted (depending on the scheduling algorithm in use).
3. Medium-Term Scheduler (Swapper):
o Responsible for swapping processes in and out of the main memory. It can remove a
process from memory (swap out) to the disk and later bring it back into memory
(swap in). This happens based on the system's memory management requirements.
o The medium-term scheduler controls the degree of multiprogramming dynamically.

e) Independent and Dependent Processes

1. Independent Processes:
o An independent process is one that does not require any interaction with other
processes. It performs its task in isolation and does not depend on any other process
for data or coordination.
o These processes can execute independently, meaning their execution does not
affect others, and they do not share resources unless specifically designed to do so.
o Example: A process that computes mathematical operations without needing input
or output from other processes.
2. Dependent Processes:
o A dependent process is one that relies on the resources or data produced by other
processes. These processes are often interdependent and might require
synchronization to ensure the correct sequence of execution.
o Examples include processes that share memory or need to communicate through
inter-process communication mechanisms like pipes, sockets, or message queues.
o These processes are often involved in scenarios such as producer-consumer
problems or client-server architectures.
Q3) Attempt Any Two of the Following:

a) Explain Multi-Threading Model in Detail

• Multi-threading Model:
o The multi-threading model refers to the design and implementation of multi-
threaded programs, where a single process can have multiple threads of execution
running concurrently.
o A thread is the smallest unit of a CPU's execution. In a multi-threaded application,
several threads may run in parallel or be interleaved on a single processor.

Types of Multi-threading Models:

1. Many-to-One Model:
▪ Multiple user-level threads are mapped to a single kernel thread.
▪ The operating system kernel is unaware of user-level threads, which are
managed by a user-level thread library.
▪ This model is simple and has low overhead but suffers from the limitation
that if one thread performs a blocking operation, all threads in the process
are blocked.
2. One-to-One Model:
▪ Each user-level thread maps to a kernel thread.
▪ This model allows for better performance and efficiency since the kernel is
aware of the threads.
▪ It can take full advantage of multi-core processors. However, it has higher
overhead, as the kernel must manage each thread separately.
3. Many-to-Many Model:
▪ A number of user-level threads are multiplexed onto a smaller or equal
number of kernel threads.
▪ This model allows the flexibility of user-level thread management
while still utilizing kernel threads effectively.
▪ This model can potentially reduce the overhead seen in the One-to-One
model.
o Benefits of Multi-threading:
▪ It allows for concurrent execution, making better use of CPU resources.
▪ Improves the responsiveness of applications, especially in interactive
systems.
▪ Threading can lead to faster execution times in applications that are CPU-
bound.
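In Python, the `threading` module creates kernel-backed threads (a one-to-one mapping on most platforms). A minimal sketch of concurrent execution within one process (the helper `partial_sum` and the slice counts are made up for the example):

```python
import threading

def partial_sum(nums, out, idx):
    out[idx] = sum(nums)          # each thread fills its own result slot

data = list(range(100))
chunks = [data[i::4] for i in range(4)]   # four interleaved slices
results = [0] * 4
threads = [threading.Thread(target=partial_sum, args=(chunks[i], results, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(results))  # 4950, the same as sum(range(100))
```

All four threads share the process's address space (`data` and `results`), which is exactly what distinguishes threads from separate processes.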

b) Three Requirements for Designing a Solution to the Critical Section Problem

• The critical section problem refers to the challenge of ensuring that multiple processes
or threads access shared resources in a way that prevents conflicts and ensures data
consistency.

The three key requirements that any solution to the critical section problem must
satisfy are:
1. Mutual Exclusion:
▪ Only one process or thread can be in the critical section at any given time.
This ensures that no two processes are simultaneously modifying shared
resources, which could lead to inconsistent results.
▪ This requirement prevents race conditions by ensuring that only one process
has access to shared data at a time.
2. Progress:
▪ If no process is currently in the critical section and some processes are
requesting to enter the critical section, the selection of the process to enter
the critical section should not be postponed indefinitely.
▪ This requirement ensures that the system remains responsive and doesn't
enter a state of deadlock, where no process can proceed.
3. Bounded Waiting:
▪ There must be a bound on the number of times that other processes can
enter the critical section before the waiting process is allowed to enter.
▪ This prevents starvation, where a process might never be allowed to enter
the critical section because other processes continuously preempt it.

Solutions like Peterson’s algorithm, semaphores, and mutexes are used to implement
these three conditions and manage access to critical sections in a process.
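Peterson's algorithm for two threads can be sketched directly. The `flag` array and `turn` variable below follow the classic algorithm; the demo assumes CPython, whose interpreter provides the sequentially consistent memory behavior the algorithm requires (on other runtimes, atomics or fences would be needed):

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so busy-waits stay short

flag = [False, False]  # flag[i]: thread i wants to enter its critical section
turn = 0               # which thread yields when both want to enter
count = 0              # shared data protected by the algorithm
N = 2000

def worker(i):
    global turn, count
    other = 1 - i
    for _ in range(N):
        flag[i] = True          # entry section: announce intent
        turn = other            # politely let the other thread go first
        while flag[other] and turn == other:
            pass                # busy-wait until it is safe to enter
        count += 1              # critical section: non-atomic read-modify-write
        flag[i] = False         # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)
```

Setting `turn = other` before waiting is what provides bounded waiting: a thread can be overtaken at most once before it enters.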

c) Preemptive SJF Scheduling Example

Given the set of processes with their Burst Time (B.T) and Arrival Time (A.T):

Process Burst Time (B.T) Arrival Time (A.T)

P1 5 1.5

P2 1 0

P3 2 2

P4 4 3

Preemptive SJF Scheduling:

In the preemptive Shortest Job First (SJF) scheduling algorithm, the CPU will always pick
the process with the shortest remaining burst time when a new process arrives. If two
processes have the same remaining burst time, the one that arrived earlier is selected.

Step-by-step calculation:

• At time 0: P2 arrives (burst 1) and starts executing.
• At time 1: P2 finishes. No process is ready, so the CPU is idle until 1.5.
• At time 1.5: P1 arrives (burst 5) and starts executing.
• At time 2: P3 arrives with burst 2, shorter than P1's remaining 4.5, so P1 is
preempted and P3 runs.
• At time 3: P4 arrives with burst 4, longer than P3's remaining 1, so P3 continues.
• At time 4: P3 finishes. Ready: P1 (remaining 4.5) and P4 (4). P4 runs.
• At time 8: P4 finishes. P1 resumes with its remaining 4.5.
• At time 12.5: P1 finishes.

Turnaround Time (Completion Time - Arrival Time) for each process:

• P2: 1 - 0 = 1
• P3: 4 - 2 = 2
• P4: 8 - 3 = 5
• P1: 12.5 - 1.5 = 11

Waiting Time (Turnaround Time - Burst Time) for each process:

• P2: 1 - 1 = 0
• P3: 2 - 2 = 0
• P4: 5 - 4 = 1
• P1: 11 - 5 = 6

Average waiting time = (0 + 0 + 1 + 6) / 4 = 1.75
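The preemptive SJF schedule can be reproduced with a small shortest-remaining-time-first simulator (an illustrative Python sketch; `srtf` is a made-up helper, and a 0.5-time-unit step is assumed to handle the fractional arrival time):

```python
def srtf(procs, dt=0.5):
    """Preemptive SJF (shortest remaining time first) simulated in
    dt-sized time slices; procs maps name -> (burst, arrival)."""
    remaining = {n: float(b) for n, (b, a) in procs.items()}
    finish, t = {}, 0.0
    while remaining:
        ready = [n for n in remaining if procs[n][1] <= t]
        if not ready:
            t += dt                      # CPU idle until the next arrival
            continue
        cur = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[cur] -= dt
        t += dt
        if remaining[cur] <= 1e-9:       # process finished
            del remaining[cur]
            finish[cur] = t
    return finish

procs = {"P1": (5, 1.5), "P2": (1, 0), "P3": (2, 2), "P4": (4, 3)}
finish = srtf(procs)
for name in sorted(procs):
    burst, arrival = procs[name]
    tat = finish[name] - arrival
    print(name, "completes at", finish[name], "turnaround", tat, "waiting", tat - burst)
```

Running this yields completion times P2 = 1.0, P3 = 4.0, P4 = 8.0, P1 = 12.5, matching the hand calculation above.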

Q4) Attempt Any Two of the following: [2 × 4 = 8]

a) Describe PCB with all its fields.

Process Control Block (PCB) is a data structure used by the operating system to store
information about a process. It is crucial for process management, as it contains all the
necessary data to manage the execution of a process.

Fields in a Process Control Block:

1. Process State: The current state of the process (e.g., running, waiting, ready, terminated).
2. Process ID (PID): A unique identifier for the process.
3. Program Counter (PC): The address of the next instruction to be executed.
4. CPU Registers: A set of registers (e.g., general-purpose, special-purpose) that hold the
process’s execution context.
5. Memory Management Information: Information about memory allocated to the process,
such as base and limit registers, page tables, etc.
6. Scheduling Information: Includes priority, scheduling queues, and other information related
to the process's scheduling.
7. Accounting Information: Information about the CPU time used, number of executed
instructions, or time limits.
8. I/O Status Information: The status of the process’s input/output operations, such as a list of
open files and devices used.
9. Inter-process Communication Information: Data related to communication between
processes, such as message queues, semaphores, etc.
The PCB is maintained in the OS during the process’s execution and is used during process
switching and context switching.

b) Explain bounded buffer problem in detail.

The Bounded Buffer Problem (also known as the Producer-Consumer problem) is a classic
synchronization problem where there are two types of processes: producers and consumers,
and they share a common buffer of fixed size. The goal is to ensure that producers do not
overwrite the buffer before the consumer reads the data, and the consumer does not read data
when the buffer is empty.

Problem Setup:

• Producer: The producer is responsible for placing items into the buffer. It can only place
items when there is space in the buffer.
• Consumer: The consumer is responsible for removing items from the buffer. It can only
consume items when the buffer is not empty.
• Buffer: The buffer has a finite size (usually represented as an array or queue), meaning that
it can only hold a limited number of items at a time.

Key Challenges:

1. Buffer Overflow: If the producer tries to add an item to the buffer when it's full, it must wait
until space becomes available.
2. Buffer Underflow: If the consumer tries to consume an item when the buffer is empty, it
must wait for the producer to add items.

Solution Using Semaphores: Semaphores or mutexes are often used to manage


synchronization between the producer and the consumer:

• A mutex (binary semaphore) is used to provide mutual exclusion to ensure that only one
process (either producer or consumer) accesses the buffer at a time.
• A full semaphore tracks the number of items in the buffer (initialized to 0).
• An empty semaphore tracks the remaining space in the buffer (initialized to the buffer size).

The producer and consumer use the semaphores to:

• The producer waits on the empty semaphore before adding an item and signals the full
semaphore.
• The consumer waits on the full semaphore before consuming an item and signals the empty
semaphore.

This ensures that no data is lost, and both processes are synchronized without any race
conditions.
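The semaphore scheme described above can be sketched in Python with `threading.Semaphore` (the buffer capacity of 5 and the 20 items are arbitrary choices for the demo):

```python
import threading
from collections import deque

CAPACITY = 5
buffer = deque()
mutex = threading.Lock()                 # mutual exclusion on the buffer
empty = threading.Semaphore(CAPACITY)    # counts free slots, starts at buffer size
full = threading.Semaphore(0)            # counts filled slots, starts at 0
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                  # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()                   # announce one more filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()                   # wait for a filled slot
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                  # announce one more free slot

items = list(range(20))
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items),))
p.start(); c.start()
p.join(); c.join()
print(consumed == items)  # True: nothing lost, order preserved
```

The producer blocks on `empty` when the buffer is full and the consumer blocks on `full` when it is empty, which is exactly the overflow/underflow protection described above.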
c) Page Faults Using OPT and FIFO (No. of Frames = 3)

Reference String: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3

We are asked to calculate the number of page faults for two page replacement algorithms:
OPT (Optimal) and FIFO (First-In, First-Out) with 3 frames.

OPT (Optimal Page Replacement Algorithm):

The OPT algorithm replaces the page that will not be used for the longest period in the future.

• Step-by-Step Analysis (OPT):

1. Reference 1 → Page Fault (Load 1) [1]
2. Reference 2 → Page Fault (Load 2) [1, 2]
3. Reference 3 → Page Fault (Load 3) [1, 2, 3]
4. Reference 4 → Page Fault (Replace 3, used farthest in future) [1, 2, 4]
5. Reference 2 → No Page Fault
6. Reference 1 → No Page Fault
7. Reference 5 → Page Fault (Replace 4, never used again) [1, 2, 5]
8. Reference 6 → Page Fault (Replace 5, never used again) [1, 2, 6]
9. Reference 2 → No Page Fault
10. Reference 1 → No Page Fault
11. Reference 2 → No Page Fault
12. Reference 3 → Page Fault (Replace 1, used farthest in future) [2, 6, 3]
13. Reference 7 → Page Fault (Replace 2, used farthest in future) [6, 3, 7]
14. Reference 6 → No Page Fault
15. Reference 3 → No Page Fault
16. Reference 2 → Page Fault (Replace 6, never used again) [3, 7, 2]
17. Reference 1 → Page Fault (Replace 7, never used again) [3, 2, 1]
18. Reference 2 → No Page Fault
19. Reference 3 → No Page Fault
• Total Page Faults (OPT): 10

FIFO (First-In, First-Out) Algorithm:

The FIFO algorithm replaces the page that has been in memory the longest.

• Step-by-Step Analysis (FIFO):

1. Reference 1 → Page Fault (Load 1) [1]
2. Reference 2 → Page Fault (Load 2) [1, 2]
3. Reference 3 → Page Fault (Load 3) [1, 2, 3]
4. Reference 4 → Page Fault (Replace 1, oldest) [2, 3, 4]
5. Reference 2 → No Page Fault
6. Reference 1 → Page Fault (Replace 2, oldest) [3, 4, 1]
7. Reference 5 → Page Fault (Replace 3) [4, 1, 5]
8. Reference 6 → Page Fault (Replace 4) [1, 5, 6]
9. Reference 2 → Page Fault (Replace 1) [5, 6, 2]
10. Reference 1 → Page Fault (Replace 5) [6, 2, 1]
11. Reference 2 → No Page Fault
12. Reference 3 → Page Fault (Replace 6) [2, 1, 3]
13. Reference 7 → Page Fault (Replace 2) [1, 3, 7]
14. Reference 6 → Page Fault (Replace 1) [3, 7, 6]
15. Reference 3 → No Page Fault
16. Reference 2 → Page Fault (Replace 3) [7, 6, 2]
17. Reference 1 → Page Fault (Replace 7) [6, 2, 1]
18. Reference 2 → No Page Fault
19. Reference 3 → Page Fault (Replace 6) [2, 1, 3]
• Total Page Faults (FIFO): 15

Q5) Attempt Any One of the following: [1 × 3 = 3]

a) Differentiate between client-server and peer-to-peer computing environments.


Aspect-by-aspect comparison:

• Architecture: Client-server is centralized (the server provides resources; clients
request them); peer-to-peer is decentralized (peers both provide and consume resources).
• Communication: In client-server, clients send requests to the server and the server
responds; in peer-to-peer, peers communicate directly with each other without a central
server.
• Resource Management: In client-server, the server manages resources and enforces
policies; in peer-to-peer, each peer manages its own resources and may share them.
• Scalability: In client-server, the server may become a bottleneck as the number of
clients grows; peer-to-peer scales well because each peer contributes resources.
• Examples: Client-server: web services, email servers, databases. Peer-to-peer:
file-sharing networks, instant messaging systems.

b) Describe segmentation in detail.

Segmentation is a memory management scheme that divides a program into segments based
on logical divisions such as functions, arrays, and data structures. Unlike paging, which
divides memory into fixed-size blocks, segmentation divides memory into variable-sized
segments, providing more flexibility.

Key Aspects of Segmentation:

1. Segments: Each program is divided into multiple segments such as:


o Code Segment (contains executable instructions)
o Data Segment (contains global variables)
o Stack Segment (used for function calls and local variables)
o Heap Segment (dynamically allocated memory)
2. Logical Addressing: In segmentation, the address of an instruction or data is specified as a
combination of a segment number and an offset. For each segment, the segment table stores a
base address and a limit; the physical address is obtained by adding the offset to the base,
after checking that the offset does not exceed the limit.

a) Define bootstrapping.

Bootstrapping is the process of starting up a computer system, typically referring to the initial
loading of the operating system into memory when the computer is turned on or rebooted. It
involves loading a small program (the bootstrap loader) that then loads the larger operating
system into the system's memory.

b) Explain POSIX pthread.

POSIX (Portable Operating System Interface) pthread refers to a set of APIs (Application
Programming Interfaces) defined by the IEEE standard for thread management. It provides a
standard interface for creating and managing threads in a program. The pthread library
allows for multithreading in C/C++ programs, where threads are used to execute tasks
concurrently, improving performance, especially on multi-core systems.

c) What is the role of a dispatcher?

The dispatcher is part of the operating system's scheduler and is responsible for managing the
execution of processes or threads. When the scheduler selects a process from the ready queue
to run, the dispatcher takes over to load the process into the CPU, switching context if
necessary, and ensuring that the process is executed. The dispatcher manages context
switching and ensures that control is passed between processes efficiently.

d) List the solutions to the critical section problem.

The critical section problem involves managing access to shared resources in concurrent
programming to avoid conflicts. Solutions include:

1. Lock-based solutions: Using mutexes or semaphores to ensure mutual exclusion.


2. Software-based solutions: Peterson’s algorithm, Lamport’s Bakery algorithm.
3. Hardware-based solutions: Using atomic instructions for synchronization.
4. Monitors: High-level synchronization primitives that encapsulate shared data and
operations.

e) What do you mean by page hit?

A page hit occurs when a requested page is found in the system's main memory (RAM),
rather than having to be fetched from secondary storage (like a disk). It implies that the
requested data is already in memory and can be accessed quickly.
f) What is kernel?

The kernel is the core part of an operating system. It is responsible for managing system
resources such as memory, CPU, and peripheral devices. It provides an interface between
hardware and software, ensuring that processes, files, and devices are managed properly and
efficiently.

g) What is ready queue?

The ready queue is a data structure used by the operating system to store processes that are
ready to execute but are waiting for CPU time. These processes are in a state where they are
fully prepared to run, but the CPU is currently occupied with other tasks. The ready queue is
managed by the process scheduler.

h) What do you mean by I/O bound process?

An I/O bound process is a process that spends more time performing input/output operations
(such as reading from or writing to disk, or interacting with peripheral devices) than
executing computations. These processes are typically limited by the speed of the I/O devices
rather than the speed of the CPU.

i) What are the two types of semaphores?

The two types of semaphores are:

1. Binary Semaphore (or Mutex): This semaphore can only take two values, 0 or 1,
and is used for mutual exclusion to ensure that only one process can access a shared
resource at a time.
2. Counting Semaphore: This semaphore can take any non-negative integer value and
is used to manage access to a finite number of identical resources, allowing multiple
processes to access them concurrently up to a certain limit.
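A counting semaphore in action (an illustrative Python sketch using `threading.Semaphore`; the pool size of 3 and the 10 threads are arbitrary choices):

```python
import threading
import time

pool = threading.Semaphore(3)   # at most 3 threads may hold the resource at once
lock = threading.Lock()
active = 0                      # threads currently holding the resource
max_active = 0                  # high-water mark, for checking the bound

def use_resource():
    global active, max_active
    with pool:                  # acquire one of the 3 "resource" slots
        with lock:
            active += 1
            max_active = max(max_active, active)
        time.sleep(0.01)        # simulate work while holding the resource
        with lock:
            active -= 1

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_active <= 3)  # True: the semaphore caps concurrency at 3
```

With an initial value of 1 the same object behaves as a binary semaphore, giving plain mutual exclusion.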

j) What is virtual memory?

Virtual memory is a memory management technique used by modern operating systems to


extend the available memory beyond the physical RAM by using disk space. It allows a
computer to compensate for physical memory shortages, providing the illusion to processes
that they have access to a large, contiguous block of memory, even if the actual physical
memory is fragmented or limited.

a) What is a system call? Explain system call related to device manipulation.


A system call is a mechanism that allows user-level applications to request services or
resources from the operating system's kernel. It acts as an interface between user applications
and the operating system, enabling tasks such as file management, process control, and
device manipulation.

System call related to device manipulation involves operations that control or interact with
hardware devices (like disk drives, printers, network interfaces, etc.). Some common system
calls related to device manipulation include:

• open(): Used to open a device file.


• read() and write(): Used to read from and write to device files, respectively.
• ioctl(): Used to control device-specific operations.
• close(): Used to close a device after completing the operations.

These system calls allow processes to access and manipulate hardware devices in a controlled
and secure manner.

b) Write a short note on multilevel queue scheduling.

Multilevel Queue Scheduling is a CPU scheduling algorithm in which processes are divided
into different priority queues based on their characteristics. Each queue has its own
scheduling algorithm, and the queues are often prioritized.

The basic idea is to categorize processes into different levels based on factors like priority,
process type, or memory requirements, and then apply different scheduling policies to each
queue. For example:

• High-priority queue: Might use Round Robin (RR) scheduling for interactive
processes.
• Low-priority queue: Might use First Come, First Serve (FCFS) or Shortest Job First
(SJF) for batch jobs.

A process is assigned to a queue based on its type, and once it enters a queue, it is scheduled
according to the algorithm defined for that queue. Processes can sometimes move between
queues based on their behavior, e.g., aging to prevent starvation in lower-priority queues.
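The two-queue idea can be sketched as follows (an illustrative Python model, not a real scheduler; the `multilevel` helper and the quantum of 2 are assumptions for the example):

```python
from collections import deque

def multilevel(high, low, quantum=2):
    """Serve the high-priority queue round-robin; the low-priority
    queue runs FCFS and only when the high queue is empty."""
    high, low = deque(high), deque(low)
    order = []                         # completion order of processes
    while high or low:
        if high:
            name, burst = high.popleft()
            if burst > quantum:        # quantum expired: requeue the remainder
                high.append((name, burst - quantum))
            else:
                order.append(name)     # finished within its quantum
        else:
            name, _burst = low.popleft()
            order.append(name)         # FCFS batch job runs to completion
    return order

# interactive jobs A (burst 3) and B (burst 2); batch job C (burst 4)
print(multilevel([("A", 3), ("B", 2)], [("C", 4)]))  # ['B', 'A', 'C']
```

Because the low queue is only served when the high queue is empty, batch job C waits for both interactive jobs; real systems add aging to keep such jobs from starving.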

c) Explain the producer-consumer problem.

The Producer-Consumer Problem is a classic synchronization problem that involves two processes: the producer and the consumer.

• Producer: This process generates data or resources and puts them into a shared buffer
or queue.
• Consumer: This process takes data or resources from the shared buffer and consumes
them.
The main challenge in this problem is ensuring that the producer does not try to add data to a
full buffer and the consumer does not try to consume from an empty buffer. Synchronization
mechanisms like semaphores or mutexes are typically used to solve this issue.

A common solution is to use a buffer with a defined size and two operations:

• Produce: Adds an item to the buffer.


• Consume: Removes an item from the buffer.

Proper synchronization is essential to prevent race conditions, such as simultaneous access to the buffer by both processes.
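A bounded-buffer sketch using the semaphore approach described above (buffer size and item count are illustrative): `empty` counts free slots, `full` counts filled slots, and a mutex protects the buffer itself.

```python
import threading
from collections import deque

BUF_SIZE, N_ITEMS = 3, 10
buffer = deque()
empty = threading.Semaphore(BUF_SIZE)   # free slots remaining
full = threading.Semaphore(0)           # filled slots available
mutex = threading.Lock()                # protects the buffer
consumed = []

def producer():
    for i in range(N_ITEMS):
        empty.acquire()                 # block if the buffer is full
        with mutex:
            buffer.append(i)            # Produce: add an item
        full.release()

def consumer():
    for _ in range(N_ITEMS):
        full.acquire()                  # block if the buffer is empty
        with mutex:
            consumed.append(buffer.popleft())  # Consume: remove an item
        empty.release()

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)  # FIFO order: [0, 1, ..., 9]
```

Because `empty` starts at the buffer size and `full` at zero, the producer can never overfill the buffer and the consumer can never read an empty one.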

d) Explain paging in brief.

Paging is a memory management scheme that eliminates the need for contiguous allocation
of physical memory. In paging, both physical and logical memory are divided into fixed-size
blocks, called pages (in logical memory) and frames (in physical memory).

The main idea of paging is to divide a program into small, manageable pieces (pages) and
load these pages into available memory frames. This allows non-contiguous allocation of
physical memory and helps to reduce fragmentation.

Key points:

• Page Table: Each process has a page table, which keeps track of where its pages are
stored in physical memory.
• Logical Address: A logical address is divided into a page number and a page offset.
• Physical Address: A physical address is divided into a frame number and a frame
offset.

Paging helps with memory efficiency and simplifies memory management in modern
operating systems.
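The page-number/offset split can be shown concretely. A minimal sketch, assuming 1 KB pages and an example page table (the page-to-frame mapping below is invented for illustration):

```python
PAGE_SIZE = 1024
page_table = {0: 5, 1: 2, 2: 7}         # page number -> frame number (assumed)

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE     # page number (high-order bits)
    offset = logical_addr % PAGE_SIZE    # page offset (low-order bits)
    frame = page_table[page]             # page-table lookup
    return frame * PAGE_SIZE + offset    # frame base + same offset

# Logical address 2060 = page 2, offset 12 -> frame 7, offset 12.
print(translate(2060))  # 7 * 1024 + 12 = 7180
```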

e) Write the difference between preemptive and non-preemptive scheduling.

Preemptive Scheduling:

• A running process can be interrupted and moved back to the ready queue if a higher-priority process arrives or the running process exceeds its time slice.
• Commonly used algorithms: Round Robin, Shortest Remaining Time First.
• More responsive to real-time requirements; suitable for time-sharing systems.
• Can lead to higher overhead due to frequent context switches.
• Examples: multitasking systems, real-time systems.

Non-Preemptive Scheduling:

• Once a process starts executing, it cannot be interrupted until it finishes or voluntarily yields control.
• Commonly used algorithms: First Come First Serve (FCFS), Shortest Job First (SJF).
• Simpler, but can lead to process starvation and less system responsiveness.
• Fewer context switches, lower overhead.
• Examples: batch systems, systems where processes don't need frequent interruption.

Preemptive scheduling ensures that critical processes get CPU time promptly, whereas non-
preemptive scheduling relies on the processes to yield control when appropriate.
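The difference is easy to see on a small workload. A sketch (burst times assumed for illustration) comparing non-preemptive FCFS waiting times with preemptive Round Robin (quantum 2) completion times for three jobs that all arrive at t = 0:

```python
from collections import deque

jobs = [("A", 5), ("B", 2), ("C", 3)]   # (name, burst), all arriving at t=0

def fcfs_waiting(jobs):
    t, waits = 0, {}
    for name, burst in jobs:
        waits[name] = t          # waits until all earlier jobs finish
        t += burst
    return waits

def rr_completion(jobs, quantum=2):
    q, t, done = deque(jobs), 0, {}
    while q:
        name, rem = q.popleft()
        run = min(quantum, rem)
        t += run
        if rem > run:
            q.append((name, rem - run))   # preempted, re-queued
        else:
            done[name] = t
    return done

print(fcfs_waiting(jobs))   # {'A': 0, 'B': 5, 'C': 7}
print(rr_completion(jobs))  # {'A': 10, 'B': 4, 'C': 9}
```

Under FCFS the short job B waits behind the long job A; under Round Robin B finishes at t = 4, showing the responsiveness gain of preemption at the cost of extra context switches.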

(a) What is a thread? Explain any 2 multithreading models in brief with diagrams.

Thread:

A thread is the smallest unit of execution within a process. A thread shares the same memory
space within a process and can execute independently. Threads allow a process to perform
multiple tasks simultaneously or in parallel, improving the overall performance of an
application.

Multithreading Models:

1. Many-to-One Model: In the Many-to-One model, multiple user-level threads are
mapped to a single kernel thread. This means that all the threads in a process share the
same thread of execution in the kernel. The kernel is unaware of the existence of
multiple threads.
o Advantages: Easier to implement; low overhead.
o Disadvantages: If one thread performs a blocking system call, the entire process
gets blocked, affecting the performance of other threads.

Diagram:

+-------------------+
| Process P |
| |
| +------------+ |
| | Thread 1 | |
| +------------+ |
| +------------+ |
| | Thread 2 | |
| +------------+ |
| +------------+ |
| | Thread 3 | |
| +------------+ |
+-------------------+
|
+-----v-----+
| Kernel |
| (1 Thread)|
+-----------+
2. One-to-One Model: In the One-to-One model, each user-level thread is mapped to a
kernel thread. The kernel is aware of all the threads and can schedule them
independently. This model allows true parallelism as each thread can be executed on a
separate processor.
o Advantages: Improved performance; blocking in one thread does not affect others.
o Disadvantages: More overhead due to the management of multiple threads by the
kernel.

Diagram:

+-------------------+
| Process P |
| |
| +------------+ |
| | Thread 1 | |
| +------------+ |
| +------------+ |
| | Thread 2 | |
| +------------+ |
| +------------+ |
| | Thread 3 | |
| +------------+ |
+-------------------+
| | |
+---v---v---v---+
| Kernel Threads|
| (Thread 1) |
| (Thread 2) |
| (Thread 3) |
+----------------+
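The key property shared by both models, that all threads of a process share one address space, can be shown with a small sketch. In CPython each `threading.Thread` is backed by one kernel thread (effectively a one-to-one mapping), so a lock is needed around the shared counter:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:             # protect the shared memory location
            counter += 1

# Four threads in one process, all updating the same variable.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: every thread saw the same address space
```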

(b) Write a short note on logical address and physical address binding with
diagram.

Logical Address (Virtual Address):

A logical address is an address generated by the CPU during program execution. These
addresses are used by the program and are independent of the physical memory (RAM). The
logical address is translated into a physical address by the operating system.

Physical Address:

A physical address refers to the actual location in the computer’s memory (RAM) where
data is stored. The operating system is responsible for mapping logical addresses to physical
addresses during program execution.

Binding of Logical and Physical Addresses:

The process of mapping logical addresses to physical addresses is known as address binding. This binding occurs in three stages:
1. Compile-time Binding: If the starting address is known at compile time, the compiler
generates absolute addresses, and no further translation is needed.
2. Load-time Binding: If the starting address is not known at compile time, the address is
determined during loading.
3. Execution-time Binding: If memory addresses are not determined until the program is
running, the operating system dynamically maps logical addresses to physical addresses
using a memory management unit (MMU).

Diagram:
+----------------------+
| Logical Address Space|
| (Virtual Address) |
| (e.g., 0, 1, 2...) |
+----------------------+
|
+-----v-----+
| MMU | <--- Translation via Paging/Segmentation
+-----+-----+
|
+--------v---------+
| Physical Address |
| (Actual Memory) |
| (RAM location) |
+------------------+
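Execution-time binding can be sketched with a relocation (base) register and a limit register, as an MMU performs it. The register values below are assumed for the example:

```python
BASE, LIMIT = 14000, 5000   # process loaded at 14000, logical space 5000 bytes

def mmu(logical_addr):
    """Translate a logical address the way a simple MMU would."""
    if not 0 <= logical_addr < LIMIT:   # protection (limit) check
        raise MemoryError("address out of range: trap to the OS")
    return BASE + logical_addr          # relocation: base + logical address

print(mmu(346))   # 14346
```

Because the translation happens at every memory reference, the OS can relocate the process simply by changing BASE, which is exactly why execution-time binding is the most flexible of the three stages.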

(c) Consider the following set of processes with the length of CPU burst time
and arrival time given in milliseconds. Calculate waiting time, turnaround
time per each process. Also, calculate the average waiting time and average
turnaround time using preemptive priority scheduling.

Given Data:
Process Burst Time Arrival Time Priority

P1 14 4 3

P2 52 1

P3 69 2

P4 55 3

P5 90 4

We will calculate the waiting time and turnaround time for each process under preemptive
priority scheduling. In preemptive priority scheduling, the process with the highest priority
(lowest number) is executed first, and if another process with a higher priority arrives, the
current running process is preempted and placed back into the ready queue.
Steps to Solve:

1. Sort the processes by their arrival times.


2. Start with the process with the highest priority and execute it for the shortest burst time or
until preempted by another process.
3. Calculate waiting time (Time in ready queue) and turnaround time (Total time from arrival
to completion) for each process.
4. Calculate average waiting time and average turnaround time.

Let's begin with an initial process chart, considering priority scheduling and arrival times.

Preemptive Priority Scheduling Steps:

As printed, the table gives a complete (burst time, arrival time, priority) triple only for P1; the values in the remaining rows have run together, so the priorities of P2-P5 must be assumed (lower number meaning higher priority, assigned here in order of arrival). Whatever the exact data, the method is the same: simulate the timeline tick by tick, always running the highest-priority ready process, record each process's completion time, then compute turnaround time = completion time − arrival time and waiting time = turnaround time − burst time, and finally average both across the processes.
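The simulation itself can be written generically. A sketch of a tick-by-tick preemptive-priority scheduler (lower number = higher priority), run on an assumed, illustrative process set since the table above is incomplete:

```python
def preemptive_priority(procs):
    """procs: list of (name, burst, arrival, priority). Returns completion times."""
    rem = {name: burst for name, burst, _, _ in procs}
    completion, t = {}, 0
    while rem:
        # processes that have arrived and are not finished
        ready = [p for p in procs if p[0] in rem and p[2] <= t]
        if not ready:
            t += 1                                 # CPU idle this tick
            continue
        name = min(ready, key=lambda p: p[3])[0]   # best (lowest) priority runs
        rem[name] -= 1
        t += 1
        if rem[name] == 0:
            del rem[name]
            completion[name] = t
    return completion

# Assumed example data: (name, burst, arrival, priority)
procs = [("P1", 4, 0, 2), ("P2", 3, 1, 1), ("P3", 2, 2, 3)]
completion = preemptive_priority(procs)
turnaround = {name: completion[name] - arrival for name, _, arrival, _ in procs}
waiting = {name: turnaround[name] - burst for name, burst, _, _ in procs}
print(completion)   # {'P2': 4, 'P1': 7, 'P3': 9}
print(waiting)      # {'P1': 3, 'P2': 0, 'P3': 5}
```

Here P2 (priority 1) preempts P1 at t = 1; the averages follow directly (average waiting time 8/3 ms, average turnaround time 17/3 ms for this assumed data).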


Q4) Attempt any Two of the following. [2 × 4 = 8]

a) Define process. Explain process state diagram in brief.

Process Definition: A process is a program in execution. It is an active entity with its own
memory space and resources, which is being executed by the CPU. A process can exist in
various states during its lifetime, including ready, running, waiting, etc.

Process State Diagram: A process can be in one of the following states during its execution:

1. New: The process is being created.


2. Ready: The process is ready to execute but waiting for CPU time.
3. Running: The process is currently being executed by the CPU.
4. Waiting (Blocked): The process is waiting for some event (e.g., I/O operation) to complete.
5. Terminated (Exit): The process has finished execution and is terminated.

State Transitions:

• New → Ready: Process moves from creation to being ready for execution.
• Ready → Running: The process gets CPU time and starts running.
• Running → Ready: The process is preempted (e.g., its time slice expires) and returns to the ready queue.
• Running → Waiting: The process requires waiting (e.g., for I/O).
• Waiting → Ready: Once waiting is over, the process returns to ready state.
• Running → Terminated: The process has finished its execution.

This state diagram can be visualized as a circular transition, where processes cycle through
various states based on the system’s scheduler.
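The legal transitions can be captured as a small table; a sketch that encodes the diagram above and checks whether a move between states is valid:

```python
# Each pair is a legal edge in the process state diagram.
TRANSITIONS = {
    ("New", "Ready"),
    ("Ready", "Running"),        # dispatched by the scheduler
    ("Running", "Ready"),        # preempted (time slice expired)
    ("Running", "Waiting"),      # blocks on I/O or an event
    ("Waiting", "Ready"),        # awaited event completed
    ("Running", "Terminated"),   # process finishes
}

def is_valid(src, dst):
    return (src, dst) in TRANSITIONS

print(is_valid("Running", "Waiting"))   # True
print(is_valid("Waiting", "Running"))   # False: must pass through Ready
```

Note that a waiting process can never go straight back to Running; it must re-enter the ready queue and be dispatched again.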

b) Explain reader-writer problem in brief.

Reader-Writer Problem is a classic synchronization problem that deals with ensuring that
multiple processes can access a shared resource concurrently, such as a database, but in a way
that avoids conflicts.

• Readers: Multiple processes that only need to read from the resource.
• Writers: Processes that need to modify the resource.

Key Points:

1. Readers can share the resource: If no writer is writing, multiple readers can read the
resource simultaneously without interference.
2. Writer exclusivity: A writer needs exclusive access to the resource, meaning no readers or
other writers can access it at the same time.
3. Priority Conflicts: If there are many readers and a writer wants to write, the writer may
need to wait until all readers finish, or vice versa.

There are typically two variations:

• First reader-writer problem: It favors readers. If there are readers reading, new readers can
join without waiting for writers to finish.
• Second reader-writer problem: It favors writers. Writers are given priority to avoid
starvation (i.e., writers waiting indefinitely because of readers continuously arriving).

Synchronization Mechanism: Semaphores or mutexes are used to ensure that readers and
writers do not conflict.
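The first (readers-preference) solution can be sketched with the classic `read_count` scheme: a lock protects the reader count, and the first reader in locks writers out while the last reader out lets them back in. Thread counts and the shared value are illustrative:

```python
import threading

resource = threading.Lock()      # writers need this exclusively
count_lock = threading.Lock()    # protects read_count
read_count = 0
shared_value = 0
snapshots = []

def reader():
    global read_count
    with count_lock:
        read_count += 1
        if read_count == 1:
            resource.acquire()   # first reader locks out writers
    snapshots.append(shared_value)   # read the shared resource
    with count_lock:
        read_count -= 1
        if read_count == 0:
            resource.release()   # last reader lets writers back in

def writer():
    global shared_value
    with resource:               # exclusive access while modifying
        shared_value += 1

threads = [threading.Thread(target=writer) for _ in range(5)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_value)  # 5: every write ran exclusively
```

Because new readers can join while any reader holds the resource, this variant can starve writers, which is exactly what the second (writers-preference) variant fixes.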

c) Consider a reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4. No. of frames = 3. Find out the number of page faults using:
i) LRU (Least Recently Used) Algorithm:

Page Faults:

• 3 → Page fault. (Page frames: 3)


• 2 → Page fault. (Page frames: 3, 2)
• 1 → Page fault. (Page frames: 3, 2, 1)
• 0 → Page fault. (Evict 3) (Page frames: 0, 2, 1)
• 3 → Page fault. (Evict 2) (Page frames: 0, 3, 1)
• 2 → Page fault. (Evict 1) (Page frames: 0, 3, 2)
• 4 → Page fault. (Evict 0) (Page frames: 4, 3, 2)
• 3 → No page fault. (Page frames: 4, 3, 2)
• 2 → No page fault. (Page frames: 4, 3, 2)
• 1 → Page fault. (Evict 4) (Page frames: 1, 3, 2)
• 0 → Page fault. (Evict 3) (Page frames: 1, 0, 2)
• 4 → Page fault. (Evict 2) (Page frames: 1, 0, 4)

Total Page Faults (LRU): 10

ii) OPT (Optimal) Algorithm:

Page Faults:

• 3 → Page fault. (Page frames: 3)
• 2 → Page fault. (Page frames: 3, 2)
• 1 → Page fault. (Page frames: 3, 2, 1)
• 0 → Page fault. (Evict 1, used farthest in the future) (Page frames: 3, 2, 0)
• 3 → No page fault. (Page frames: 3, 2, 0)
• 2 → No page fault. (Page frames: 3, 2, 0)
• 4 → Page fault. (Evict 0, used farthest in the future) (Page frames: 3, 2, 4)
• 3 → No page fault. (Page frames: 3, 2, 4)
• 2 → No page fault. (Page frames: 3, 2, 4)
• 1 → Page fault. (Evict 3, not used again) (Page frames: 1, 2, 4)
• 0 → Page fault. (Evict 2, not used again) (Page frames: 1, 0, 4)
• 4 → No page fault. (Page frames: 1, 0, 4)

Total Page Faults (OPT): 7
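Both traces can be checked mechanically. A sketch of the two algorithms, run on the same reference string and frame count as the question:

```python
def lru_faults(refs, nframes):
    frames, faults = [], 0       # list ordered least -> most recently used
    for page in refs:
        if page in frames:
            frames.remove(page)
            frames.append(page)  # hit: move to most-recently-used end
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)    # evict the least recently used page
            frames.append(page)
    return faults

def opt_faults(refs, nframes):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            # evict the page whose next use is farthest away (or never)
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float("inf")
            frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(lru_faults(refs, 3))  # 10
print(opt_faults(refs, 3))  # 7
```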

Q5) Attempt any One of the following. [1 × 3 = 3]

a) Explain layered operating system in brief with diagram.

A Layered Operating System is designed by dividing the OS into multiple layers, each
performing a specific task. The layers interact with each other, with higher layers interacting
only with adjacent lower layers.

• Layer 0 (Hardware): The actual physical hardware.


• Layer 1 (Kernel): Basic system services like process management, memory management, I/O
management.
• Layer 2 (System Call Interface): Handles communication between the kernel and user
programs.
• Layer 3 (User Applications): The topmost layer where user applications run.

Advantages:

• Easier to maintain and develop.


• Each layer can be modified independently.
• Increases security and modularity.
Diagram:

+----------------------+
| User Applications | (Layer 3)
+----------------------+
| System Call Interface| (Layer 2)
+----------------------+
| Kernel | (Layer 1)
+----------------------+
| Hardware | (Layer 0)
+----------------------+

b) Explain first fit, best fit, worst fit, next fit algorithm.

These are memory allocation strategies used to allocate free memory blocks to processes.

1. First Fit:
o Allocates the first block of memory that is large enough to fit the process.
o Pros: Fast because it searches for a suitable block from the beginning.
o Cons: Can lead to fragmentation (wasted space).
2. Best Fit:
o Allocates the smallest available block that is large enough to accommodate the
process.
o Pros: Minimizes wasted memory.
o Cons: Slower due to needing to search through all blocks.
3. Worst Fit:
o Allocates the largest available block, so the leftover hole is as large as
possible and may still be usable by other processes.
o Pros: Leftover holes are large, so they are less likely to be too small to reuse.
o Cons: Uses up large blocks quickly, so later large requests may fail; overall
use of space can be inefficient.
4. Next Fit:
o Similar to First Fit, but instead of starting from the beginning each time, it starts
the search from the point of the last allocation.
o Pros: Often faster than First Fit because it avoids rescanning the start of the list.
o Cons: Tends to spread allocations across memory, which can increase
fragmentation.

These algorithms are used to manage memory allocation in operating systems and can affect
the performance and fragmentation of memory over time.
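The four strategies can be compared on one free-block list. A sketch, using block sizes from a common textbook example; each function returns the index of the chosen block, or None if nothing fits:

```python
def first_fit(blocks, size):
    for i, b in enumerate(blocks):
        if b >= size:
            return i             # first block large enough
    return None

def best_fit(blocks, size):
    fits = [i for i, b in enumerate(blocks) if b >= size]
    return min(fits, key=lambda i: blocks[i]) if fits else None   # smallest that fits

def worst_fit(blocks, size):
    fits = [i for i, b in enumerate(blocks) if b >= size]
    return max(fits, key=lambda i: blocks[i]) if fits else None   # largest block

def next_fit(blocks, size, start):
    n = len(blocks)
    for k in range(n):           # scan from the last allocation point, wrapping
        i = (start + k) % n
        if blocks[i] >= size:
            return i
    return None

blocks = [100, 500, 200, 300, 600]
print(first_fit(blocks, 212))    # 1 (block of 500)
print(best_fit(blocks, 212))     # 3 (block of 300, smallest that fits)
print(worst_fit(blocks, 212))    # 4 (block of 600, largest)
print(next_fit(blocks, 212, 2))  # 3 (first fit at or after index 2)
```

The same 212-unit request lands in a different block under each strategy, which is precisely why they produce different fragmentation patterns over time.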

