Os QB Final

The document discusses various concepts in operating systems, including process scheduling, the producer-consumer problem, inter-process communication (IPC), mutual exclusion, and synchronization techniques. It explains the importance of these concepts, their mechanisms, and algorithms such as FCFS, Peterson’s solution, and strict alternation. Additionally, it compares synchronization techniques like semaphores, monitors, and message passing in terms of ease of use, performance, deadlock prevention, and scalability.


QUESTION BANK ANSWERS

Q.1. Define Process Scheduling in Operating Systems, explain its importance and Types.

Process Scheduling refers to the way in which the operating system decides which process gets to use the CPU
at any given time. It is a crucial part of an operating system as it ensures efficient utilization of CPU resources
and proper management of processes.

Importance:

1. Efficient CPU Utilization: Proper scheduling ensures that the CPU is never idle when processes are
ready to execute.
2. Fairness: Scheduling helps in distributing CPU time fairly between all processes, preventing starvation.
3. Maximizing Throughput: It aims to maximize the number of processes completed within a given time.
4. Minimizing Response Time: Scheduling algorithms help in improving the response time for interactive
systems.

Types of Scheduling:

1. Preemptive Scheduling: In this, the CPU can be taken away from a running process and assigned to
another process. Examples include Round Robin and Priority Scheduling.
2. Non-Preemptive Scheduling: Once a process starts execution, it runs until it finishes or voluntarily
relinquishes the CPU. Examples include First-Come, First-Served (FCFS) and Shortest Job Next (SJN).

Q.2. Explain the FCFS scheduling algorithm. What is the disadvantage of this scheduling algorithm?

FCFS (First-Come, First-Served) is a non-preemptive scheduling algorithm where processes are executed in
the order they arrive in the ready queue. The first process that arrives gets executed first, and so on.

Working:

 The CPU is allocated to the process that has been in the ready queue the longest.
 No process is interrupted; it runs to completion once it starts.

Disadvantages:

1. Convoy Effect: If a long process arrives first, it can delay the execution of shorter processes that follow.
This leads to poor average waiting time.
2. Non-Optimal: FCFS doesn’t consider the burst time of processes, which can result in long waiting
times for shorter processes.
3. Inefficient for Time-Sharing Systems: FCFS is not suitable for interactive systems where fast
responses are needed.
Q.3. Discuss the Producer Consumer Problem. How does it introduce data inconsistency with
synchronization?

The Producer-Consumer Problem involves two processes, a producer, and a consumer, sharing a common
buffer. The producer generates data and stores it in the buffer, while the consumer takes data from the buffer for
processing.

Problem:

 The producer and consumer must be synchronized to avoid issues like overwriting data or attempting to
consume data when none is available.
 If there is no synchronization, data inconsistency can arise, where the consumer might try to consume
data that has not been produced yet, or the producer might overwrite data before it has been consumed.
 Race Conditions: Without proper synchronization (such as mutex locks), both the producer and
consumer could modify the buffer simultaneously, causing data corruption.

Synchronization Mechanisms:

 Semaphores: Used to manage access to the shared buffer.


 Mutex Locks: Ensure that only one process can access the buffer at a time.
 Condition Variables: Allow synchronization of processes when the buffer is full or empty.

Q.4. Define Inter-Process Communication (IPC) and explain its importance in a multi-process system.

Inter-Process Communication (IPC) refers to the mechanisms that allow processes to communicate with each
other and share data, whether they are running on the same machine or on different machines over a network.

Importance:

1. Data Sharing: IPC allows processes to exchange data and results, which is critical in complex
applications where tasks need to be distributed across multiple processes.
2. Coordination: It ensures that processes work together, for example, in producer-consumer problems or
client-server architectures.
3. Synchronization: IPC provides ways for processes to synchronize their actions, ensuring that resources
are used efficiently without conflicts.
4. Resource Management: It helps in managing shared resources, such as memory or files, without
conflicts.

Common IPC Methods:

 Message Passing: Processes send and receive messages to exchange data.


 Shared Memory: Multiple processes access a common memory space for data exchange.
 Pipes/FIFOs: A one-way communication channel between processes.
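
The pipe mechanism above can be sketched with Python's os.pipe. This is a minimal illustration: one process plays both sender and receiver for brevity, whereas in practice the two ends would belong to different processes (for example, a parent and a child after fork).

```python
import os

# A one-way pipe: r is the read end, w is the write end.
r, w = os.pipe()

os.write(w, b"hello from producer")  # sender writes bytes into the pipe
os.close(w)                          # closing the write end signals end-of-data

data = os.read(r, 1024)              # receiver reads from the other end
os.close(r)
print(data.decode())
```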
Q.5. What is Mutual Exclusion, and how does it relate to the Critical Section Problem?

Mutual Exclusion is a concept in which multiple processes are prevented from simultaneously executing a
critical section of code that accesses shared resources. Only one process can execute the critical section at any
given time, ensuring data integrity.

Critical Section Problem:

 This problem arises when multiple processes share resources and access them concurrently. Without
proper mutual exclusion, race conditions and inconsistent data can occur.
 The critical section problem asks how to design a system in which multiple processes can safely share
resources without violating the constraints of mutual exclusion.

Solution Criteria:

1. Mutual Exclusion: Only one process can be in the critical section at a time.
2. Progress: If no process is executing in its critical section, the selection of the next process to enter
cannot be postponed indefinitely; only processes not in their remainder sections take part in the decision.
3. Bounded Waiting: A process should not have to wait indefinitely to enter the critical section.

Mechanisms for Mutual Exclusion:

 Locks (Mutexes): Ensure that only one process can enter the critical section at a time.
 Semaphores: Can be used to control access to shared resources.
 Monitors: High-level synchronization primitives that combine mutual exclusion with condition
synchronization.

By using mutual exclusion and solving the critical section problem, an operating system can ensure data
consistency and prevent conflicts among processes.

Q. 6. Explain Peterson’s Solution to the Critical Section Problem.

Peterson’s Solution is a classic algorithm for achieving mutual exclusion in the critical section problem. It was
designed for two processes and guarantees that only one of them can enter the critical section at a time, thus
avoiding conflicts and race conditions.

Working:

 Shared variables:
o flag[2]: An array of two boolean values, one for each process, indicating whether the process is
interested in entering the critical section.
o turn: A variable that determines which process has the turn to enter the critical section.
 Algorithm: Before entering the critical section, each process sets its flag to true (indicating it wants to
enter) and then assigns turn to the other process, giving it priority. A process enters the critical section
only when at least one of the following holds:
1. The other process is not interested (flag[other] is false).
2. It is this process's turn (turn == thisProcess).
This ensures mutual exclusion: if both processes try to enter the critical section at the same time, the
one whose turn it is not will wait.

Advantages:

 It’s simple and elegant.


 It guarantees mutual exclusion and avoids deadlock.

Disadvantages:

 It’s limited to two processes.


 It can be inefficient for large numbers of processes due to the busy waiting (polling).
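
The algorithm above can be sketched with two Python threads. This is a toy model: CPython's interpreter lock makes the interleaving effectively sequentially consistent, whereas a real implementation on modern hardware would need memory barriers; the iteration count is arbitrary.

```python
import threading

flag = [False, False]  # flag[i]: process i wants to enter
turn = 0               # whose turn it is to yield
counter = 0            # shared data protected by the critical section

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(200):
        flag[me] = True      # announce interest
        turn = other         # politely give the other side priority
        while flag[other] and turn == other:
            pass             # busy-wait (the main drawback of Peterson's solution)
        counter += 1         # critical section
        flag[me] = False     # leave the critical section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 400: no increment was lost, so mutual exclusion held
```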

Q.7. Explain Strict Alternation Solution for the Critical Section Problem.

Strict Alternation is a simple solution to the critical section problem that alternates between two processes.
One process enters the critical section while the other waits, and then they swap after each iteration.

Working:

 A shared variable (e.g., turn) is used to control the alternation between the two processes.
 Each process will only enter the critical section when it is its turn according to the turn variable.
 The process entering the critical section sets turn to the other process when it exits the critical section.

Algorithm:

 Process 0:
o Wait until turn == 0.
o Enter critical section.
o Set turn = 1.
 Process 1:
o Wait until turn == 1.
o Enter critical section.
o Set turn = 0.

Disadvantages:

 It suffers from inefficiency because if one process finishes quickly and the other takes a long time, the
system will waste CPU time waiting for the alternate turn.
 It doesn’t handle more than two processes.
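
A minimal sketch of strict alternation with two Python threads, where the shared turn variable forces the threads into lockstep (the round count and log list are illustrative):

```python
import threading

turn = 0
log = []  # records which thread entered the critical section, in order

def worker(me, rounds):
    global turn
    for _ in range(rounds):
        while turn != me:
            pass             # busy-wait until it is our turn
        log.append(me)       # critical section
        turn = 1 - me        # hand the turn to the other thread

threads = [threading.Thread(target=worker, args=(i, 5)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(log)  # strictly alternating: [0, 1, 0, 1, ...]
```

Note that if one thread finished its rounds early, the other would spin forever waiting for a turn that never comes back, which is exactly the inefficiency described above.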

Q.8. Explain following terms:1.Semaphores 2. Event Counters 3. Monitors in IPC 4. Message Passing in IPC

 Semaphores:

 A semaphore is a synchronization primitive used to control access to a shared resource. It consists of a


variable that is modified atomically.
 There are two types:
o Binary Semaphore: Used for mutual exclusion (0 or 1).
o Counting Semaphore: Used to control access to a pool of resources (integer values).

Operations:

 Wait (P): Decrements the semaphore value; if the value becomes negative, the calling process blocks.
 Signal (V): Increments the semaphore value; if any process is blocked on the semaphore, one of them
is unblocked.

 Event Counters:

 An event counter is used in synchronization to count events and control the order of execution of
processes.
 Often used in systems that require ordering or counting of events, such as managing resource allocation
or coordinating multiple processes.

 Monitors in IPC:

 A monitor is a high-level synchronization construct that provides mutual exclusion and condition
synchronization. A monitor consists of:
o A data structure shared by multiple processes.
o A set of operations that can be executed on the shared data, but only one process can execute at a
time.
o Condition variables to allow processes to wait for certain conditions.
 It simplifies complex synchronization issues by encapsulating data and operations.

 Message Passing in IPC:

 Message Passing is an IPC technique where processes communicate by sending and receiving
messages. It can occur between processes on the same machine or across different machines in a
distributed system.
 Common methods of message passing include synchronous (blocking) or asynchronous (non-
blocking) communication.
 It provides a simple and effective method of communication but can incur overhead due to copying data
between processes.

Q.9 What is the Critical Section in Inter-Process Communication (IPC)?

 The Critical Section refers to a section of code or set of operations in which a process accesses shared
resources that must not be concurrently accessed by multiple processes.
 The critical section problem arises when multiple processes attempt to access shared resources
simultaneously, potentially leading to inconsistent or corrupted data.
 In IPC, the critical section problem is addressed through synchronization mechanisms like semaphores,
mutexes, or condition variables, ensuring that only one process enters the critical section at a time.
Proper handling of critical sections is crucial for data consistency and system stability.

Q.10 Compare and contrast different synchronization techniques (semaphores, monitors, message
passing) with respect to:

 Ease of use
 Performance
 Deadlock prevention
 Scalability in multi-core systems

1. Ease of use:

 Semaphores: Low-level synchronization primitive that can be difficult to use correctly. Requires careful
handling of wait and signal operations.
 Monitors: Higher-level abstraction than semaphores, making them easier to use because they combine
synchronization with encapsulation of data. They provide built-in mechanisms for mutual exclusion and
condition synchronization.
 Message Passing: Often simpler for distributed systems since it does not require shared memory but
may be more complex for managing communication protocols and ensuring data consistency.

2. Performance:

 Semaphores: Typically efficient in simple, low-level synchronization but may cause overhead with
busy waiting or excessive context switching.
 Monitors: Generally good performance, as they are high-level constructs designed to reduce the
complexity of managing synchronization manually.
 Message Passing: Often incurs higher overhead compared to shared memory-based solutions due to the
need to copy data between processes, especially in distributed systems.

3. Deadlock prevention:

 Semaphores: Susceptible to deadlock if not used carefully (e.g., circular waits). Additional measures
must be taken to prevent this.
 Monitors: Less prone to deadlock since the monitor constructs are designed to manage the execution
flow more clearly. However, improper use of condition variables can still lead to deadlocks.
 Message Passing: Deadlock is less likely as there is no shared memory and communication is done via
messages. However, care must be taken to avoid issues such as circular dependencies in message
exchanges.

4. Scalability in multi-core systems:

 Semaphores: Works well in multi-core systems, especially when the number of processes is relatively
small. However, contention for semaphores may degrade performance as the system scales.
 Monitors: Scales better in multi-core systems because the higher-level abstraction simplifies
synchronization. However, there may still be contention for locks, affecting performance.
 Message Passing: Highly scalable, especially for distributed systems. However, the performance may
suffer if processes are closely coupled or there is heavy communication.

Q.11. How do semaphores help in solving the Producer-Consumer problem, and what is the role of the
buffer in this solution?

In the Producer-Consumer Problem, semaphores help synchronize the producer and consumer processes to
ensure that data is produced and consumed correctly without data inconsistency or conflicts.

Semaphores in Producer-Consumer:
1. Binary Semaphore (Mutex): Ensures mutual exclusion to the shared buffer. Only one process (either
producer or consumer) can access the buffer at a time.
2. Counting Semaphores:
o Empty: Keeps track of the empty slots in the buffer. It is initialized to the total capacity of the
buffer.
o Full: Tracks the number of filled slots in the buffer. It is initialized to 0.

Working:

 Producer:
o Wait on the Empty semaphore (decrement), indicating that there is space in the buffer.
o Enter the critical section (access the buffer) to add an item.
o Signal the Full semaphore (increment), indicating that a new item is produced.
 Consumer:
o Wait on the Full semaphore (decrement), indicating that there is at least one item to consume.
o Enter the critical section (access the buffer) to consume an item.
o Signal the Empty semaphore (increment), indicating that there is space for a new item.

The buffer serves as the shared resource between the producer and the consumer. It holds the items that are
produced and consumed, and semaphores ensure that no process accesses the buffer at the wrong time,
preventing race conditions and ensuring smooth operation.
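
The scheme above can be sketched directly with Python's threading.Semaphore; the buffer capacity, item count, and variable names are illustrative choices.

```python
import threading
from collections import deque

CAPACITY = 3
buffer = deque()
mutex = threading.Semaphore(1)         # binary semaphore: mutual exclusion
empty = threading.Semaphore(CAPACITY)  # counts empty slots in the buffer
full = threading.Semaphore(0)          # counts filled slots in the buffer
consumed = []

def producer(n):
    for item in range(n):
        empty.acquire()        # wait until there is a free slot
        with mutex:            # critical section: access the buffer
            buffer.append(item)
        full.release()         # signal: one more item available

def consumer(n):
    for _ in range(n):
        full.acquire()         # wait until there is at least one item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()        # signal: one more free slot

p = threading.Thread(target=producer, args=(10,))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, ..., 9]: every item consumed exactly once, in order
```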

Q.12. How can message passing be used to solve the Producer-Consumer problem?

Message Passing can be used to solve the Producer-Consumer problem by facilitating communication between
the producer and consumer processes through the exchange of messages instead of directly sharing a buffer.

Steps:

1. Producer sends a message to the consumer, containing the data (item) it wants to produce. The
producer can wait for an acknowledgment from the consumer indicating that the message has been
received.
2. Consumer receives the message, processes the item, and may send an acknowledgment back to the
producer.
3. Both processes use a message queue or mailbox system, which acts as a medium for communication.
The message queue can act as a buffer, holding messages produced by the producer until the consumer
can process them.
4. Synchronization: The producer waits if the message queue is full, and the consumer waits if the queue
is empty. Semaphores or other synchronization mechanisms can be used to manage waiting conditions
and ensure that the message queue is accessed safely.

Advantages:

 No shared memory: Unlike traditional producer-consumer solutions using a shared buffer, message
passing avoids issues of memory access conflicts.
 Decoupling: The producer and consumer processes do not need to run on the same machine or be tightly
coupled, which is particularly useful in distributed systems.

Challenges:

 Overhead in message passing (especially in distributed systems).


 Need for managing the message queue and synchronization.
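
A minimal sketch of this approach using Python's queue.Queue as the mailbox. The bounded size and the None sentinel are illustrative choices; put() blocks when the queue is full and get() blocks when it is empty, providing the synchronization described above automatically.

```python
import queue
import threading

mailbox = queue.Queue(maxsize=2)  # bounded message queue between the processes
received = []

def producer():
    for item in range(5):
        mailbox.put(item)  # blocks if the mailbox is full
    mailbox.put(None)      # sentinel message: no more data

def consumer():
    while True:
        msg = mailbox.get()  # blocks if the mailbox is empty
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # [0, 1, 2, 3, 4]
```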

Q.13. Explain Peterson's solution to the critical section problem and highlight how it differs from the
strict alternation approach

Peterson’s Solution is a well-known mutual exclusion algorithm that solves the critical section problem for two
processes. It uses two shared variables:

 flag[2]: A boolean array where each index represents whether the respective process wants to enter the
critical section.
 turn: A variable that indicates whose turn it is to enter the critical section.

Working:

 Both processes set their flag to true, indicating that they want to enter the critical section.
 Each process then sets turn to the other process, allowing the other process to enter the critical section
if it’s their turn.
 A process only enters the critical section if either the other process isn't interested or it’s its turn
according to the turn variable.

This solution guarantees mutual exclusion because:

 If both processes want to enter, only one will succeed based on the value of the turn variable.
 There is no deadlock because a process cannot be blocked indefinitely. It either gets its turn or the other
process completes execution.

Differences from Strict Alternation:

 Strict Alternation uses a single variable turn to alternate between two processes, but without
considering whether both processes want to enter the critical section.
o Strict Alternation: Forces processes to alternate, regardless of whether one process is done or
not.
o Peterson’s Solution: More flexible, allowing a process to pass control to the other process only
when necessary (if the other process also wants to enter).

Advantages of Peterson’s Solution:

 More efficient in cases where one process doesn't need to continuously alternate if it’s done quickly
(prevents unnecessary waiting).
 More general than strict alternation, as it can handle both processes wanting to enter the critical section.

Q.14. Perform the shortest job first (SJF) pre-emptive scheduling based on the following data.
Calculate the waiting time and turnaround time for the individual processes. Also find the average waiting
time and average turnaround time.

Process Arrival Time Burst Time


P1 0 7
P2 2 4
P3 4 1
P4 5 4

Steps for Shortest Job First (Pre-emptive) Scheduling:

1. At time 0: Only P1 is available, so it starts executing.


2. At time 2: P2 arrives. The remaining burst time of P1 is 5, and P2’s burst time is 4. Since P2 has a shorter burst
time, it preempts P1 and starts executing.
3. At time 4: P3 arrives, and it has a burst time of 1, which is shorter than P2's remaining burst time of 2. So, P3
preempts P2 and starts executing.
4. At time 5: P3 completes its one unit of execution (it had the shortest burst). P4 arrives with a burst time of 4.
5. At time 5: Of the remaining processes P1 (5 remaining), P2 (2 remaining), and P4 (4), P2 has the shortest
remaining time, so P2 resumes.
6. At time 7: P2 completes execution. Now, the shortest burst time is for P4 (4), so P4 starts executing.
7. At time 11: P4 finishes its execution, leaving P1 (with 5 remaining) to execute next.
8. At time 16: P1 finishes execution.

Execution Order:

P1 → P2 → P3 → P2 → P4 → P1

Now, let's calculate the Waiting Time and Turnaround Time for each process:

Completion Times (CT):

 P1: Completes at time 16.


 P2: Completes at time 7.
 P3: Completes at time 5.
 P4: Completes at time 11.

Turnaround Times (TAT):

TAT = Completion Time − Arrival Time

 P1: TAT = 16 - 0 = 16
 P2: TAT = 7 - 2 = 5
 P3: TAT = 5 - 4 = 1
 P4: TAT = 11 - 5 = 6
Waiting Times (WT):

WT = Turnaround Time − Burst Time

 P1: WT = 16 - 7 = 9
 P2: WT = 5 - 4 = 1
 P3: WT = 1 - 1 = 0
 P4: WT = 6 - 4 = 2

Average Waiting Time:

Average WT = (9 + 1 + 0 + 2) / 4 = 12 / 4 = 3

Average Turnaround Time:

Average TAT = (16 + 5 + 1 + 6) / 4 = 28 / 4 = 7

Final Results:

 Waiting Time for P1: 9


 Waiting Time for P2: 1
 Waiting Time for P3: 0
 Waiting Time for P4: 2
 Average Waiting Time: 3
 Turnaround Time for P1: 16
 Turnaround Time for P2: 5
 Turnaround Time for P3: 1
 Turnaround Time for P4: 6
 Average Turnaround Time: 7
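
The schedule above can be checked with a small simulator that advances time one unit at a time and always runs the arrived process with the least remaining burst (the process data follows the walkthrough):

```python
# Pre-emptive SJF (shortest-remaining-time-first) simulator.
# Process data: name -> (arrival time, burst time).
procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}
remaining = {p: burst for p, (_, burst) in procs.items()}
completion = {}
t = 0
while remaining:
    ready = [p for p in remaining if procs[p][0] <= t]
    current = min(ready, key=lambda p: remaining[p])  # least remaining time wins
    remaining[current] -= 1  # run the chosen process for one time unit
    t += 1
    if remaining[current] == 0:
        completion[current] = t
        del remaining[current]

tat = {p: completion[p] - procs[p][0] for p in procs}  # turnaround times
wt = {p: tat[p] - procs[p][1] for p in procs}          # waiting times
print(completion)             # {'P1': 16, 'P2': 7, 'P3': 5, 'P4': 11}
print(sum(wt.values()) / 4)   # 3.0
print(sum(tat.values()) / 4)  # 7.0
```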

Q.15. What are the differences between a process and a thread?

A process and a thread are both fundamental concepts in operating systems, but they differ in terms of
structure and execution.

Differences:

1. Definition:
o A process is an independent program in execution that has its own address space, code, data, and
resources.
o A thread is the smallest unit of execution within a process. Threads within the same process
share the same address space and resources.
2. Memory and Resources:
o Processes have their own memory space and resources (such as file descriptors).
o Threads share the same memory space, file descriptors, and resources within the same process.
3. Overhead:
o Processes are heavier because they require more system resources, including their own memory
space.
o Threads are lightweight, requiring less overhead as they share memory and resources.
4. Communication:
o Inter-process communication (IPC) between processes is more complex and slower.
o Threads can communicate with each other more efficiently since they share the same memory
space.
5. Example:
o Process: Running a browser, which involves processes like the browser window, each tab, etc.
o Thread: Each tab in a browser could be running as a separate thread within the same process.

Q.16. What is the difference between a Monolithic Kernel and a Microkernel? Provide examples.

 Address Space: In a microkernel, user services and kernel services are kept in separate address
spaces. In a monolithic kernel, both user services and kernel services are kept in the same address space.
 Design and Implementation: A microkernel OS is complex to design; a monolithic OS is easy to
design and implement.
 Size: Microkernels are smaller in size; a monolithic kernel is larger than a microkernel.
 Functionality: It is easier to add new functionality to a microkernel; it is difficult to add new
functionality to a monolithic kernel.
 Coding: Designing a microkernel requires more code; a monolithic kernel needs less code in
comparison.
 Failure: Failure of one component does not affect the working of a microkernel; failure of one
component in a monolithic kernel can bring down the entire system.
 Processing Speed: Execution speed is lower in a microkernel and higher in a monolithic kernel.
 Extensibility: A microkernel is easy to extend; a monolithic kernel is not easy to extend.
 Debugging: Debugging is simpler in a microkernel and more difficult in a monolithic kernel.
 Maintenance: A microkernel is simple to maintain; a monolithic kernel needs extra time and
resources for maintenance.
 Message Passing and Context Switching: A microkernel requires message passing and context
switching between its user-space servers; a monolithic kernel does not require them while the kernel is
working.
 Services: A microkernel offers only IPC and low-level device management services; a monolithic
kernel contains all of the operating system's services.
 Examples: Microkernel: Mac OS. Monolithic kernel: Microsoft Windows 95.

Q.17. What is the main difference between preemptive and non-preemptive scheduling?

In preemptive scheduling, the operating system can take the CPU away from a running process (for
example, when a higher-priority process arrives or a time quantum expires) and assign it to another process.
In non-preemptive scheduling, once a process is allocated the CPU, it keeps the CPU until it terminates or
voluntarily relinquishes it (for example, by blocking for I/O).

Q.18. Explain the concept of mutual exclusion in the context of critical sections. How does mutual
exclusion prevent race conditions?

Mutual Exclusion ensures that only one process or thread can access a critical section (shared resource) at any
given time, preventing conflicts or data corruption from concurrent accesses.

1. Critical Section: A portion of code that accesses shared resources.


2. Race Condition: A situation where two or more processes or threads access shared data concurrently
and the outcome depends on the order of execution.

How mutual exclusion prevents race conditions:

 By allowing only one process to enter the critical section at a time, mutual exclusion ensures that there
are no simultaneous accesses to shared data, preventing inconsistent or corrupt states.
 Mechanisms like locks, semaphores, and monitors are used to enforce mutual exclusion, ensuring that
only one process can execute critical section code while others wait.
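
A minimal sketch of lock-based mutual exclusion in Python; without the lock, the two threads' interleaved read-modify-write operations on the shared counter could lose updates. The iteration count is arbitrary.

```python
import threading

lock = threading.Lock()
counter = 0  # shared data

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:        # only one thread may hold the lock at a time
            counter += 1  # critical section: read, modify, write

threads = [threading.Thread(target=worker, args=(50000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 100000: no update was lost, so there was no race condition
```
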
Q.19. What is strict alternation in the context of process synchronization? Discuss the advantages and
disadvantages of using strict alternation for mutual exclusion.

Strict Alternation is a synchronization method where two processes take turns executing their critical sections.
The processes alternate strictly, and only one process can execute at a time.

1. Mechanism:
o One process executes its critical section, and then the other process executes its critical section.
o The processes rely on a shared variable (e.g., turn) to indicate whose turn it is.
2. Advantages:
o Simple and easy to implement for two processes.
o Guaranteed mutual exclusion, as only one process runs in the critical section at a time.
3. Disadvantages:
o Inefficient: If one process finishes quickly and the other takes a long time, strict alternation
forces the system to idle waiting for the turn.
o Non-Optimal for Real-World Scenarios: It doesn't account for processes that may need more
or less time, leading to wasted CPU cycles.

Q.20. Given the following process arrival times and burst times, calculate the Turnaround Time, Waiting
Time, and Completion Time for the processes using FCFS scheduling.

Process Arrival Time Burst Time


P1 0 5
P2 1 3
P3 2 2
P4 3 1

First-Come-First-Serve (FCFS) scheduling executes processes in the order of their arrival times.

Step-by-step Execution:

1. P1 arrives at time 0 and starts executing immediately. It runs for 5 units of time (completion at time 5).
2. P2 arrives at time 1 and starts after P1 finishes (starts at time 5). It runs for 3 units (completion at time 8).
3. P3 arrives at time 2 and starts after P2 finishes (starts at time 8). It runs for 2 units (completion at time 10).
4. P4 arrives at time 3 and starts after P3 finishes (starts at time 10). It runs for 1 unit (completion at time 11).

Completion Times:

 P1: 5
 P2: 8
 P3: 10
 P4: 11

Turnaround Times (TAT):

TAT = Completion Time − Arrival Time
 P1: TAT = 5 - 0 = 5
 P2: TAT = 8 - 1 = 7
 P3: TAT = 10 - 2 = 8
 P4: TAT = 11 - 3 = 8

Waiting Times (WT):

WT = TAT − Burst Time

 P1: WT = 5 - 5 = 0
 P2: WT = 7 - 3 = 4
 P3: WT = 8 - 2 = 6
 P4: WT = 8 - 1 = 7

Results:

 Waiting Times: P1 = 0, P2 = 4, P3 = 6, P4 = 7
 Turnaround Times: P1 = 5, P2 = 7, P3 = 8, P4 = 8
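
The FCFS computation above can be sketched in a few lines of Python (the tuple layout and variable names are illustrative):

```python
# FCFS: processes run in arrival order, each starting when the previous finishes.
# Process data: (name, arrival time, burst time), already sorted by arrival.
procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 2), ("P4", 3, 1)]
t = 0
results = {}
for name, arrival, burst in procs:
    t = max(t, arrival) + burst  # completion time (CPU may idle until arrival)
    tat = t - arrival            # turnaround = completion - arrival
    wt = tat - burst             # waiting = turnaround - burst
    results[name] = (t, tat, wt)
print(results)  # {'P1': (5, 5, 0), 'P2': (8, 7, 4), 'P3': (10, 8, 6), 'P4': (11, 8, 7)}
```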

Q.21 Given the following set of processes with their burst times, calculate the average waiting time and
average turnaround time using the Round Robin algorithm with a time quantum of 3 units.

Process Arrival Time Burst Time


P1 0 6
P2 1 8
P3 2 7
P4 3 3

Round Robin Scheduling (Time Quantum = 3):

 P1 executes for 3 units (from 0 to 3), remaining burst time = 3.


 P2 executes for 3 units (from 3 to 6), remaining burst time = 5.
 P3 executes for 3 units (from 6 to 9), remaining burst time = 4.
 P4 executes for 3 units (from 9 to 12), remaining burst time = 0 (finishes execution).
 P1 executes for 3 units (from 12 to 15), remaining burst time = 0 (finishes execution).
 P2 executes for 3 units (from 15 to 18), remaining burst time = 2.
 P3 executes for 3 units (from 18 to 21), remaining burst time = 1.
 P2 executes for 2 units (from 21 to 23), finishes execution.
 P3 executes for 1 unit (from 23 to 24), finishes execution.

Completion Times:

 P1: 15, P2: 23, P3: 24, P4: 12

Turnaround Times (TAT = Completion Time − Arrival Time):

 P1: 15 − 0 = 15, P2: 23 − 1 = 22, P3: 24 − 2 = 22, P4: 12 − 3 = 9

Waiting Times (WT = TAT − Burst Time):

 P1: 15 − 6 = 9, P2: 22 − 8 = 14, P3: 22 − 7 = 15, P4: 9 − 3 = 6

Average Waiting Time = (9 + 14 + 15 + 6) / 4 = 44 / 4 = 11
Average Turnaround Time = (15 + 22 + 22 + 9) / 4 = 68 / 4 = 17
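
The Round Robin trace above can be reproduced with a small simulator (quantum = 3). Newly arrived processes join the ready queue before a preempted process is re-queued, matching the trace; the code assumes the CPU is never idle, which holds for this data set.

```python
from collections import deque

procs = {"P1": (0, 6), "P2": (1, 8), "P3": (2, 7), "P4": (3, 3)}
quantum = 3
remaining = {p: burst for p, (_, burst) in procs.items()}
arrivals = sorted(procs, key=lambda p: procs[p][0])
ready = deque()
completion = {}
t = 0
i = 0  # index of the next process to arrive
while len(completion) < len(procs):
    while i < len(arrivals) and procs[arrivals[i]][0] <= t:
        ready.append(arrivals[i]); i += 1  # admit processes that have arrived
    p = ready.popleft()
    run = min(quantum, remaining[p])       # run for one quantum or until done
    t += run
    remaining[p] -= run
    while i < len(arrivals) and procs[arrivals[i]][0] <= t:
        ready.append(arrivals[i]); i += 1  # arrivals during this time slice
    if remaining[p] == 0:
        completion[p] = t
    else:
        ready.append(p)                    # preempted: back of the queue

tat = {p: completion[p] - procs[p][0] for p in procs}
wt = {p: tat[p] - procs[p][1] for p in procs}
print(completion)  # P1 at 15, P2 at 23, P3 at 24, P4 at 12
```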

Q.22. Define the Operating System.

An Operating System (OS) is system software that manages computer hardware and software resources and provides
common services for computer programs. It acts as an intermediary between users and the computer hardware,
facilitating resource management, process scheduling, file management, and handling input/output operations.
Q.23. What are the various main functions of OS?

 Process Management: Handles processes, including creation, scheduling, and termination.


 Memory Management: Allocates and manages memory to processes.
 File Management: Manages files and directories, providing access and storage.
 Device Management: Manages device communication via drivers.
 Security and Access Control: Ensures that unauthorized users do not access the system.
 User Interface: Provides a user interface (CLI or GUI).

Q.24. Describe the structure of the Process Control Block (PCB).

The Process Control Block (PCB) contains information about a process, including:

 Process ID (PID).
 Process state (e.g., running, waiting).
 Program counter (address of the next instruction).
 CPU registers (context of the process).
 Memory management information (e.g., page tables).
 I/O status (e.g., list of open files).
 Scheduling information (e.g., priority).

Q.25. What do you understand about Context Switching?

Context Switching is the process of saving and loading the state (context) of a CPU so that a process can be
suspended and another process can be resumed. It involves saving the process's state (e.g., program counter,
registers) in its PCB and loading the saved state of another process.

Q.26. What do you understand about threads? Explain the concept of multithreading in an Operating System.

A thread is a lightweight unit of execution within a process. Multiple threads within the same process share the
same memory space but have their own execution contexts (e.g., program counter, registers).

Multithreading in an operating system enables multiple threads within a process to run concurrently,
improving performance, especially in multi-core systems. Each thread can execute independently, allowing for
tasks like parallel processing and better CPU utilization.

Q. 27. Differentiate Multitasking and Multiprocessing.

 Definition: Multitasking is the execution of more than one task apparently simultaneously on a single
system; multiprocessing is the availability of more than one processor per system, which can execute
several sets of instructions in parallel.
 Number of CPUs: One in multitasking; more than one in multiprocessing.
 Job processing time: Moderate in multitasking; less in multiprocessing.
 Execution: In multitasking, jobs are executed one at a time (interleaved); in multiprocessing, more
than one process can be executed at a time.
 Economy: Multitasking is economical; multiprocessing is less economical.
 Number of users: Usually more than one in multitasking; one or more in multiprocessing.
 Throughput: Moderate in multitasking; maximum in multiprocessing.
 Efficiency: Moderate in multitasking; maximum in multiprocessing.
 Types: Multitasking is of two types: single-user multitasking and multi-user multitasking.
Multiprocessing is of two types: symmetric multiprocessing and asymmetric multiprocessing.
